
We can expect a decision very soon from the High Court of Ireland on last summer’s Irish Data Protection Commission (“IDPC”) decision that placed serious impediments on the transfer of data across the Atlantic. That decision, coupled with the July 2020 Court of Justice of the European Union (“CJEU”) decision to invalidate the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.

In 2015, the CJEU’s Schrems decision invalidated the longstanding “safe harbor” agreement between the EU and U.S. that had been designed to ensure data transfers between the two zones complied with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In its decision, the court reasoned that U.S. foreign intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens’ data.

The IDPC’s decision employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens’ data, then standard contractual clauses—an alternative mechanism firms use to transfer data—are incapable of satisfying the requirements of EU law.

The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms practically incapable of doing business in the EU. In the slightly less bad case, firms could be forced to completely localize their data and disrupt the economies of scale that flow from being able to process global data in a unified manner. In any case, the costs of compliance will be massive.

But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.

It seems unlikely that the Irish High Court would simply overturn the IDPC’s ruling. Instead, the IDPC’s decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.

After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days, but practically speaking, it would more likely occur over weeks or months. Assuming we get a decision from the Irish High Court before the end of April, that puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.

That’s not great, but it is not an impossible hurdle to overcome and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.

  1. U.S. data collection on EU citizens needs to be proportional to the necessities of intelligence gathering. Currently, the U.S. intelligence agencies have wide latitude to collect a large amount of data.
  2. The ombudsperson created under the Privacy Shield agreement to administer foreign citizens’ data requests was not sufficiently insulated from the political process; EU citizens need an adequate avenue of redress.

As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.

Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.

Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority on policymakers’ radars.

(Note: edited to clarify that the Irish High Court is not reviewing SCCs directly and that the CLOUD Act would impose practical, rather than legal, barriers for firms.)

Amazingly enough, at a time when legislative proposals for new antitrust restrictions are rapidly multiplying—see the Competition and Antitrust Law Enforcement Reform Act (CALERA), for example—Congress is simultaneously giving serious consideration to granting antitrust immunity to a price-fixing cartel among members of the news media, thereby authorizing what the late Justice Antonin Scalia termed “the supreme evil of antitrust: collusion.” What accounts for this bizarre development?

Discussion

The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:

We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …

A strong, diverse, free press is critical for any successful democracy. …

Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.

This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.

The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.

Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”

In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):

In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannet Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.

The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Thom. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee Hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.

Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.

Yun concludes by summarizing the case against this legislation (citations omitted):

Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.

Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.

Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.

There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.

Conclusion

Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.

This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as in the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.

In sum, though the JCPA claims to fly a “public interest” flag, it is just another private-interest bill promoted by well-organized rent seekers that would harm consumer welfare and undermine innovation.

In the wake of its departure from the European Union, the United Kingdom will have the opportunity to enter into new free trade agreements (FTAs) with its international trading partners that lower existing tariff and non-tariff barriers. Achieving major welfare-enhancing reductions in trade restrictions will not be easy. Trade negotiations pose significant political sensitivities, such as those arising from the high levels of protection historically granted certain industry sectors, particularly agriculture.

Nevertheless, the political economy of protectionism suggests that, given deepening globalization and the sudden change in U.K. trade relations wrought by Brexit, the outlook for substantial liberalization of U.K. trade has become much brighter. Below, I address some of the key challenges facing U.K. trade negotiators as they seek welfare-enhancing improvements in trade relations and offer a proposal to deal with novel trade distortions in the least protectionist manner.

Two New Challenges Affecting Trade Liberalization

In addition to traditional trade issues, such as tariff levels and industry sector-specific details, U.K. trade negotiators—indeed, trade negotiators from all nations—will have to confront two relatively new and major challenges that are creating significant frictions.

First, behind-the-border anticompetitive market distortions (ACMDs) have largely replaced tariffs as the preferred means of protection in many areas. As I explained in a previous post on this site (citing an article by trade-law scholar Shanker Singham and me), existing trade and competition law have not been designed to address the ACMD problem:

[I]nternational trade agreements simply do not reach a variety of anticompetitive welfare-reducing government measures that create de facto trade barriers by favoring domestic interests over foreign competitors. Moreover, many of these restraints are not in place to discriminate against foreign entities, but rather exist to promote certain favored firms. We dub these restrictions “anticompetitive market distortions” or “ACMDs,” in that they involve government actions that empower certain private interests to obtain or retain artificial competitive advantages over their rivals, be they foreign or domestic. ACMDs are often a manifestation of cronyism, by which politically-connected enterprises successfully pressure government to shield them from effective competition, to the detriment of overall economic growth and welfare. …

As we emphasize in our article, existing international trade rules have been unable to reach ACMDs, which include: (1) governmental restraints that distort markets and lessen competition; and (2) anticompetitive private arrangements that are backed by government actions, have substantial effects on trade outside the jurisdiction that imposes the restrictions, and are not readily susceptible to domestic competition law challenge. Among the most pernicious ACMDs are those that artificially alter the cost-base as between competing firms. Such cost changes will have large and immediate effects on market shares, and therefore on international trade flows.

Second, in recent years, the trade remit has expanded to include “nontraditional” issues such as labor, the environment, and now climate change. These concerns have generated support for novel tariffs that could help promote protectionism and harmful trade distortions. As explained in a recent article by the Special Trade Commission advisory group (former senior trade and antitrust officials who have provided independent policy advice to the U.K. government):

[The rise of nontraditional trade issues] has renewed calls for border tax adjustments or dual tariffs on an ex-ante basis. This is in sharp tension with the W[orld Trade Organization’s] long-standing principle of technological neutrality, and focus on outcomes as opposed to discriminating on the basis of the manner of production of the product. The problem is that it is too easy to hide protectionist impulses into concerns about the manner of production, and once a different tariff applies, it will be very difficult to remove. The result will be to significantly damage the liberalisation process itself leading to severe harm to the global economy at a critical time as we recover from Covid-19. The potentially damaging effects of ex ante tariffs will be visited most significantly in developing countries.

Dealing with New Trade Challenges in the Least Protectionist Manner

A broad approach to U.K. trade liberalization that also addresses the two new trade challenges is advanced in a March 2 report by the U.K. government’s Trade and Agricultural Commission (TAC, an independent advisory agency established in 2020). Although addressed primarily to agricultural trade, the TAC report enunciates principles applicable to U.K. trade policy in general, considering the impact of ACMDs and nontraditional issues. Key aspects of the TAC report are summarized in an article by Shanker Singham (the scholar who organized and convened the Special Trade Commission and who also served as a TAC commissioner):

The heart of the TAC report’s import policy contains an innovative proposal that attempts to simultaneously promote a trade liberalising agenda in agriculture, while at the same time protecting the UK’s high standards in food production and ensuring the UK fully complies with WTO rules on animal and plant health, as well as technical regulations that apply to food trade.

This proposal includes a mechanism to deal with some of the most difficult issues in agricultural trade which relate to animal welfare, environment and labour rules. The heart of this mechanism is the potential for the application of a tariff in cases where an aggrieved party can show that a trading partner is violating agreed standards in an FTA.

The result of the mechanism is a tariff based on the scale of the distortion which operates like a trade remedy. The mechanism can also be used offensively where a country is preventing market access by the UK as a result of the market distortion, or defensively where a distortion in a foreign market leads to excess exports from that market. …

[T]he tariff would be calibrated to the scale of the distortion and would apply only to the product category in which the distortion is occurring. The advantage of this over a more conventional trade remedy is that it is based on cost as opposed to price and is designed to remove the effects of the distorting activity. It would not be applied on a retaliatory basis in other unrelated sectors.

In exchange for this mechanism, the UK commits to trade liberalisation and, within a reasonable timeframe, zero tariffs and zero quotas. This in turn will make the UK’s advocacy of higher standards in international organisations much more credible, another core TAC proposal.

The TAC report also notes that behind the border barriers and anti-competitive market distortions (“ACMDs”) have the capacity to damage UK exports and therefore suggests a similar mechanism or set of disciplines could be used offensively. Certainly, where the ACMD is being used to protect a particular domestic industry, using the ACMD mechanism to apply a tariff for the exports of that industry would help, but this may not apply where the purpose is protective, and the industry does not export much.

I would argue that in this case, it would be important to ensure that UK FTAs include disciplines on these ACMDs which if breached could lead to dispute settlement and the potential for retaliatory tariffs for sectors in the UK’s FTA partner that do export. This is certainly normal WTO-sanctioned practice, and could be used here to encourage compliance. It is clear from the experience in dealing with countries that engage in ACMDs for trade or competition advantage that unless there are robust disciplines, mere hortatory language would accomplish little or nothing.

But this sort of mechanism with its concomitant commitment to freer trade has much wider potential application than just UK agricultural trade policy. It could also be used to solve a number of long standing trade disputes such as the US-China dispute, and indeed the most vexed questions in trade involving environment and climate change in ways that do not undermine the international trading system itself.

This is because the mechanism is based on an ex post tariff as opposed to an ex ante one which contains within it the potential for protectionism, and is prone to abuse. Because the tariff is actually calibrated to the cost advantage which is secured as a result of the violation of agreed international standards, it is much more likely that it will be simply limited to removing this cost advantage as opposed to becoming a punitive measure that curbs ordinary trade flows.

It is precisely this type of problem solving and innovative thinking that the international trading system needs as it faces a range of challenges that threaten liberalisation itself and the hard-won gains of the post war GATT/WTO system itself. The TAC report represents UK leadership that has been sought after since the decision to leave the EU. It has much to commend it.

Assessment and Conclusion

Even when administered by committed free traders, real-world trade liberalization is an exercise in welfare optimization, subject to constraints imposed by the actions of organized interest groups expressed through the political process. The rise of new coalitions (such as organizations committed to specified environmental goals, including limiting global warming) and the proliferation of ACMDs further complicate the trade negotiation calculus.

Fortunately, recognizing the “reform moment” created by Brexit, free trade-oriented experts (in particular, the TAC, supported by the Special Trade Commission) have recommended that the United Kingdom pursue a bold move toward zero tariffs and quotas. Narrow exceptions to this policy would involve after-the-fact tariffications to offset (1) the distortive effects of ACMDs and (2) derogation from rules embodying nontraditional concerns, such as environmental commitments. Such tariffications would be limited and cost-based, and, as such, welfare-superior to ex ante tariffs calibrated to price.
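The arithmetic behind such a cost-based, after-the-fact tariff can be made concrete. The following is a hypothetical sketch (not drawn from the TAC report itself, whose mechanism is far more detailed): the offsetting tariff is set equal to the artificial cost advantage a distortion confers, and to nothing more, so it removes the distortion’s effect without becoming a punitive measure.

```python
def calibrated_tariff(undistorted_unit_cost: float,
                      distorted_unit_cost: float) -> float:
    """Per-unit ex post tariff calibrated to the cost advantage
    conferred by a market distortion (e.g., a subsidy).

    If the distortion confers no cost advantage, no tariff applies;
    the tariff is never larger than the advantage itself, so it
    offsets the distortion without curbing ordinary trade flows.
    """
    advantage = undistorted_unit_cost - distorted_unit_cost
    return max(advantage, 0.0)


# Illustration: a distortion lowers a producer's unit cost from 100
# to 85, so the offsetting tariff is 15 per unit -- no more, no less.
print(calibrated_tariff(100.0, 85.0))   # 15.0
print(calibrated_tariff(100.0, 105.0))  # 0.0 (no advantage, no tariff)
```

The contrast with an ex ante tariff is that nothing here depends on the price charged or on the manner of production; only the measured cost differential attributable to the distortion matters, which is what limits the scope for protectionist abuse.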

While the details need to be worked out, the general outlines of this approach represent a thoughtful and commendable market-oriented effort to secure substantial U.K. trade liberalization, subject to unavoidable constraints. More generally, one would hope that other jurisdictions (including the United States) take favorable note of this development as they generate their own trade negotiation policies. Stay tuned.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Seth L. Cooper is director of policy studies and a senior fellow at the Free State Foundation.]

During Chairman Ajit Pai’s tenure, the Federal Communications Commission adopted key reforms that improved the agency’s processes. No less important than process reform is process integrity. The commission’s L-Band Order and the process that produced it will be the focus here. In that proceeding, Chairman Pai led a careful and deliberative process that resulted in a clearly reasoned and substantively supportable decision to put unused valuable L-Band spectrum into commercial use for wireless services.

Thanks to one of Chairman Pai’s most successful process reforms, the FCC now publicly posts draft items to be voted on three weeks in advance of the commission’s public meetings. During his chairmanship, the commission adopted reforms to help expedite the regulatory-adjudication process by specifying deadlines and facilitating written administrative law judge (ALJ) decisions rather than in-person hearings. The “Team Telecom” process also was reformed to promote faster agency determinations on matters involving foreign ownership.

Along with his process-reform achievements, Chairman Pai deserves credit for ensuring that the FCC’s proceedings were conducted in a lawful and sound manner. For example, the commission’s courtroom track record was notably better during Chairman Pai’s tenure than during the tenures of his immediate predecessors. Moreover, Chairman Pai deserves high marks for the agency process that preceded the L-Band Order – a process that was perhaps subject to more scrutiny than the process of any other proceeding during his chairmanship. The public record supports the integrity of that process, as well as the order’s merits.

In April 2020, the FCC unanimously approved an order authorizing Ligado Networks to deploy a next-generation mixed mobile-satellite network using licensed spectrum in the L-Band. This action is critical to alleviating the shortage of commercial spectrum in the United States and to ensuring our nation’s economic competitiveness. Ligado’s proposed network will provide industrial Internet-of-Things (IoT) services, and its L-Band spectrum has been identified as capable of pairing with C-Band and other mid-band spectrum for delivering future 5G services. According to the L-Band Order, Ligado plans to invest up to $800 million in network capabilities, which could create over 8,000 jobs. Economist Coleman Bazelon estimated that Ligado’s network could help create up to 3 million jobs and contribute up to $500 billion to the U.S. economy.

Opponents of the L-Band Order have claimed that Ligado’s proposed network would create signal interference with GPS services in adjacent spectrum. Moreover, in attempts to delay or undo implementation of the L-Band Order, several opponents lodged harsh but baseless attacks against the FCC’s process. Some of those process criticisms were made at a May 2020 Senate Armed Services Committee hearing that did not include any Ligado representatives or FCC commissioners to offer their viewpoints. And in a May 2020 floor speech, Sen. James Inhofe (R-Okla.) repeatedly criticized the commission’s process as sudden, hurried, and taking place “in the darkness of a weekend.”

But those process criticisms fail in the face of easily verifiable facts. Under Chairman Pai’s leadership, the FCC acted within its conceded authority, consistent with its lawful procedures, and with careful—even lengthy—deliberation.

The FCC’s proceeding concerning Ligado’s license applications dates back to 2011. It included public notice and comment periods in 2016 and 2018. An August 2019 National Telecommunications and Information Administration (NTIA) report noted the commission’s forthcoming decision. In the fall of 2019, the commission shared a draft of its order with NTIA. Publicly stated opposition to Ligado’s proposed network by GPS operators and Defense Secretary Mark Esper, as well as publicly stated support for the network by Attorney General William Barr and Secretary of State Mike Pompeo, ensured that the proceeding received ongoing attention. Claims of “surprise” when the commission finalized its order in April 2020 are impossible to credit.

Importantly, the result of the deliberative agency process helmed by Chairman Pai was a substantively supportable decision. The FCC applied its experience in adjudicating competing technical claims to make commercial spectrum policy decisions. It was persuaded in part by signal testing conducted by the National Advanced Spectrum and Communications Test Network, as well as testing by technology consultants Roberson and Associates. By contrast, the commission found unpersuasive reports of alleged signal interference involving military devices operating outside of their assigned spectrum band.

The FCC also applied its expertise in addressing potential harmful signal interference to incumbent operations in adjacent spectrum bands by imposing several conditions on Ligado’s operations. For example, the L-Band Order requires Ligado to adhere to its agreements with major GPS equipment manufacturers for resolving signal interference concerns. Ligado must dedicate 23 megahertz of its own licensed spectrum as a guard-band from neighboring spectrum and also reduce its base station power levels by 99% compared to what Ligado proposed in 2015. The commission requires Ligado to expeditiously replace or repair any U.S. government GPS devices that experience harmful interference from its network. And Ligado must maintain “stop buzzer” capability to halt its network within 15 minutes of any request by the commission.

From a process standpoint, the L-Band Order is a commendable example of Chairman Pai’s perseverance in leading the FCC to a much-needed decision on an economically momentous matter in the face of conflicting government agency and market provider viewpoints. Following a careful and deliberative process, the commission persevered to make a decision that is amply supported by the record and poised to benefit America’s economic welfare.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, would enable smoother operation of the liability regimes that currently apply to online intermediaries.

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase their cost of doing business, leading to significantly smaller networks and significantly increased barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

Rolled by Rewheel, Redux

Eric Fruits —  15 December 2020

The Finnish consultancy Rewheel periodically issues reports using mobile wireless pricing information to make claims about which countries’ markets are competitive and which are not. For example, Rewheel claims Canada and Greece have the “least competitive monthly prices” while the United Kingdom and Finland have the most competitive.

Rewheel often claims that the number of carriers operating in a country is the key determinant of wireless pricing. 

Their pricing studies attract a great deal of attention. For example, in February 2019 testimony before the U.S. House Energy and Commerce Committee, Phillip Berenbroick of Public Knowledge asserted: “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.” So, what’s wrong with Rewheel? An earlier post highlights some of the flaws in Rewheel’s methodology. But there’s more.

Rewheel creates fictional market baskets of mobile plans for each provider in a country. Country-by-country comparisons are made by evaluating the lowest-priced basket for each country and the basket with the median price.

Rewheel’s market baskets are hypothetical packages that say nothing about which plans consumers actually choose or the prices they actually pay. This is not a new criticism. In 2014, Pauline Affeldt and Rainer Nitsche called these measures “meaningless”:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr … Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

For example, reporting that the average price of a certain T-Mobile USA smartphone, tablet and home Internet plan is $125 is about as useless as knowing that the average price of a Kroger shopping cart containing a six-pack of Budweiser, a dozen eggs, and a pound of oranges is $10. Is Safeway less “competitive” if the price of the same cart of goods is $12? What could you say about pricing at a store that doesn’t sell Budweiser (e.g., Trader Joe’s)?

Rewheel solves that last problem by doing something bonkers. If a carrier doesn’t offer a plan in one of Rewheel’s baskets, Rewheel “assigns” that carrier the highest monthly price found anywhere in the world.

For example, Rewheel notes that Vodafone India does not offer a fixed wireless broadband plan with at least 1,000GB of data and download speeds of 100 Mbps or faster. So, Rewheel “assigns” Vodafone India the highest price in its dataset. That price belongs to a plan that’s sold in the United Kingdom. It simply makes no sense. 

To return to the supermarket analogy, it would be akin to saying that, if a Trader Joe’s in the United States doesn’t sell six-packs of Budweiser, we should assume the price of Budweiser at Trader Joe’s is equal to the world’s most expensive six-pack of the beer. In reality, Trader Joe’s is known for having relatively low prices. But using the Rewheel approach, the store would be assessed to have some of the highest prices.

Because of Rewheel’s “assignment” of the highest monthly prices to many plans, it’s irrelevant whether their analysis is based on a country’s median price or lowest price. The median is skewed upward, and the lowest actual price may be missing from the dataset.
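A toy calculation (with invented prices, not Rewheel’s actual data) illustrates how this kind of imputation drags a country’s “median” price upward:

```python
import statistics

# Hypothetical country with five carriers; only two actually offer a plan
# matching the basket. Prices are invented for illustration.
actual_prices = [20, 25]  # monthly prices of the plans that really exist

# Rewheel-style imputation: the three carriers without a matching plan are
# "assigned" the most expensive comparable plan anywhere in the world.
worlds_highest = 150
imputed_prices = actual_prices + [worlds_highest] * 3

print(statistics.median(actual_prices))   # 22.5 -- median of real offerings
print(statistics.median(imputed_prices))  # 150  -- median after imputation
```

With three of five values imputed, the reported “median” becomes the world’s highest price, even though no consumer in the country pays anything close to it.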

Rewheel publishes these reports to support its argument that mobile prices are lower in markets with four carriers than in those with three. But even if we accept Rewheel’s price data as reliable (which we shouldn’t), its own data show no relationship between the number of carriers and average price.

Notice the huge overlap of observations among markets with three and four carriers. 

Rewheel’s latest report provides a redacted dataset, reporting only data usage and weighted average price for each provider. So, we have to work with what we have. 

A simple regression analysis shows no statistically significant difference in either the intercepts or the slopes for markets with three, four, or five carriers (three carriers is the baseline category in the regression). Based on the data Rewheel provides to the public, the number of carriers in a country has no relationship to wireless prices.
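For readers who want to see the mechanics, a minimal sketch of such a regression is below. It uses synthetic, noise-free data invented for illustration (not Rewheel’s dataset): price depends only on data usage, and the regression includes dummy variables and interaction terms for four- and five-carrier markets, so the coefficients on those terms come back at zero.

```python
import numpy as np

# Synthetic illustration: 60 operators, 20 each in three-, four-, and
# five-carrier markets. By construction, price depends only on data usage.
n = 60
usage = np.linspace(1, 50, n)        # GB/month included in the plan
carriers = np.repeat([3, 4, 5], 20)  # number of carriers in the market
price = 5.0 + 0.4 * usage            # carrier count has no effect

# Dummies for four- and five-carrier markets (three carriers is the
# baseline), plus interactions so the slope on usage can differ by market.
d4 = (carriers == 4).astype(float)
d5 = (carriers == 5).astype(float)
X = np.column_stack([np.ones(n), usage, d4, d5, d4 * usage, d5 * usage])

# Ordinary least squares via numpy.
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
# beta recovers [5.0, 0.4, 0.0, 0.0, 0.0, 0.0]: no intercept or slope
# shift for four- or five-carrier markets.
```

In a real analysis one would of course use noisy data and test whether the dummy and interaction coefficients are statistically distinguishable from zero; the point here is only to show how “no difference in the intercept or the slopes” is operationalized.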

Rewheel seems to have a rich dataset of pricing information that could be useful to inform policy. It’s a shame that their topline summaries seem designed to support a predetermined conclusion.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Doug Melamed (Professor of the Practice of Law, Stanford Law School).
]

The big digital platforms make people uneasy.  Part of the unease is no doubt attributable to widespread populist concerns about large and powerful business entities.  Platforms like Facebook and Google in particular cause unease because they affect sensitive issues of communications, community, and politics.  But the platforms also make people uneasy because they seem boundless – enduring monopolies protected by ever-increasing scale and network economies, and growing monopolies aided by scope economies that enable them to conquer complementary markets.  They provoke a discussion about whether antitrust law is sufficient for the challenge.

Nicolas Petit’s Big Tech and the Digital Economy: The Moligopoly Scenario provides an insightful and valuable antidote to this unease.  While neither Panglossian nor comprehensive, Petit’s analysis persuasively argues that some of the concerns about the platforms are misguided or at least overstated.  As Petit sees it, the platforms are not so much monopolies in discrete markets – search, social networking, online commerce, and so on – as “multibusiness firms with business units in partly overlapping markets” that are engaged in a “dynamic oligopoly game” that might be “the socially optimal industry structure.”  Petit suggests that we should “abandon or at least radically alter traditional antitrust principles,” which are aimed at preserving “rivalry,” and “adapt to the specific non-rival economics of digital markets.”  In other words, the law should not try to diminish the platforms’ unique dominance in their individual sectors, which have already tipped to a winner-take-all (or most) state and in which protecting rivalry is not “socially beneficial.”  Instead, the law should encourage reductions of output in tipped markets in which the dominant firm “extracts a monopoly rent” in order to encourage rivalry in untipped markets. 

Petit’s analysis rests on the distinction between “tipped markets,” in which “tech firms with observed monopoly positions can take full advantage of their market power,” and “untipped markets,” which are “characterized by entry, instability and uncertainty.”  Notably, however, he does not expect “dispositive findings” as to whether a market is tipped or untipped.  The idea is to define markets, not just by “structural” factors like rival goods and services, market shares and entry barriers, but also by considering “uncertainty” and “pressure for change.”

Not surprisingly, given Petit’s training and work as a European scholar, his discussion of “antitrust in moligopoly markets” includes prescriptions that seem to one schooled in U.S. antitrust law to be a form of regulation that goes beyond proscribing unlawful conduct.  Petit’s principal concern is with reducing monopoly rents available to digital platforms.  He rejects direct reduction of rents by price regulation as antithetical to antitrust’s DNA and proposes instead indirect reduction of rents by permitting users on the inelastic side of a platform (the side from which the platform gains most of its revenues) to collaborate in order to gain countervailing market power and by restricting the platforms’ use of vertical restraints to limit user bypass. 

He would create a presumption against all horizontal mergers by dominant platforms in order to “prevent marginal increases of the output share on which the firms take a monopoly rent” and would avoid the risk of defining markets narrowly and thus failing to recognize that platforms are conglomerates that provide actual or potential competition in multiple partially overlapping commercial segments. By contrast, Petit would restrict the platforms’ entry into untipped markets only in “exceptional circumstances.”  For this, Petit suggests four inquiries: whether leveraging of network effects is involved; whether platform entry deters or forecloses entry by others; whether entry by others pressures the monopoly rents; and whether entry into the untipped market is intended to deter entry by others or is a long-term commitment.

One might question the proposition, which is central to much of Petit’s argument, that reducing monopoly rents in tipped markets will increase the platforms’ incentives to enter untipped markets.  Entry into untipped markets is likely to depend more on expected returns in the untipped market, the cost of capital, and constraints on managerial bandwidth than on expected returns in the tipped market.  But the more important issue, at least from the perspective of competition law, is whether – even assuming the correctness of all aspects of Petit’s economic analysis — the kind of categorical regulatory intervention proposed by Petit is superior to a law enforcement regime that proscribes only anticompetitive conduct that increases or threatens to increase market power.  Under U.S. law, anticompetitive conduct is conduct that tends to diminish the competitive efficacy of rivals and does not sufficiently enhance economic welfare by reducing costs, increasing product quality, or reducing above-cost prices.

If there were no concerns about the ability of legal institutions to know and understand the facts, a law enforcement regime would seem clearly superior.  Consider, for example, Petit’s recommendation that entry by a platform monopoly into untipped markets should be restricted only when network effects are involved and after taking into account whether the entry tends to protect the tipped market monopoly and whether it reflects a long-term commitment.  Petit’s proposed inquiries might make good sense as a way of understanding as a general matter whether market extension by a dominant platform is likely to be problematic.  But it is hard to see how economic welfare is promoted by permitting a platform to enter an adjacent market (e.g., Amazon entering a complementary product market) by predatory pricing or by otherwise unprofitable self-preferencing, even if the entry is intended to be permanent and does not protect the platform monopoly. 

Similarly, consider the proposed presumption against horizontal mergers.  That might not be a good idea if there is a small (10%) chance that the acquired firm would otherwise endure and modestly reduce the platform’s monopoly rents and an equal or even smaller chance that the acquisition will enable the platform, by taking advantage of economies of scope and asset complementarities, to build from the acquired firm an improved business that is much more valuable to consumers.  In that case, the expected value of the merger in welfare terms might be very positive.  Similarly, Petit would permit acquisitions by a platform of firms outside the tipped market as long as the platform has the ability and incentive to grow the target.  But the growth path of the target is not set in stone.  The platform might use it as a constrained complement, while an unaffiliated owner might build it into something both more valuable to consumers and threatening to the platform.  Maybe one of these stories describes Facebook’s acquisition of Instagram.

The prototypical anticompetitive horizontal merger story is one in which actual or potential competitors agree to share the monopoly rents that would be dissipated by competition between them. That story is confounded by communications that seem like threats, which imply a story of exclusion rather than collusion.  Petit refers to one such story.  But the threat story can be misleading.  Suppose, for example, that Platform sees Startup introduce a new business concept and studies whether it could profitably emulate Startup.  Suppose further that Platform concludes that, because of scale and scope economies available to it, it could develop such a business and come to dominate the market for a cost of $100 million acting alone or $25 million if it can acquire Startup and take advantage of its existing expertise, intellectual property, and personnel.  In that case, Platform might explain to Startup the reality that Platform is going to take the new market either way and propose to buy Startup for $50 million (thus offering Startup two-thirds of the gains from trade).  Startup might refuse, perhaps out of vanity or greed, in which case Platform as promised might enter aggressively and, without engaging in predatory or other anticompetitive conduct, drive Startup from the market.  To an omniscient law enforcement regime, there should be no antitrust violation from either an acquisition or the aggressive competition.  Either way, the more efficient provider prevails so the optimum outcome is realized in the new market.  The merger would have been more efficient because it would have avoided wasteful duplication of startup costs, and the merger proposal (later characterized as a threat) was thus a benign, even procompetitive, invitation to collude.  It would be a different story of course if Platform could overcome Startup’s first mover advantage only by engaging in anticompetitive conduct.

The problem is that antitrust decision makers often cannot understand all the facts.  Take the threat story, for example.  If Startup acquiesces and accepts the $50 million offer, the decision maker will have to determine whether Platform could have driven Startup from the market without engaging in predatory or anticompetitive conduct and, if not, whether absent the merger the parties would have competed against one another.  In other situations, decision makers are asked to determine whether the conduct at issue would be more likely than the but-for world to promote innovation or other, similarly elusive matters.

U.S. antitrust law accommodates its unavoidable uncertainty by various default rules and practices.  Some, like per se rules and the controversial Philadelphia National Bank presumption, might on occasion prohibit conduct that would actually have been benign or even procompetitive.  Most, however, insulate from antitrust liability conduct that might actually be anticompetitive.  These include rules applicable to predatory pricing, refusals to deal, two-sided markets, and various matters involving patents.  Perhaps more important are proof requirements in general.  U.S. antitrust law is based on the largely unexamined notion that false positives are worse than false negatives and thus, for the most part, puts the burden of uncertainty on the plaintiff.

Petit is proposing, in effect, an alternative approach for the digital platforms.  This approach would not just proscribe anticompetitive conduct.  It would, instead, apply to specific firms special rules that are intended to promote a desired outcome, the reduction in monopoly rents in tipped digital markets.  So, one question suggested by Petit’s provocative study is whether the inevitable uncertainty surrounding issues of platform competition are best addressed by the kinds of categorical rules Petit proposes or by case-by-case application of abstract legal principles.  Put differently, assuming that economic welfare is the objective, what is the best way to minimize error costs?

Broadly speaking, there are two kinds of error costs: specification errors and application errors.  Specification errors reflect legal rules that do not map perfectly to the normative objectives of the law (e.g., a rule that would prohibit all horizontal mergers by dominant platforms when some such mergers are procompetitive or welfare-enhancing).  Application errors reflect mistaken application of the legal rule to the facts of the case (e.g., an erroneous determination whether the conduct excludes rivals or provides efficiency benefits).   

Application errors are the most likely source of error costs in U.S. antitrust law.  The law relies largely on abstract principles that track the normative objectives of the law (e.g., conduct by a monopoly that excludes rivals and has no efficiency benefit is illegal). Several recent U.S. antitrust decisions (American Express, Qualcomm, and Farelogix among them) suggest that error costs in a law enforcement regime like that in the U.S. might be substantial and even that case-by-case application of principles that require applying economic understanding to diverse factual circumstances might be beyond the competence of generalist judges.  Default rules applicable in special circumstances reduce application errors but at the expense of specification errors.

Specification errors are more likely with categorical rules, like those suggested by Petit.  The total costs of those specification errors are likely to exceed the costs of mistaken decisions in individual cases because categorical rules guide firm conduct in general, not just in decided cases, and rules that embody specification errors are thus likely to encourage undesirable conduct and to discourage desirable conduct in matters that are not the subject of enforcement proceedings.  Application errors, unless systematic and predictable, are less likely to impose substantial costs beyond the costs of mistaken decisions in the decided cases themselves.  Whether any particular categorical rules are likely to have error costs greater than the error costs of the existing U.S. antitrust law will depend in large part on the specification errors of the rules and on whether their application is likely to be accompanied by substantial application costs.

As discussed above, the particular rules suggested by Petit appear to embody important specification errors.  They are likely also to lead to substantial application errors because they would require determination of difficult factual issues.  These include, for example, whether the market at issue has tipped, whether the merger is horizontal, and whether the platform’s entry into an untipped market is intended to be permanent.  It thus seems unlikely, at least from this casual review, that adoption of the rules suggested by Petit will reduce error costs.

Petit’s impressive study might therefore be most valuable, not as a roadmap for action, but as a source of insight and understanding of the facts – what Petit calls a “mental model to help decision makers understand the idiosyncrasies of digital markets.”  If viewed, not as a prescription for action, but as a description of the digital world, the Moligopoly Scenario can help address the urgent matter of reducing the costs of application errors in U.S. antitrust law.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.]

To mark the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario”, Truth on the Market and the International Center for Law & Economics (ICLE) are hosting some of the world’s leading scholars and practitioners of competition law and economics to discuss some of the book’s themes.

In his book, Petit offers a “moligopoly” framework for understanding competition between large tech companies that may have significant market shares in their ‘home’ markets but nevertheless compete intensely in adjacent ones. Petit argues that tech giants coexist as both monopolies and oligopolies in markets defined by uncertainty and dynamism, and offers policy tools for dealing with the concerns people have about these markets that avoid crude “big is bad” assumptions and do not try to solve non-economic harms with the tools of antitrust.

This symposium asks contributors to give their thoughts either on the book as a whole or on a selected chapter that relates to their own work. In it we hope to explore some of Petit’s arguments with different perspectives from our contributors.

Confirmed Participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Kelly Fayne, Antitrust Associate, Latham & Watkins
  • Shane Greenstein, Professor of Business Administration; Co-chair of the HBS Digital Initiative, Harvard Business School
  • Peter Klein, Professor of Entrepreneurship and Chair, Department of Entrepreneurship and Corporate Innovation, Baylor University
  • William Kovacic, Global Competition Professor of Law and Policy; Director, Competition Law Center, George Washington University Law
  • Kai-Uwe Kuhn, Academic Advisor, University of East Anglia
  • Richard Langlois, Professor of Economics, University of Connecticut
  • Doug Melamed, Professor of the Practice of Law, Stanford Law School
  • David Teece, Professor in Global Business, University of California’s Haas School of Business (Berkeley); Director, Center for Global Strategy and Governance; Faculty Director, Institute for Business Innovation

Thank you again to all of the excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting later today, October 12, 2020.

Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report's discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called "killer acquisitions" — acquisitions where incumbent firms acquire innovative startups to kill the startups' rival products and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry.

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the "killer acquisitions" in pharma that the Cunningham, et al. paper describes. So-called "acqui-hires," where a company is acquired in order to hire its workforce en masse, are common in tech and explicitly ruled out of being "killers" by this paper, for example: it is not harmful to innovation or output overall if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of that platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too much. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” in pharma with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneous condemnation of procompetitive mergers is significantly higher. 
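The base-rate problem behind this trade-off can be sketched with a few lines of arithmetic. The 5.3% prevalence comes from Cunningham, et al.; the screening accuracy figures below are purely hypothetical assumptions for illustration. When "killers" are rare, even a fairly accurate review process will mostly condemn procompetitive mergers:

```python
def killer_ppv(prevalence, sensitivity, specificity):
    """Share of flagged (blocked) mergers that are truly 'killers'.

    prevalence:  base rate of killer acquisitions in the population
    sensitivity: share of true killers the screen catches
    specificity: share of benign mergers the screen correctly clears
    """
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# With the 5.3% base rate from Cunningham, et al. and a hypothetical
# screen that catches 80% of killers while wrongly flagging 10% of
# benign mergers, fewer than a third of blocked deals are killers.
ppv = killer_ppv(prevalence=0.053, sensitivity=0.80, specificity=0.90)
```

At these illustrative numbers, roughly two out of every three blocked mergers would in fact be procompetitive, which is why pushing the "killer" rate toward zero tends to raise the number of erroneous condemnations.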

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of Youtube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/Whatsapp, Google/Waze, Microsoft.LinkedIn, or Amazon/Whole Foods — “killers,” either. 

In all these high-profile cases the acquiring companies expanded the service and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument to the scenarios described in the Cunningham, et al. paper, where development of a new drug is shut down by the acquirer ostensibly to protect their existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. The (presumed) existence of false negatives does not imply that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. A well-run court system might still fail to convict a few criminals because the cost of accidentally convicting an innocent person is so high.
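A minimal sketch of this error-cost logic, using entirely hypothetical probabilities and costs (none drawn from the Report or any study), shows why a record of zero false positives does not by itself demonstrate underenforcement:

```python
def expected_error_cost(p_false_positive, cost_false_positive,
                        p_false_negative, cost_false_negative):
    """Expected welfare loss from the two kinds of enforcement error."""
    return (p_false_positive * cost_false_positive
            + p_false_negative * cost_false_negative)

# Hypothetical regimes: if a wrongly blocked merger is ten times as
# costly as a wrongly cleared one, tolerating a few false negatives
# beats eliminating them at the price of creating false positives.
lenient = expected_error_cost(0.00, 10.0, 0.05, 1.0)  # clears everything
strict = expected_error_cost(0.10, 10.0, 0.00, 1.0)   # blocks aggressively
```

Under these assumed numbers the lenient regime, despite producing only false negatives, has the lower expected error cost; whether that holds in reality depends on the true relative costs, which the Report never estimates.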

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it did suggest that the review process could have been done differently, it also highlighted efficiencies that arose from each, and did not conclude that any had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control. 

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well.

On ‘scale’ grounds, this could provide a basis for blocking almost any acquisition by an incumbent firm. After all, if a photo editing app with a sharing timeline can grow into the world’s second largest social network, how could a competition authority say with any confidence that some other acquisition might not prevent the emergence of a new platform on a similar scale, however unlikely? It would also make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started up in the first place).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or less. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm (and in many cases that is the most one can say of them) and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramaz Samrout (Principal, REIM Strategies; Lay Member, Competition Tribunal of Canada)]

At a time when nations are engaged in bidding wars in the worldwide market to alleviate the shortages of critical medical necessities for the Covid-19 crisis, it certainly raises the question: have free trade and competition policies, resulting in efficient, globally integrated market networks, gone too far? Did economists and policy makers advocating for efficient competitive markets not foresee a failure of the supply chain in meeting a surge in demand during an inevitable global crisis such as this one?

The failures in securing medical supplies have escalated a global health crisis to geopolitical spats fuelled by strong nationalistic public sentiments. In the process of competing to acquire highly treasured medical equipment, governments are confiscating, outbidding, and diverting shipments at the risk of not adhering to the terms of established free trade agreements and international trading rules, all at the cost of the humanitarian needs of other nations.

Since the start of the Covid-19 crisis, all levels of government in Canada have been working on diversifying the supply chain for critical equipment both domestically and internationally. But, most importantly, these governments are bolstering domestic production and an integrated domestic supply network recognizing the increasing likelihood of tightening borders impacting the movement of critical products.

For the past 3 weeks in his daily briefings, Canada’s Prime Minister, Justin Trudeau, has repeatedly confirmed the Government’s support of domestic enterprises that are switching their manufacturing lines to produce critical medical supplies and of other “made in Canada” products.

As conditions worsen in the US and the White House hardens its position towards collaboration and sharing for the greater global humanitarian good (even in the presence of a recent bilateral agreement to keep the movement of essential goods fluid), Canada’s response has become more retaliatory, shifting to a message emphasizing that the need for “made in Canada” products is one of extreme urgency.

On April 3rd, President Trump ordered Minnesota-based 3M to stop exporting medical-grade masks to Canada and Latin America, a decision enabled by the triggering of the Defense Production Act of 1950. In response, Ontario Premier Doug Ford stated in his public address:

Never again in the history of Canada should we ever be beholden to companies around the world for the safety and wellbeing of the people of Canada. There is nothing we can’t build right here in Ontario. As we get these companies round up and we get through this, we can’t be going over to other sources because we’re going to save a nickel.

Premier Ford’s words ring true for many Canadians as they watch this crisis unfold and wonder where it would stop if the crisis worsens. Will our neighbour to the south block shipments of a Covid-19 vaccine when one is developed? Will it extend to other essential goods such as food or medicine?

There are reports that the decline in the number of foreign workers in farming caused by travel restrictions and quarantine rules in both Canada and the US will cause food production shortages, which makes the actions of the White House very unsettling for Canadians.  Canada’s exports to the US constitute 75% of total Canadian exports, while imports from the US constitute 46%. Canada’s imports of food and beverages from the US were valued at US $24 billion in 2018 including: prepared foods, fresh vegetables, fresh fruits, other snack foods, and non-alcoholic beverages.

The length and depth of the crisis will determine to what extent the US and Canadian markets will experience shortages in products. For Canada, the severity of the pandemic in the US could result in further restrictions on the border. And it is becoming progressively more likely that it will also result in a significant reduction in the volume of necessities crossing the border between the two nations.

Increasingly, the depth and pain experienced from shortages in necessities will shape public sentiment towards free trade and strengthen mainstream demands of more nationalistic and protectionist policies. This will result in more pressure on political and government establishments to take action.

The reliance on free trade and competition policies favouring highly integrated supply chain networks is showing cracks in meeting national interests in this time of crisis. This goes well beyond the usual economic factors of contention between countries of domestic employment, job loss and resource allocation. The need for correction, however, risks moving the pendulum too far to the side of protectionism.

Free trade setbacks and global integration disruptions would become the new economic reality to ensure that domestic self-sufficiency comes first. A new trade trend has been set in motion, and there is no going back from some level of disintegration of globalised supply chain production.

How would domestic self-sufficiency be achieved? 

Would international conglomerates build local plants and forgo their profit maximizing strategies of producing in growing economies that offer cheap wages and resources in order to avoid increased protectionism?

Will the Canada-United States-Mexico Agreement (CUSMA), known as the “new NAFTA,” which has yet to enter into force, be renegotiated to allow for production measures securing domestic necessities in the form of higher tariffs, trade quotas, and state subsidies?

Are advanced capitalist economies willing to create state-owned industries to produce domestically what they deem necessities?

Many other trade policy variations and options focused on protectionism are possible which could lead to the creation of domestic monopolies. Furthermore, any return to protected national production networks will reduce consumer welfare and eventually impede technological advancements that result from competition. 

Divergence between free trade agreements and competition policy in a new era of protectionism

For the past 30 years, national competition laws and policies have increasingly become an integrated part of free trade agreements, albeit in the form of soft competition law language, making references to the parties’ respective competition laws, and the need for transparency, procedural fairness in enforcement, and cooperation.

Similarly, free trade objectives and frameworks have become part of the design and implementation of competition legislation and, subsequently, case law. Both are intended to encourage competitive market systems and efficiency, an implied by-product of open markets.

In that regard, the competition legal framework in Canada, the Competition Act, seeks to maintain and strengthen competitive market forces by encouraging maximum efficiency in the use of economic resources. Provisions to determine the level of competitiveness in the market consider barriers to entry, among them, tariff and non-tariff barriers to international trade. These provisions further direct adjudicators to examine free trade agreements currently in force and their role in facilitating the current or future possibility of an international competitor entering the market to preserve or increase competition. They also direct an assessment of the extent of any increase in the real value of exports, or the substitution of domestic products for imported products.

It is evident in the design of free trade agreements and competition legislation that efficiency, competition in price, and diversification of products are to be achieved by access to imported goods and by encouraging the creation of globally competitive suppliers.

Therefore, the re-emergence of protectionist nationalistic measures in international trade will result in a divergence between competition laws and free trade agreements. Such setbacks would leave competition enforcers, administrators, and adjudicators grappling with the conflict between the economic principles set out in competition law and the policy objectives that could be stipulated in future trade agreements. 

The challenge ahead facing governments and industries is how to correct for the cracks in the current globalized competitive supply networks that have been revealed during this crisis without falling into a trap of nationalism and protectionism.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramsi Woodcock (Assistant Professor of Law, University of Kentucky; Assistant Professor of Management, Gatton College of Business and Economics).]

Specialists know that the antitrust courses taught in law schools and economics departments have an alter ego in business curricula: the course on business strategy. The two courses cover the same material, but from opposite perspectives. Antitrust courses teach how to end monopolies; strategy courses teach how to construct and maintain them.

Strategy students go off and run businesses, and antitrust students go off and make government policy. That is probably the proper arrangement if the policy the antimonopolists make is domestic. We want the domestic economy to run efficiently, and so we want domestic policymakers to think about monopoly—and its allocative inefficiencies—as something to be discouraged.

The coronavirus, and the shortages it has caused, have shown us that putting the antimonopolists in charge of international policy is, by contrast, a very big mistake.

Because we do not yet have a world government, America’s position in relation to the rest of the world is more akin to that of a business navigating a free market than to that of a government seeking to promote efficient interactions among the firms it governs. To flourish, America must engage in international trade with a view to creating and maintaining monopoly positions for itself, rather than eschewing them in the interest of realizing efficiencies in the global economy. Which is to say: we need strategists, not antimonopolists.

For the global economy is not America, and there is no guarantee that competitive efficiencies will redound to America’s benefit rather than to that of her competitors. Absent a world government, other countries will pursue monopoly regardless of what America does, and unless America acts strategically to build and maintain economic power, America will eventually occupy a position of commercial weakness, with all of the consequences for national security that implies.

When Antimonopolists Make Trade Policy

The free traders who have run American economic policy for more than a generation are antimonopolists playing on a bigger stage. Like their counterparts in domestic policy, they are loyal in the first instance only to the efficiency of the market, not to any particular trader. They are content to establish rules of competitive trading—the antitrust laws in the domestic context, the World Trade Organization in the international context—and then to let the chips fall where they may, even if that means allowing present or future adversaries to, through legitimate means, build up competitive advantages that the United States is unable to overcome.

Strategy is consistent with competition when markets are filled with traders of atomic size, for then no amount of strategy can deliver a competitive advantage to any trader. But global markets, more even than domestic markets, are filled with traders of macroscopic size. Strategy then requires that each trader seek to gain and maintain advantages, undermining competition. The only way antimonopolists could induce the trading behemoth that is America to behave competitively, and to let the chips fall where they may, was to convince America voluntarily to give up strategy, to sacrifice self-interest on the altar of efficient markets.

And so they did.

Thus when the question arose whether to permit American corporations to move their manufacturing operations overseas, or to permit foreign companies to leverage their efficiencies to dominate a domestic industry and ensure that 90% of domestic supply would be imported from overseas, the answer the antimonopolists gave was: “yes.” Because it is efficient. Labor abroad is cheaper than labor at home, and transportation costs low, so efficiency requires that production move overseas, and our own resources be reallocated to more competitive uses.

This is the impeccable logic of static efficiency, of general equilibrium models allocating resources optimally. But it is instructive to recall that the men who perfected this model were not trying to describe a free market, much less international trade. They were trying to create a model that a central planner could use to allocate resources to a state’s subjects. What mattered to them in building the model was the good of the whole, not any particular part. And yet it is to a particular part of the global whole that the United States government is dedicated.

The Strategic Trader

Students of strategy would have taken a very different approach to international trade. Strategy teaches that markets are dynamic, and that businesses must make decisions based not only on the market signals that exist today, but on those that can be made to exist in the future. For the successful strategist, unlike the antimonopolist, identifying a product for which consumers are willing to pay the costs of production is not alone enough to justify bringing the product to market. The strategist must be able to secure a source of supply, or a distribution channel, that competitors cannot easily duplicate, before the strategist will enter.

Why? Because without an advantage in supply, or distribution, competitors will duplicate the product, compete away any markups, and leave the strategist no better off than if he had never undertaken the project at all. Indeed, he may be left bankrupt, if he has sunk costs that competition prevents him from recovering. Unlike the economist, the strategist is interested in survival, because he is a partisan of a part of the market—himself—not the market entire. The strategist understands that survival requires power, and all power rests, to a greater or lesser degree, on monopoly.

The strategist is not therefore a free trader in the international arena, at least not as a matter of principle. The strategist understands that trading from a position of strength can enrich, and trading from a position of weakness can impoverish. And to occupy that position of strength, America must, like any monopolist, control supply. Moreover, in the constantly-innovating markets that characterize industrial economies, markets in which innovation emerges from learning by doing, control over physical supply translates into control over the supply of inventions itself.

The strategist does not permit domestic corporations to offshore manufacturing in any market in which the strategist wishes to participate, because that is unsafe: foreign countries could use control over that supply to extract rents from America, to drive domestic firms to bankruptcy, and to gain control over the supply of inventions.

And, as the new trade theorists belatedly discovered, offshoring prevents the development of the dense, geographically-contiguous, supply networks that confer power over whole product categories, such as the electronics hub in Zhengzhou, where iPhone-maker Foxconn is located.

Or the pharmaceutical hub in Hubei.

Coronavirus and the Failure of Free Trade

Today, America is unprepared for the coming wave of coronavirus cases because the antimonopolists running our trade policy do not understand the importance of controlling supply. There is a shortage of masks, because China makes half of the world’s masks, and the Chinese have cut off supply, the state having forbidden even non-Chinese companies that offshored mask production from shipping home masks for which American customers have paid. Not only that, but in January China bought up most of the world’s existing supply of masks, with free-trade-obsessed governments standing idly by as the clock ticked down to their own domestic outbreaks.  

New York State, which lies at the epicenter of the crisis, has agreed to pay five times the market price for foreign supply. That’s not because the cost of making masks has risen, but because sellers are rationing with price. Which is to say: using their control over supply to beggar the state. Moreover, domestic mask makers report that they cannot ramp up production because of a lack of supply of raw materials, some of which are actually made in Wuhan, China. That’s the kind of problem that does not arise when restrictions on offshoring allow manufacturing hubs to develop domestically.

But a shortage of masks is just the beginning. Once a vaccine is developed, the race will be on to manufacture it, and America controls less than 30% of the manufacturing facilities that supply pharmaceuticals to American markets. Indeed, just about the only virus-relevant industries in which we do not have a real capacity shortage today are food and toilet paper, panic buying notwithstanding. That is because, fortunately for us, antimonopolists could not find a way to offshore California and Oregon. If they could have, they surely would have, since both agriculture and timber are labor-intensive industries.

President Trump’s failed attempt to buy a German drug company working on a coronavirus vaccine shows just how damaging free market ideology has been to national security: as Trump should have anticipated given his resistance to the antimonopolists’ approach to trade, the German government nipped the deal in the bud. When an economic agent has market power, the agent can pick its prices, or refuse to sell at all. Only in general equilibrium fantasy is everything for sale, and at a competitive price to boot.

The trouble is: American policymakers, perhaps more than those in any other part of the world, continue to act as though that fantasy were real.

Failures Left and Right

America’s coronavirus predicament is rich with intellectual irony.

Progressives resist free trade ideology, largely out of concern for the effects of trade on American workers. But they seem not to have realized that in doing so they are actually embracing strategy, at least for the benefit of labor.

Yet progressives simultaneously reject the approach to industrial organization economics that underpins strategic thinking in business: Joseph Schumpeter’s theory of creative destruction, which holds that strategic behavior by firms seeking to achieve and maintain monopolies is ultimately good for society, because it leads to a technological arms race as firms strive to improve supply, distribution, and indeed product quality, in ways that competitors cannot reproduce.

Even if progressives choose to reject Schumpeter’s argument that strategy makes society better off—a proposition that is particularly suspect at the international level, where the availability of tanks ensures that the creative destruction is not always creative—they have much to learn from his focus on the economics of survival.

By the same token, conservatives embrace Schumpeter in arguing for less antitrust enforcement in domestic markets, all the while advocating free trade at the international level and savaging governments for using dumping and tariffs—which is to say, the tools of monopoly—to strengthen their trading positions. It is deeply peculiar to watch the coronavirus expose conservative economists as pie-in-the-sky internationalists. And yet as the global market for coronavirus necessities seizes up, the ideology that urged us to dispense with producing these goods ourselves, out of faith that we might always somehow rely on the support of the rest of the world, provided through the medium of markets, looks pathetically naive.

The cynic might say that inconsistency has snuck up on both progressives and conservatives because each remains too sympathetic to a different domestic constituency.

Dodging a Bullet

America is lucky that a mere virus exposed the bankruptcy of free trade ideology. Because war could have done that instead. It is difficult to imagine how a country that cannot make medical masks — much less a MacBook — would be able to respond effectively to a sustained military attack from one of the many nations that are closing the technological gap long enjoyed by the United States.

The lesson of the coronavirus is: strategy, not antitrust.