Archives For Ireland

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war over private firms’ use of data and a major blow to the ad-driven business model that underlies most online services. 

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta still has other bases on which it can argue it may rely to make use of user data, but a larger issue is at play: the decision finds both that using personal data for personalized advertising is not “necessary” to the contract between a service and its users, and that privacy regulators are in a position to make such an assessment. 

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite view and used its powers under the GDPR to direct the DPC to issue a decision contrary to the DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular legal basis for processing, but only invalidates “contractual necessity” for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising while not depending on a “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical and economic limitations. What is technically possible to offer can also be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of a contractual relationship between service providers and their users, instead adopting an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to reach a conclusion as to whether those services are economically viable without the use of personalized advertising. 

However, there is a key institutional point to be made here. Privacy regulators are likely to be eminently unprepared to conduct this kind of analysis, which arguably should lead to significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

We can expect a decision very soon from the High Court of Ireland on last summer’s Irish Data Protection Commission (“IDPC”) decision that placed serious impediments in the way of transferring data across the Atlantic. That decision, coupled with the July 2020 Court of Justice of the European Union (“CJEU”) decision to invalidate the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.

In 2015, the CJEU’s Schrems decision invalidated the longstanding “safe harbor” agreement between the EU and U.S. that had been designed to ensure data transfers between the two zones complied with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In its decision, the court reasoned that U.S. foreign intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens’ data.

The IDPC’s decision employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens’ data, then standard contractual clauses—an alternative mechanism firms use to transfer data—are incapable of satisfying the requirements of EU law.

The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms practically incapable of doing business in the EU. In the slightly less bad case, firms could be forced to completely localize their data and disrupt the economies of scale that flow from being able to process global data in a unified manner. In any case, the costs of compliance will be massive.

But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.

It seems unlikely that the Irish High Court would simply overturn the IDPC’s ruling. Instead, the IDPC’s decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.

After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days, but practically speaking, it would more likely occur over weeks or months. Assuming we get a decision from the Irish High Court before the end of April, it puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.

That’s not great, but it is not an impossible hurdle to overcome, and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.

  1. U.S. data collection on EU citizens needs to be proportional to the necessities of intelligence gathering. Currently, U.S. intelligence agencies have wide latitude to collect vast amounts of data.
  2. The ombudsperson that the Privacy Shield agreement created to administer foreign citizens’ data requests was not sufficiently insulated from the political process, leaving EU citizens without an adequate means of redress.

As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.

Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.

Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority for policymakers.

(note: edited to clarify that the Irish High Court is not reviewing SCCs directly and that the CLOUD Act would not impose legal barriers for firms, but practical ones).

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas W. Hazlett is the H.H. Macaulay Endowed Professor of Economics at Clemson University.]

Disclosure: The one time I met Ajit Pai was when he presented a comment on my book, “The Political Spectrum,” at a Cato Institute forum in 2018. He was gracious, thorough, and complimentary. He said that while he had enjoyed the volume, he hoped not to appear in upcoming editions. I took that to imply that he read the book as harshly critical of the Federal Communications Commission. Well, when merited, I concede. But it left me to wonder if he had followed my story to its end, as I document the success of reforms launched in recent decades and advocate their extension. Inclusion in a future edition might work out well for a chairman’s legacy. Or…

While my comment here focuses on radio-spectrum allocation, there was a notable reform achieved during the Pai FCC that touches on the subject, even if far more general in scope. In January 2018, the commission voted to establish an Office of Economics and Analytics.[1] The organizational change was expeditiously instituted that same year, with the new unit stood up under the leadership of FCC economist Giulia McHenry.[2] I had long proposed an FCC “Office of Economic Analysis” on the grounds that it had a reasonable prospect of improving evidence-based policymaking, allowing cost-benefit calculations to be made in a more professional, independent, and less political context.[3] I welcome this initiative by the Pai FCC and look forward to the empirical test now underway.[4] 

Big Picture

Spectrum policy had notable triumphs under Chairman Pai but was—as President Carter dubbed the failed 1980 Iranian hostage-rescue mission—an “incomplete success.” The main cause for celebration was the campaign to push spectrum-access rights into the marketplace. Pai’s public position was straightforward: “Our spectrum strategy calls for making low-band, mid-band, and high-band airwaves available for flexible use,” he wrote in an FCC blog post on June 19, 2018. But the means regulators use to pursue a policy agenda have, historically, proven determinative. The Pai FCC traveled pathways both effective and ineffective, and we should learn from both. The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models. The traditional spectrum-allocation approach is to permit exactly what the FCC finds to be the best use of spectrum, but this assumes knowledge about the value of alternatives the regulator does not possess. Moreover, it assumes away the costs of regulators imposing their solutions over and above a competitive process that might have less direction but more freedom. In a 2017 notice, the FCC displayed the progress we have made in departing from administrative control, when it sought guidance from private-sector commenters this way:

“Are there opportunities to incentivize relocation or repacking of incumbent licensees to make spectrum available for flexible broadband use?

We seek comment on whether auctions … could be used to increase the availability of flexible use spectrum?”

By focusing on how rights—not markets—should be structured, the FCC may sidestep useless food fights and let social progress flow.[5]

Progress

Spectrum-allocation results were realized. Indeed, when one looks at the pattern in licensed and unlicensed allocations for “flexible use” under 10 GHz, the recent four-year interval coincides with generous increases, both absolutely and from trend. See Figure 1. These data feature expansions in bandwidth via liberal licenses that include 70 MHz for CBRS (3.5 GHz band), with rights assigned in Auction 105 (2020), and 280 MHz (3.7 – 3.98 GHz) assigned in Auction 107 (2020-21, soon to conclude). The 70 MHz added via Auction 1002 (600 MHz) in 2017 was accounted for during the previous FCC, but substantial bandwidth was added in the millimeter wave bands in Auctions 101, 102, and 103 (not shown in Figure 1, which focuses on low- and mid-band rights).[6] Meanwhile, multiple increments of unlicensed spectrum were allocated in 2020: 30 MHz shifted from the Intelligent Transportation Services set-aside (5.9 GHz), 80 MHz in CBRS, and 1,200 MHz (6 GHz) dedicated to Wi-Fi-type services.[7] Substantial millimeter wave frequency space was previously set aside for unlicensed operations in 2016.[8]

[Figure 1. Flexible-use spectrum allocations, licensed and unlicensed, below 10 GHz. Source: FCC and author’s calculations.]

But those allocation counts are not the elephant in the room. Auction 107 has assigned licenses allocated 280 MHz of flexible-use mid-band spectrum, producing at least $94 billion in gross bids (of which about $13 billion will be paid to incumbent satellite licensees to reconfigure their operations so as to occupy just 200 MHz, rather than 500 MHz, of the 3.7 – 4.2 GHz band).[9] This crushes previous FCC sales; indeed, as the rough arithmetic after the list below confirms, it constitutes about 42% of all auction receipts:

  • FCC auction receipts, 1994-2019: $117 billion[10]
  • FCC auction receipts, 2020 (Auctions 103 and 105): $12.1 billion
  • FCC auction winning bids, 2020 (Auction 107): $94 billion (gross bids including relocation costs, incentive payments, and before Assignment Phase payments)
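
A back-of-the-envelope check of that 42% share (treating Auction 107’s gross bids as comparable to the net receipts reported for the earlier periods, which overstates precision but conveys the magnitude):

$$\frac{94}{117 + 12.1 + 94} = \frac{94}{223.1} \approx 42\%$$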

The addition of the 280 MHz to existing flexible-use spectrum suitable for mobile (aka Commercial Mobile Radio Services, or CMRS) is the largest increment ever released. It will compose about one-fourth of the low- and mid-band frequencies available via liberal licenses. This constitutes a huge advance for 5G deployments, but its impact goes much further—promoting competition, spurring innovation in apps and devices, enabling the Internet of Things, and pushing the technological envelope toward 6G and beyond. Notably, the U.S. has led this foray to a new frontier in spectrum allocation.

The FCC deserves praise for pushing this proceeding to fruition. So, here it is. The C-Band is a very big deal and a major policy success. And more: in Auction 107, the commission very wisely sold overlay rights. It did not wait for administrative procedures to reconfigure wireless use, tightly supervising new “sharing” of the band, but (a) accepted the incumbents’ basic strategy for reallocation, (b) sold new prospective rights to high bidders, subject to protection of incumbents, (c) used a fraction of proceeds to fund incumbents cooperating with the reallocation, plussing-up payments for those hitting deadlines, and (d) implicitly relied on the new licensees to push the relocation process forward.

Challenges

It is interesting that the FCC sort of articulated this useful model, and sort of did not:

For a successful public auction of overlay licenses in the 3.7-3.98 GHz band, bidders need to know before an auction commences when they will get access to that currently occupied spectrum as well as the costs they will incur as a condition of their overlay license. (FCC C-Band Order [Feb. 7, 2020], par. 110)

A germ of truth, but note: Auction 107 also demonstrated just the reverse. Rights were sold prior to clearing the airwaves, and bidders—while liable for “incentive payments”—do not know with certainty when the frequencies will be available for their use. Risk is embedded, as it widely is in financial assets (corporate equity shares are efficiently traded despite wide disagreement on future earnings), and yet markets perform. Indeed, the “certainty” approach touted by the FCC in its language about a “successful public auction” has long deterred efficient reallocations, as the incumbents’ exiting process holds up arrival of the entrants. The central feature of the C-Band reallocation was not to create certainty, but to embed an overlay approach into the process. This draws incumbents and entrants together into positive-sum transactions (whether mediated by the FCC or negotiated party-to-party) where they cooperate to create new productive opportunities, sharing the gains.

The inspiration for the C-Band reallocation of satellite spectrum was bottom-up. As with so much of the radio spectrum, the band devoted to satellite distribution of video (relays to and from an array of broadcast and cable TV systems and networks) was old and tired. For decades, applications and systems were locked in by law. They consumed lots of bandwidth while ignoring the emergence of newer technologies like fiber optics (emphasis to underscore that products launched in the 1980s are still cutting-edge challenges for 2021 spectrum policy). Spying this mismatch, and seeking gains from trade, creative risk-takers petitioned the FCC.

In a mid-2017 request, computer chipmaker Intel and C-Band satellite carrier Intelsat (no corporate relationship) joined forces to ask for permission to expand the scope of satellite licenses. The proffered plan was for license holders to invest in spectrum economies by upgrading satellites and earth stations—magically creating new, unoccupied channels in prime mid-band frequencies perfect for highly valuable 5G services. All existing video transport services would continue, while society would enjoy way more advanced wireless broadband. All regulators had to do was allow “change of use” in existing licenses. Markets would do the rest: satellite operators would make efficient multi-billion-dollar investments, coordinating with each other and their customers, and then take bids from new users itching to access the prime 4 GHz spectrum. The transition to bold, new, more valuable applications would compensate legacy customers and service providers.

This “spectrum sharing” can spin gold – seizing on capitalist discovery and demand revelation in market bargains. Voila, the 21st century, delivered.

Well, yes and no. At first, the FCC filing was a yawner, the standard bureaucratic response. But this one took off when Chairman Pai—alertly, and in the public interest—embraced the proposal, putting it on the July 12, 2018 FCC meeting agenda. Intelsat’s market cap jumped from about $500 million to over $4.5 billion—visible evidence that the spectrum it was using was worth far more than the service it was providing, and that the company might realize some substantial fraction of the resource revaluation.[11] 

While the Pai FCC leaned in the proper policy direction, politics soon blew the process down. Congress denounced the “private auction” as a “windfall,” bellowing against the unfairness of allowing corporations (some foreign-owned!) to cash out. The populist message was upside-down. The social damage created by mismanagement of spectrum—millions of Americans paying more and getting less from wireless than otherwise, robbing ordinary citizens of vast consumer surplus—was being fixed by entrepreneurial initiative. Moreover, the public gains (lower prices plus innovation externalities spun off from liberated bandwidth) were undoubtedly far greater than any rents captured by the incumbent licensees. And a great bonus to spur future progress: rewards for those parties initiating and securing efficiency-enhancing rights will unleash vastly more productive activity.

But the populist winds—gale force and bipartisan—spun the FCC.

It was legally correct that Intelsat and its rival satellite carriers did not own the spectrum allocated to the C-Band. Indeed, that was the root of the problem. And here’s a fatal catch: in applying for broader spectrum property rights, they revealed a valuable discovery. The FCC, posing as referee, turned competitor and appropriated the proffered business plan on behalf of its client (the U.S. government), and then auctioned it to bidders. Regulators did tip the incumbents, whose help was still needed in reorganizing the C-Band, setting $3.3 billion as a fair price for “moving costs” (changing out technology to reduce their transmission footprints) and dangling another $9.7 billion in “incentive payments” not to dilly-dally. In total, carriers have bid some $93.9 billion, or $1.02 per MHz-Pop.[12] This is 4.7 times the price paid for the Priority Access Licenses (PALs) allocated 70 MHz in Auction 105 earlier in 2020.
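
The per-unit price is straightforward to check. Dividing gross bids by bandwidth times population covered—assuming a pop base of roughly 328 million, the approximate 2019 U.S. population (the precise base behind the published figure is not stated here)—gives:

$$\frac{\$93.9\ \text{billion}}{280\ \text{MHz} \times 328\ \text{million}} \approx \$1.02\ \text{per MHz-Pop}$$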

The TOTM assignment was not to evaluate Ajit Pai but to evaluate the Pai FCC and its spectrum policies. On that scale, great value was delivered by the Intel-Intelsat proposal, and the FCC’s alert endorsement, offset in some measure by the long-term losses that will likely flow from the dirigiste retreat to fossilized spectrum rights controlled by diktat.

Sharing Nicely

And that takes us to 2020’s Auction 105 (Citizens Broadband Radio Services, CBRS). The U.S. has lagged much of the world in allocating flexible-use spectrum rights in the 3.5 GHz band. Ireland auctioned rights to use 350 MHz in May 2017, and many countries did likewise between then and 2020, distributing far more than the 70 MHz allocated to the Priority Access Licenses (PALs); allocations ranged from 150 MHz to 390 MHz. The Pai FCC can plausibly assign the lag to “preexisting conditions.” Here, however, I will stress that the Pai FCC did not substantially further our understanding of the costs of “spectrum sharing” under coordinating devices imposed by the FCC.

All commercially valuable spectrum bands are shared. The most intensely shared, in the relevant economic sense, are those bands curated by mobile carriers. These frequencies are complemented by extensive network capital supplied by investors, and permit millions of users—including international roamers—to gain seamless connectivity. Unlicensed bands, alternatively, tend to separate users spatially, powering down devices to localize footprints. These limits work better in situations where users desire short transmissions, like a Bluetooth link from iPhone to headphone or when bits can be handed off to a wide area network by hopping 60 feet to a local “hot spot.” The application of “spectrum sharing” to imply a non-exclusive (or unlicensed) rights regime is, at best, highly misleading. Whenever conditions of scarcity exist, meaning that not all uses can be accommodated without conflict, some rationing follows. It is commonly done by price, behavioral restriction, or both.

In CBRS, the FCC has imposed three layers of “priority” access across the 3550-3700 MHz band. Certain government radars are assumed to be fixed and must be protected. When in use, these systems demand other wireless services stay silent on particular channels. Next in line are PAL owners, parties which have paid for exclusivity, but which are not guaranteed access to a given channel. These rights, which sold for about $4.5 billion, are allocated dynamically by a controller (a Spectrum Access System, or SAS). The radios and networks in use automatically and continuously check in to obtain spectrum-space permissions. Seven PALs, allocated 10 MHz each, have been assigned, 70 MHz in total. Finally, General Authorized Access (GAA) is given without limit or exclusivity to radio devices across the 80 MHz remaining in the band plus any PALs not in use. Some 5G phones are already equipped to use such bands on an unlicensed basis.
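
For readers who want the tiered-access logic in concrete form, here is a minimal sketch of how a controller might arbitrate the three layers. It is illustrative only: the tier names come from the text, while the class and function names are invented for the example, and the real SAS protocol is far more elaborate.

```python
# Toy model of the CBRS three-tier priority scheme described above.
# Illustrative only; all names here are invented for the example.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Channel:
    mhz: int
    incumbent_active: bool = False      # e.g., a government radar in use
    pal_holder: Optional[str] = None    # the party that paid for priority
    gaa_users: Set[str] = field(default_factory=set)

class ToySAS:
    """Grants or denies access requests, channel by channel."""

    def request(self, user: str, tier: str, ch: Channel) -> bool:
        if ch.incumbent_active:
            return False                # all other services must stay silent
        if tier == "PAL":
            if ch.pal_holder in (None, user):
                ch.pal_holder = user
                ch.gaa_users.clear()    # GAA users yield to the PAL owner
                return True
            return False
        if tier == "GAA":
            if ch.pal_holder is None:   # idle PAL channels are open to GAA
                ch.gaa_users.add(user)
                return True
            return False
        raise ValueError(f"unknown tier: {tier}")

# One 10 MHz channel: GAA gets in while the channel is idle, then is
# preempted once the PAL owner shows up.
ch, sas = Channel(mhz=10), ToySAS()
assert sas.request("phone-1", "GAA", ch)
assert sas.request("carrier-A", "PAL", ch)
assert not sas.request("phone-2", "GAA", ch)
```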

We shall see how the U.S. system works in comparison to alternatives. What is important to note is that the particular form of “spectrum sharing” is neither necessary nor free. As is standard outside the U.S., exclusive rights analogous to CMRS licenses could have been auctioned here, with U.S. government radars given vested rights.

One point that is routinely missed is that the decision to have the U.S. government partition the rights in three layers immediately conceded that U.S. government priority applications (for radar) would never shift. That is asserted as though it is a proposition that needs no justification, but it is precisely the sort of impediment to efficiency that has plagued spectrum reallocations for decades. It was, for instance, the 2002 assumption behind TV “white spaces”—that 402 MHz of TV Band frequencies was fixed in place, that the unused channels could never be repackaged and sold as exclusive rights and diverted to higher-valued uses. That unexamined assertion has since been belied by events, as seen in the reduction of the band from 402 MHz to 235 MHz following Auctions 73 (2008) and 1001/1002 (2016-17), as well as in the clear possibility that remaining TV broadcasts could today be entirely transferred to cable, satellite, and OTT broadband (as they have already, effectively, been). The problem in CBRS is that the rights now distributed—80 MHz of unlicensed access, with its protections of certain priority services—do not sprinkle the proper rights into the market such that positive-sum transitions can be negotiated. We’re stuck with whatever inefficiencies this “preexisting condition” of the 3.5 GHz band might endow, unless another decade-long FCC spectrum allocation can move things forward.[13]

Already visible is that the rights sold as PALs in CBRS are only about 20% of the value of rights sold in the C-Band. This differential reflects the power restrictions and overhead costs embedded in the FCC’s sharing rules for CBRS (involving dynamic allocation of the exclusive access rights conveyed in PALs) but avoided in C-Band. In the latter, the sharing arrangements are delegated to the licensees. Their owners reveal that they see these rights as more productive, with opportunities to host more services.
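
The “about 20%” figure is simply the reciprocal of the 4.7x price ratio reported earlier: 1/4.7 ≈ 0.21, or roughly one-fifth of the C-Band’s per-MHz-Pop price.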

There should be greater recognition of the relevant trade-offs in imposing coexistence rules. Yet, the Pai FCC succumbed in 5.9 GHz and in the 6 GHz bands to the tried-and-true options of Regulation Past. This was hugely ironic in the former, where the FCC had in 1999 imposed unlicensed access under rules that favored specific automotive informatics—Dedicated Short-Range Communications (DSRC)—that proved a 20-year bust. In diagnosing this policy blunder, the FCC then repeated it, splitting off a 45 MHz band with Wi-Fi-friendly unlicensed rules, and leaving 30 MHz to continue as the 1999 set-aside for DSRC. A liberalization of rights that would have allowed for a “private auction” to change the use of the band would have been the preferred approach. Instead, we are left with a partition of the band into rival rule regimes again established by administrative fiat.

This approach was then again imposed in the large 1.2 GHz unlicensed allocation surrounding 6 GHz, making a big 2020 splash. The FCC here assumed, categorically, that unlicensed rules are the best way to sponsor spectrum coordination. It ignored the costs of that coordination. And the commission appears to forget the progress it has made with innovative policy solutions, pulling in market forces through “overlay” licenses. These useful devices were used, in one form or another, to reallocate spectrum for 2G in Auction 4, AWS in Auction 66, millimeter bands in Auctions 102 and 103, the “TV Incentive Auction,” and the satellite C-Band in Auction 107, and have recently appeared as star players in the January 2021 FCC plan to rationalize the complex mix of rights scattered around the 2.5 GHz band.[14] That mix is too complicated for administrators to untangle; it could be transactionally more efficient to let market competitors figure it out.

The Future

The re-allocations in 5.9 GHz and the 6 GHz bands may yet host productive services. One can hope. But how will regulators know that the options allowed, and taken, are superior to the alternatives—suppressed by law for the next five, 10, or 20 years—that might have emerged had competitors had the right to test business models or technologies disfavored by regulators’ best-laid plans? That is the thinking that locked in the TV band, the C-Band for satellites, and the ITS band. It’s what we learned to be problematic throughout the political radio spectrum. We shall see, as Chairman Pai speculated, what future chapters these decisions leave for future editions.


[1]   https://www.fcc.gov/document/fcc-votes-establish-office-economics-analytics-0

[2]   https://www.fcc.gov/document/fcc-opens-office-economics-and-analytics

[3]   Thomas Hazlett, Economic Analysis at the Federal Communications Commission: A Simple Proposal to Atone for Past Sins, Resources for the Future Discussion Paper 11-23 (May 2011); David Honig, FCC Reorganization: How Replacing Silos with Functional Organization Would Advance Civil Rights, 3 University of Pennsylvania Journal of Law and Public Affairs 18 (Aug. 2018). 

[4] It is with great sadness that I note that Jerry Ellig, the 2017-18 FCC chief economist who might well have offered the most careful analysis of such a structural reform, will not be available for the task—one which he had already begun, writing this recent essay with two other FCC chief economists: Babette Boliek, Jerry Ellig and Jeff Prince, Improved economic analysis should be lasting part of Pai’s FCC legacy, The Hill (Dec. 29, 2020). Jerry’s sudden passing, on January 21, 2021, is a deep tragedy. Our family weeps for his wonderful wife, Sandy, and his precious daughter, Kat. 

[5]  As argued in: Thomas Hazlett, “The best way for the FCC to enable a 5G future,” Reuters (Jan. 17, 2018).

[6]  In 2018-19, FCC Auctions 101 and 102 offered licenses allocated 1,550 MHz of bandwidth in the 24 GHz and 28 GHz bands, although some of the bandwidth had previously been assigned and post-auction confusion over interference with adjacent frequency uses (in 24 GHz) has impeded some deployments.  In 2020, Auction 103 allowed competitive bidding for licenses to use 37, 39, and 47 GHz frequencies, 3400 MHz in aggregate.  Net proceeds to the FCC in 101, 102 and 103 were:  $700.3 million, $2.02 billion, and $7.56 billion, respectively.

[7]   I estimate that some 70 MHz of unlicensed bandwidth, allocated for television white space devices, was reduced pursuant to the Incentive Auction in 2017.  This, however, was baked into spectrum policy prior to the Pai FCC.

[8]   Notably, 64-71 GHz was allocated for unlicensed radio operations in the Spectrum Frontiers proceeding, adjacent to the 57-64 GHz unlicensed bands.  See Use of Spectrum Bands Above 24 GHz For Mobile Radio Services, et al., Report and Order and Further Notice of Proposed Rulemaking, 31 FCC Rcd 8014 (2016), 8064-65, para. 130.

[9]   The revenues reflect bids made in the Clock phase of Auction 107.  An Assignment Phase has yet to occur as of this writing.

[10]  The 2021 FCC Budget request, p. 34: “As of December 2019, the total amount collected for broader government use and deficit reduction since 1994 exceeds $117 billion.” 

[11]   Kerrisdale Management issued a June 2018 report that tied the proceeding to a dubious source: “to the market-oriented perspective on spectrum regulation – as articulated, for instance, by the recently published book The Political Spectrum by former FCC chief economist Thomas Winslow Hazlett – [that] the original sin of the FCC was attempting to dictate from on high what licensees should or shouldn’t do with their spectrum. By locking certain bands into certain uses, with no simple mechanism for change or renegotiation, the agency guaranteed that, as soon as technological and commercial realities shifted – as they do constantly – spectrum use would become inefficient.” 

[12]   Net proceeds will be reduced to reflect bidding credits extended to small businesses, but additional bids will be received in the Assignment Phase of Auction 107, still to be held. Likely totals will remain somewhere around current levels. 

[13]  The CBRS band is composed of frequencies at 3550-3700 MHz.  The top 50 MHz of that band was officially allocated in 2005 in a proceeding that started years earlier.  It was then curious that the adjacent 100 MHz was not included. 

[14] FCC Seeks Comment on Procedures for 2.5 GHz Reallocation (Jan. 13, 2021).

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Fellow of Law & Economics, ICLE); Eric Fruits (Chief Economist, ICLE; Adjunct Professor of Economics, Portland State University); and Kristian Stout (Associate Director, ICLE).]

The COVID-19 pandemic is changing the way consumers shop and the way businesses sell. These shifts in behavior, designed to “flatten the curve” of infection through social distancing, are happening across many (if not all) markets. But in many cases, it’s impossible to know now whether these new habits are actually achieving the desired effect. 

Take a seemingly silly example from Oregon. The state is one of only two in the U.S. that prohibit self-serve gas. In response to COVID-19, the state fire marshal announced it would temporarily suspend its enforcement of the prohibition. Public opinion fell into two broad groups. Those who want the option to pump their own gas argue that self-serve reduces the interaction between station attendants and consumers, thereby potentially reducing the spread of coronavirus. On the other hand, those who support the prohibition on self-serve have blasted the fire marshal’s announcement, arguing that all those dirty fingers pressing keypads and all those grubby hands on fuel pumps will likely increase the spread of the virus. 

Both groups may be right, but no one yet knows the net effect. We can only speculate. This picture becomes even more complex when considering other, alternative policies. For instance, would it be more effective for the state of Oregon to curtail gas station visits by forcing the closure of stations? Probably not. Would it be more effective to reduce visits through some form of rationing? Maybe. Maybe not. 

Policymakers will certainly struggle to efficiently decide how firms and consumers should minimize the spread of COVID-19. That struggle is an extension of Hayek’s knowledge problem: policymakers don’t have adequate knowledge of alternatives, preferences, and the associated risks. 

A Hayekian approach — relying on bottom-up rather than top-down solutions to the problem — may be the most appropriate solution. Allowing firms to experiment and iteratively find solutions that work for their consumers and employees (potentially adjusting prices and wages in the process) may be the best that policymakers can do.

The case of online retail platforms

One area where these complex tradeoffs are particularly acute is that of online retail. In response to the pandemic, many firms have significantly boosted their online retail capacity. 

These initiatives have been met with a mix of enthusiasm and disapproval. On the one hand, online retail enables consumers to purchase “essential” goods with a significantly reduced risk of COVID-19 contamination. It also allows “non-essential” goods to be sold despite the closure of brick-and-mortar stores. At first blush, this seems like a win-win situation for both consumers and retailers of all sizes, with large retailers ramping up their online operations and independent retailers switching to online platforms such as Amazon.

But there is a potential downside. Even contactless deliveries do present some danger, notably for warehouse workers who run the risk of being infected and subsequently passing the virus on to others. This risk is amplified by the fact that many major retailers, including Walmart, Kroger, CVS, and Albertsons, are hiring more warehouse and delivery workers to meet an increase in online orders. 

This has led some to question whether sales of “non-essential” goods (though the term is almost impossible to define) should be halted. The reasoning is that continuing to supply such goods needlessly puts lives at risk and reduces overall efforts to slow the virus.

Once again, these are incredibly complex questions. It is hard to gauge the overall risk of infection that is produced by the online retail industry’s warehousing and distribution infrastructure. In particular, it is not clear how effective social distancing policies, widely imposed within these workplaces, will be at achieving distancing and, in turn, reducing infections. 

More fundamentally, whatever this risk turns out to be, it is almost impossible to weigh it against an appropriate counterfactual. 

Online retail is not the only area where this complex tradeoff arises. An analogous reasoning could, for instance, also be applied to food delivery platforms. Ordering a meal on UberEats does carry some risk, but so do repeated trips to the grocery store. And there are legitimate concerns about the safety of food handlers working in close proximity to each other. These considerations make it hard for policymakers to strike the appropriate balance. 

The good news: at least some COVID-related risks are being internalized

But there is also some good news. Firms, consumers and employees all have some incentive to mitigate these risks. 

Consumers want to purchase goods without getting contaminated; employees want to work in safe environments; and firms need to attract both consumers and employees, while minimizing potential liability. These (partially) aligned incentives will almost certainly cause these economic agents to take at least some steps that mitigate the spread of COVID-19. This might notably explain why many firms imposed social distancing measures well before governments started to take notice (here, here, and here). 

For example, one first-order effect of COVID-19 is that it has become more expensive for firms to hire warehouse workers. Not only have firms moved up along the supply curve (by hiring more workers), but the curve itself has likely shifted upwards, reflecting the increased opportunity cost of warehouse work. Predictably, this has resulted in higher wages for workers. For example, Amazon and Walmart recently increased the wages they were paying warehouse workers, as have brick-and-mortar retailers such as Kroger.
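
A stylized decomposition makes the two effects explicit. Suppose, purely for illustration, a linear inverse labor-supply curve $w = a + bL$: hiring more workers raises wages by moving along the curve ($b\,\Delta L$), while the increased riskiness of warehouse work raises the curve’s intercept ($\Delta a$), so the observed wage change is

$$\Delta w = \underbrace{\Delta a}_{\text{supply shift}} + \underbrace{b\,\Delta L}_{\text{movement along the curve}}$$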

Along similar lines, firms and employees will predictably bargain — through various channels — over the appropriate level of protection for those workers who must continue to work in-person.

For example, some companies have found ways to reduce risk while continuing operations:

  • CNBC reports Tyson Foods is using walk-through infrared body temperature scanners to check employees’ temperatures as they enter three of the company’s meat processing plants. Other companies planning to use scanners include Goldman Sachs, UPS, Ford, and Carnival Cruise Lines.
  • Kroger’s Fred Meyer chain of supermarkets is limiting the number of customers in each of its stores to half the occupancy allowed under international building codes. Kroger will use infrared sensors and predictive analytics to monitor the new capacity limits. The company already uses the technology to estimate how many checkout lanes are needed at any given time.
  • Trader Joe’s limits occupancy in its stores. Customers waiting to enter are asked to stand six feet apart, using marked-off Trader Joe’s logos on the sidewalk. Shopping carts are separated into groups of “sanitized” and “to be cleaned.” Each cart is thoroughly sprayed with disinfectant and wiped down with a clean cloth.

In other cases, bargaining over the right level of risk-mitigation has been pursued through more coercive channels, such as litigation and lobbying:

  • A recently filed lawsuit alleges that managers at an Illinois Walmart store failed to alert workers after several employees began showing symptoms of COVID-19. The suit claims Walmart “had a duty to exercise reasonable care in keeping the store in a safe and healthy environment and, in particular, to protect employees, customers and other individuals within the store from contracting COVID-19 when it knew or should have known that individuals at the store were at a very high risk of infection and exposure.” 
  • According to CNBC, a group of legislators, unions and Amazon employees in New York wrote a letter to CEO Jeff Bezos calling on him to enact greater protections for warehouse employees who continue to work during the coronavirus outbreak. The Financial Times reports worker protests at Amazon warehouses in the U.S., France, and Italy. Worker protests have been reported at a Barnes & Noble warehouse. Several McDonald’s locations have been hit with strikes.
  • In many cases, worker concerns about health and safety have been conflated with long-simmering issues of unionization, minimum wage, flexible scheduling, and paid time-off. For example, several McDonald’s strikes were reported to have been organized by “Fight for $15.”

Sometimes, there is simply no mutually-advantageous solution. And businesses are thus left with no other option than temporarily suspending their activities: 

  • For instance, McDonald’s and Burger King have spontaneously closed their restaurants — including drive-thru and deliveries — in many European countries (here and here).
  • In Portland, Oregon, ChefStable, a restaurant group behind some of the city’s best-known restaurants, closed all 20 of its bars and restaurants for at least four weeks. In what he called a “crisis of conscience,” owner Kurt Huffman concluded it would be impossible to maintain safe social distancing for customers and staff.

This is certainly not to say that all is perfect. Employers, employees and consumers may have very strong disagreements about what constitutes the appropriate level of risk mitigation.

Moreover, the questions of balancing worker health and safety with that of consumers become all the more complex when we recognize that consumers and businesses are operating in a dynamic environment, making sometimes fundamental changes to reduce risk at many levels of the supply chain.

Likewise, not all businesses will be able to implement measures that mitigate the risk of COVID-19. For instance, “Big Business” might be in a better position to reduce risks to its workforce than smaller businesses. 

Larger firms tend to have the resources and economies of scale to make capital investments in temperature scanners or sensors. They have larger workforces where employees can, say, shift from stocking shelves to sanitizing shopping carts. Several large employers, including Amazon, Kroger, and CVS have offered higher wages to employees who are more likely to be exposed to the coronavirus. Smaller firms are less likely to have the resources to offer such wage premiums.

For example, Amazon recently announced that it would implement mandatory temperature checks, that it would provide employees with protective equipment, and that it would increase the frequency and intensity of cleaning for all its sites. And, as already mentioned above, Tyson Foods announced that it would install temperature scanners at a number of sites. It is not clear whether smaller businesses are in a position to implement similar measures. 

That’s not to say that small businesses can’t adjust. It’s just more difficult. For example, a small paint-your-own ceramics shop, Mimosa Studios, had to stop offering painting parties because of government mandated social distancing. One way it’s mitigating the loss of business is with a paint-at-home package. Customers place an order online, and the studio delivers the ceramic piece, paints, and loaner brushes. When the customer is finished painting, Mimosa picks up the piece, fires it, and delivers the finished product. The approach doesn’t solve the problem, but it helps mitigate the losses.

Conclusion

In all likelihood, we can’t actually avoid all bad outcomes. There is, of course, some risk associated with even well-resourced large businesses continuing to operate, even though some of them play a crucial role in coronavirus-related lockdowns. 

Currently, market actors are working within the broad outlines of lockdowns deemed necessary by policymakers. Given the intensely complicated risk calculation necessary to determine if any given individual truly needs an “essential” (or even a “nonessential”) good or service, the best thing that lawmakers can do for now is let properly motivated private actors continue to seek optimal outcomes together within the imposed constraints. 

So far, most individuals and the firms serving them are at least partially internalizing COVID-related risks. The right approach for lawmakers would be to watch this process and determine where it breaks down. Measures targeted to fix those breaches will almost inevitably outperform interventionist planning to determine exactly what is essential, what is nonessential, and who should be allowed to serve consumers in their time of need.

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning. 

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and its close as coming with the financial crisis of the late 2000s. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday down the line it can raise prices after it has run all retail competition out.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary if antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. But neither is a clear example of harm to consumers, nor can either be used to show a superior antitrust framework in Europe versus the United States.

In the case of airline mergers, Appelbaum argues that the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement, and that prices have stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, titled Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers, takes it a step further. Data from legacy U.S. airline mergers appear to show that these mergers have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition (that Europe has cheaper fares, and that concentration has led to worse outcomes for consumers in the United States) appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, this obscures how prices are distributed across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, titled U.S. vs. European Broadband Deployment: What Do the Data Say?, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people live to one another, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th place among the 29 countries studied (most of the others European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th out of the 29 countries, if data usage is included (Model 3), and ranks 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).

Country | Model 1 Price (Rank) | Model 2 Price (Rank) | Model 3 Price (Rank) | Model 4 Price (Rank)
Australia | $78.30 (28) | $82.81 (27) | $102.63 (26) | $84.45 (23)
Austria | $48.04 (17) | $60.59 (15) | $73.17 (11) | $74.02 (17)
Belgium | $46.82 (16) | $66.62 (21) | $75.29 (13) | $81.09 (22)
Canada | $69.66 (27) | $74.99 (25) | $92.73 (24) | $76.57 (19)
Chile | $33.42 (8) | $73.60 (23) | $83.81 (20) | $88.97 (25)
Czech Republic | $26.83 (3) | $49.18 (6) | $69.91 (9) | $60.49 (6)
Denmark | $43.46 (14) | $52.27 (8) | $69.37 (8) | $63.85 (8)
Estonia | $30.65 (6) | $56.91 (12) | $81.68 (19) | $69.06 (12)
Finland | $35.00 (9) | $37.95 (1) | $57.49 (2) | $51.61 (1)
France | $30.12 (5) | $44.04 (4) | $61.96 (4) | $54.25 (3)
Germany | $36.00 (12) | $53.62 (10) | $75.09 (12) | $66.06 (11)
Greece | $35.38 (10) | $64.51 (19) | $80.72 (17) | $78.66 (21)
Iceland | $65.78 (25) | $73.96 (24) | $94.85 (25) | $90.39 (26)
Ireland | $56.79 (22) | $62.37 (16) | $76.46 (14) | $64.83 (9)
Italy | $29.62 (4) | $48.00 (5) | $68.80 (7) | $59.00 (5)
Japan | $40.12 (13) | $53.58 (9) | $81.47 (18) | $72.12 (15)
Latvia | $20.29 (1) | $42.78 (3) | $63.05 (5) | $52.20 (2)
Luxembourg | $56.32 (21) | $54.32 (11) | $76.83 (15) | $72.51 (16)
Mexico | $35.58 (11) | $91.29 (29) | $120.40 (29) | $109.64 (29)
Netherlands | $44.39 (15) | $63.89 (18) | $89.51 (21) | $77.88 (20)
New Zealand | $59.51 (24) | $81.42 (26) | $90.55 (22) | $76.25 (18)
Norway | $88.41 (29) | $71.77 (22) | $103.98 (27) | $96.95 (27)
Portugal | $30.82 (7) | $58.27 (13) | $72.83 (10) | $71.15 (14)
South Korea | $25.45 (2) | $42.07 (2) | $52.01 (1) | $56.28 (4)
Spain | $54.95 (20) | $87.69 (28) | $115.51 (28) | $106.53 (28)
Sweden | $52.48 (19) | $52.16 (7) | $61.08 (3) | $70.41 (13)
Switzerland | $66.88 (26) | $65.01 (20) | $91.15 (23) | $84.46 (24)
United Kingdom | $50.77 (18) | $63.75 (17) | $79.88 (16) | $65.44 (10)
United States | $58.00 (23) | $59.84 (14) | $64.75 (6) | $62.94 (7)
Average | $46.55 | $61.70 | $80.24 | $73.73

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider these factors when comparing the European model of telecommunications to that of the United States. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There is no escaping mental models in understanding the world; the question is whether we are willing to change our minds when a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

Ursula von der Leyen has just announced the composition of the next European Commission. For tech firms, the headline is that Margrethe Vestager will not only retain her job as the head of DG Competition, she will also oversee the EU’s entire digital markets policy in her new role as Vice-President in charge of digital policy. Her promotion within the Commission as well as her track record at DG Competition both suggest that the digital economy will continue to be the fulcrum of European competition and regulatory intervention for the next five years.

The regulation (or not) of digital markets is an extremely important topic. Not only do we spend vast swaths of both our professional and personal lives online, but firms operating in digital markets will likely employ an ever-increasing share of the labor force in the near future.

Likely recognizing the growing importance of the digital economy, the previous EU Commission intervened heavily in the digital sphere over the past five years. This resulted in a series of high-profile regulations (including the GDPR, the platform-to-business regulation, and the reform of EU copyright) and competition law decisions (most notably the Google cases). 

Lauded by supporters of the administrative state, these interventions have nonetheless drawn flak from numerous corners. This includes foreign politicians (especially Americans) who see in these measures an attempt to protect the EU’s tech industry from its foreign rivals, as well as free market enthusiasts who argue that the old continent has moved further in the direction of digital paternalism.

Vestager’s increased role within the new Commission, the EU’s heavy regulation of digital markets over the past five years, and early pronouncements from Ursula von der Leyen all suggest that the EU is in for five more years of significant government intervention in the digital sphere.

Vestager the slayer of Big Tech

During her five years as Commissioner for competition, Margrethe Vestager has repeatedly been called the most powerful woman in Brussels (see here and here), and it is easy to see why. Wielding the heavy hammer of European competition and state aid enforcement, she has relentlessly attacked the world’s largest firms, especially America’s so-called “Tech Giants”.

The record-breaking fines imposed on Google were probably her most high-profile victory. When Vestager entered office, in 2014, the EU’s case against Google had all but stalled. The Commission and Google had spent the best part of four years haggling over a potential remedy that was ultimately thrown out. Grabbing the bull by the horns, Margrethe Vestager made the case her own. 

Five years, three infringement decisions, and 8.25 billion euros later, Google probably wishes it had managed to keep the 2014 settlement alive. While Vestager’s supporters claim that justice was served, Barack Obama and Donald Trump, among others, branded her a protectionist (although, as Geoffrey Manne and I have noted, the evidence for this is decidedly mixed). Critics also argued that her decisions would harm innovation and penalize consumers (see here and here). Regardless, the case propelled Vestager into the public eye. It turned her into one of the most important political forces in Brussels. Cynics might even suggest that this was her plan all along.

But Google is not the only tech firm to have squared off with Vestager. Under her watch, Qualcomm was slapped with a total of €1.239 billion in fines. The Commission also opened an investigation into Amazon’s operation of its online marketplace. If previous cases are anything to go by, the probe will most probably end with a headline-grabbing fine. The Commission even launched a probe into Facebook’s planned Libra cryptocurrency, even though the currency has yet to launch, and recent talk suggests it may never do so. Finally, in the area of state aid enforcement, the Commission ordered Ireland to recover €13 billion in allegedly undue tax benefits from Apple.

Margrethe Vestager also initiated a large-scale consultation on competition in the digital economy. The ensuing report concluded that the answer was more competition enforcement. Its findings will likely be cited by the Commission as further justification to ramp up its already significant competition investigations in the digital sphere.

Outside of the tech sector, Vestager has shown that she is not afraid to adopt controversial decisions. Blocking the proposed merger between Siemens and Alstom notably drew the ire of Angela Merkel and Emmanuel Macron, as the deal would have created a European champion in the rail industry (a key political demand in Germany and France). 

These numerous interventions all but guarantee that Vestager will not be pushing for light touch regulation in her new role as Vice-President in charge of digital policy. Vestager is also unlikely to put a halt to some of the “Big Tech” investigations that she herself launched during her previous spell at DG Competition. Finally, given her evident political capital in Brussels, it’s a safe bet that she will be given significant leeway to push forward landmark initiatives of her choosing. 

Vestager the prophet

Beneath these attempts to rein in “Big Tech” lies a deeper agenda that is symptomatic of the EU’s current zeitgeist. Over the past couple of years, the EU has been steadily blazing a trail in digital market regulation (although much less so in digital market entrepreneurship and innovation). Underlying this push is a worldview that sees consumers and small startups as the uninformed victims of gigantic tech firms. True to form, the EU’s solution to this problem is more regulation and government intervention. This is unlikely to change given the Commission’s new (old) leadership.

If digital paternalism is the dogma, then Margrethe Vestager is its prophet. As Thibault Schrepel has shown, her speeches routinely call for digital firms to act “fairly”, and for policymakers to curb their “power”. According to her, it is our democracy that is at stake. In her own words, “you can’t sensibly talk about democracy today, without appreciating the enormous power of digital technology”. And yet, if history tells us one thing, it is that heavy-handed government intervention is anathema to liberal democracy. 

The Commission’s Google decisions neatly illustrate this worldview. For instance, in Google Shopping, the Commission concluded that Google was coercing consumers into using its own services, to the detriment of competition. But the Google Shopping decision focused entirely on competitors, and offered no evidence showing actual harm to consumers (see here). Could it be that users choose Google’s products because they actually prefer them? Rightly or wrongly, the Commission went to great lengths to dismiss evidence that arguably pointed in this direction (see here, §506-538).

Other European forays into the digital space are similarly paternalistic. The General Data Protection Regulation (GDPR) assumes that consumers are ill-equipped to decide what personal information they share with online platforms. Cue a deluge of time-consuming consent forms and cookie-related pop-ups. The jury is still out on whether the GDPR has improved users’ privacy. But it has been extremely costly for businesses — American S&P 500 companies and UK FTSE 350 companies alone spent an estimated total of $9 billion to comply with the GDPR — and has at least temporarily slowed venture capital investment in Europe.

Likewise, the recently adopted Regulation on platform-to-business relations operates under the assumption that small firms routinely fall prey to powerful digital platforms: 

Given that increasing dependence, the providers of those services [i.e. digital platforms] often have superior bargaining power, which enables them to, in effect, behave unilaterally in a way that can be unfair and that can be harmful to the legitimate interests of their business users and, indirectly, also of consumers in the Union. For instance, they might unilaterally impose on business users practices which grossly deviate from good commercial conduct, or are contrary to good faith and fair dealing.

But the platform-to-business Regulation conveniently overlooks the fact that economic opportunism is a two-way street. Small startups are equally capable of behaving in ways that greatly harm the reputation and profitability of much larger platforms. The Cambridge Analytica leak springs to mind. And what’s “unfair” to one small business may offer massive benefits to other businesses and consumers.

Whatever you make of the underlying merits of these individual policies, we should at least recognize that they are part of a greater whole, in which Brussels is regulating ever-greater aspects of our online lives — and not clearly for the benefit of consumers.

With Margrethe Vestager now overseeing even more of these regulatory initiatives, readers should expect more of the same. The Mission Letter she received from Ursula von der Leyen is particularly enlightening in that respect: 

I want you to coordinate the work on upgrading our liability and safety rules for digital platforms, services and products as part of a new Digital Services Act…. 

I want you to focus on strengthening competition enforcement in all sectors. 

A hard rain’s a-gonna fall… on Big Tech

Today’s announcements all but confirm that the EU will stay its current course in digital markets. This is unfortunate.

Digital firms currently provide consumers with tremendous benefits at no direct charge. A recent study shows that median users would need to be paid €15,875 to give up search engines for a year. They would also require €536 in order to forgo WhatsApp for a month, €97 for Facebook, and €59 to drop digital maps for the same duration. 

By continuing to heap ever more regulations on successful firms, the EU risks killing the goose that laid the golden egg. This is not just a theoretical possibility. The EU’s policies have already put technology firms under huge stress, and it is not clear that this has always been outweighed by benefits to consumers. The GDPR has notably caused numerous foreign firms to stop offering their services in Europe. And the EU’s Google decisions have forced Google to start charging device manufacturers for some of its apps. Are these really victories for European consumers?

It is also worth asking why there are so few European leaders in the digital economy. Not so long ago, European firms such as Nokia and Ericsson were at the forefront of the digital revolution. Today, with the possible exception of Spotify, the EU has fallen further down the global pecking order in the digital economy. 

The EU knows this, and plans to invest €100 billion to boost European tech startups. But these sums will be all but wasted if excessive regulation threatens the long-term competitiveness of European startups.

So if more of the same government intervention isn’t the answer, then what is? Recognizing that consumers have agency and are responsible for their own decisions might be a start. If you don’t like Facebook, close your account. Want a search engine that protects your privacy? Try DuckDuckGo. If YouTube and Spotify’s suggestions don’t appeal to you, create your own playlists and turn off the autoplay functions. The digital world has given us more choice than we could ever have dreamt of; but this comes with responsibility. Both Margrethe Vestager and the European institutions have often seemed oblivious to this reality. 

If the EU wants to turn itself into a digital economy powerhouse, it will have to switch towards light-touch regulation that allows firms to experiment with disruptive services, flexible employment options, and novel monetization strategies. But getting there requires a fundamental rethink — one that the EU’s previous leadership refused to contemplate. Margrethe Vestager’s dual role within the next Commission suggests that change isn’t coming any time soon.

The Economist takes on “sin taxes” in a recent article, “‘Sin’ taxes—eg, on tobacco—are less efficient than they look.” The article has several lessons for policy makers eyeing taxes on e-cigarettes and other vapor products.

Historically, taxes had the key purpose of raising revenues. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In The Wealth of Nations, Adam Smith notes:

Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.

The Economist notes that in 1764, a fiscal crisis driven by wars in North America led Britain’s Parliament to begin enforcing tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.

Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.

Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.

[Figure: estimated price elasticities of demand for “sin” goods, via The Economist]

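To make the arithmetic explicit (a worked restatement of the figures above, not a claim from The Economist’s piece), the own-price elasticity of demand is the ratio of the percentage change in quantity sold to the percentage change in price. For tobacco:

\varepsilon = \frac{\%\Delta Q}{\%\Delta P} \approx \frac{-0.5\%}{+1\%} = -0.5

Because |\varepsilon| < 1, demand is inelastic: consumption falls less than proportionally when prices rise, so total spending (and tax revenue) rises with the tax. That is what makes tobacco a reliable revenue-raiser but a comparatively weak target for behavioral change.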
Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states, as well as increased illegal interstate sales and smuggling. The Economist reports:

After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.

Advocates of “sin” taxes argue that consumption of tobacco, alcohol, and sugar imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into a public cost.

According to the Centers for Disease Control and Prevention (CDC), smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is attributable to secondhand smoke exposure.

On the other hand, The Economist points out:

Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurance, which charges steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades, many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.
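A minimal sketch of what such an accounting would involve (my formulation, not the CDC’s): the policy-relevant figure is the present value of the stream of future costs attributable to today’s smokers,

PV = \sum_{t=0}^{T} \frac{C_t}{(1+r)^t}

where C_t is the smoking-attributable cost incurred in year t and r is the discount rate. Because most smoking-related illness arrives decades after the smoking itself, even a moderate discount rate substantially shrinks the headline cost figures.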

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between the taxes levied on tobacco and the government’s spending of those revenues on smoking-related costs.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.

Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”

Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policy makers considering a tax on vapor products should account for the following factors.

  • The current market for e-cigarettes and heat-not-burn products is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction. Moreover, much of the current research likely reflects early adopters and higher-income consumer groups. As such, the current empirical data based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
  • The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds the median estimated own-price elasticity is -1.096, meaning something close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with a one percent decline in e-cigarette sales (a worked illustration follows below). Many of those lost sales would be shifted to purchases of combustible cigarettes.
  • Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, and the first was published in 2014. As a result, the literature reports a wide range of estimated elasticities, which calls into question the reliability of the published estimates, as shown in the figure below. Because this is a relatively young area of research, the policy debate would benefit from additional studies that involve larger samples with better statistical power, reflect the dynamic nature of this new product category, and account for the wide variety of vapor products.

[Figure: range of published own-price elasticity estimates for vapor products]

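As a back-of-the-envelope illustration of what the median estimate above implies (my arithmetic, not a result from any single study), consider a tax that raises vapor prices by 10 percent:

\%\Delta Q \approx \varepsilon \times \%\Delta P = -1.096 \times 10\% \approx -11\%

Because |\varepsilon| exceeds 1, demand is elastic: total spending on vapor products falls as prices rise, so the tax base shrinks nearly as fast as the rate climbs. And, as noted above, some of the foregone vapor purchases would reappear as combustible cigarette sales, cutting against the harm-reduction rationale.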
With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products are substitutes for combustible tobacco products. It may turn out that taxes on vapor products are, as The Economist says of “sin” taxes generally, less efficient than they look.

The terms of the United Kingdom’s (UK) exit from the European Union (EU) – “Brexit” – are of great significance not just to UK and EU citizens, but for those in the United States and around the world who value economic liberty (see my Heritage Foundation memorandum giving the reasons why, here).

If Brexit is to promote economic freedom and enhanced economic welfare, Brexit negotiations between the UK and the EU must not limit the ability of the United Kingdom to pursue (1) efficiency-enhancing regulatory reform and (2) trade liberalizing agreements with non-EU nations.  These points are expounded upon in a recent economic study (The Brexit Inflection Point) by the non-profit UK think tank the Legatum Institute, which has produced an impressive body of research on the benefits of Brexit, if implemented in a procompetitive, economically desirable fashion.  (As a matter of full disclosure, I am a member of Legatum’s “Special Trade Commission,” which “seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  Members of the Special Trade Commission are unpaid – they serve on a voluntary pro bono basis.)

Unfortunately, however, leading UK press commentators have urged the UK Government to accede to a full harmonization of UK domestic regulations and trade policy with the EU.  Such a deal would be disastrous.  It would prevent the UK from entering into mutually beneficial trade liberalization pacts with other nations or groups of nations (e.g., with the U.S. and with the members of the Transpacific Partnership (TPP) trade agreement), because such arrangements by necessity would lead to a divergence with EU trade strictures.  It would also preclude the UK from unilaterally reducing harmful regulatory burdens that are a byproduct of economically inefficient and excessive EU rules.  In short, it would be antithetical to economic freedom and economic welfare.

Notably, in a November 30 article (Six Impossible Notions About “Global Britain”), a well-known business journalist, Martin Wolf of the Financial Times, sharply criticized The Brexit Inflection Point’s recommendation that the UK pursue trade and regulatory policies that would diverge from EU standards.  In particular, Wolf characterized as an “impossible thing” Legatum’s point that the UK should not “‘allow itself to be bound by the EU’s negotiating mandate.’  We all now know this is infeasible.  The EU holds the cards and it knows it holds the cards. The Legatum authors still do not.”

Shanker Singham, Director of Economic Policy and Prosperity Studies at Legatum, brilliantly responded to Wolf’s critique in a December 4 article (published online by CAPX) entitled A Narrow-Minded Brexit Is Doomed to Fail.  Singham’s trenchant analysis merits being set forth in its entirety (by permission of the author):

“Last week, the Financial Times’s chief economics commentator, Martin Wolf, dedicated his column to criticising The Brexit Inflection Point, a report for the Legatum Institute in which Victoria Hewson, Radomir Tylecote and I discuss what would constitute a good end state for the UK as it seeks to exercise an independent trade and regulatory policy post Brexit, and how we get from here to there.

We write these reports to advance ideas that we think will help policymakers as they tackle the single biggest challenge this country has faced since the Second World War. We believe in a market place of ideas, and we welcome challenge. . . .

[W]e are thankful that Martin Wolf, an eminent economist, has chosen to engage with the substance of our arguments. However, his article misunderstands the nature of modern international trade negotiations, as well as the reality of the European Union’s regulatory system – and so his claim that, like the White Queen, we “believe in impossible things” simply doesn’t stack up.

Mr Wolf claims there are six impossible things that we argue. We will address his rebuttals in turn.

But first, in discussions about the UK’s trade policy, it is important to bear in mind that the British government is currently discussing the manner in which it will retake its independent WTO membership. This includes agricultural import quotas, and its WTO rectification processes with other WTO members.

If other countries believe that the UK will adopt the position of maintaining regulatory alignment with the EU, as advocated by Mr Wolf and others, the UK’s negotiating strategy would be substantially weaker. It would quite wrongly suggest that the UK will be unable to lower trade barriers and offer the kind of liberalisation that our trading partners seek and that would work best for the UK economy. This could negatively impact both the UK and the EU’s ongoing discussions in the WTO.

Has the EU’s trading system constrained growth in the World?

The first impossible thing Mr Wolf claims we argue is that the EU system of protectionism and harmonised regulation has constrained economic growth for Britain and the world. He is right to point out that the volume of world trade has increased, and the UK has, of course, experienced GDP growth while a member of the EU.

However, as our report points out, the EU’s prescriptive approach to regulation, especially in the recent past (for example, its approach on data protection, audio-visual regulation, the restrictive application of the precautionary principle, REACH chemicals regulation, and financial services regulations to name just a few) has led to an increase in anti-competitive regulation and market distortions that are wealth destructive.

As the OECD notes in various reports on regulatory reform, regulation can act as a behind-the-border barrier to trade and impede market openness for trade and investment. Inefficient regulation imposes unnecessary burdens on firms, increases barriers to entry, impacts on competition and incentives for innovation, and ultimately hurts productivity. The General Data Protection Regulation (GDPR) is an example of regulation that is disproportionate to its objectives; it is highly prescriptive and imposes substantial compliance costs for business that want to use data to innovate.

Rapid growth during the post-war period is in part thanks to the progressive elimination of border trade barriers. But, in terms of wealth creation, we are no longer growing at that rate. Since before the financial crisis, measures of actual wealth creation (not GDP which includes consumer and government spending) such as industrial output have stalled, and the number of behind-the-border regulatory barriers has been increasing.

The global trading system is in difficulty. The lack of negotiation of a global trade round since the Uruguay Round, the lack of serious services liberalisation in either the built-in agenda of the WTO or sectorally following on from the Basic Telecoms Agreement and its Reference Paper on Competition Safeguards in 1997 has led to an increase in behind-the-border barriers and anti-competitive distortions and regulation all over the world. This stasis in international trade negotiations is an important contributory factor to what many economists have talked about as a “new normal” of limited growth, and a global decline in innovation.

Meanwhile the EU has sought to force its regulatory system on the rest of the world (the GDPR is an example of this). If it succeeds, the result would be the kind of wealth destruction that pushes more people into poverty. It is against this backdrop that the UK is negotiating with both the EU and the rest of the world.

The question is whether an independent UK, the world’s sixth biggest economy and second biggest exporter of services, is able to contribute to improving the dynamics of the global economic architecture, which means further trade liberalisation. The EU is protectionist against outside countries, which is antithetical to the overall objectives of the WTO. This is true in agriculture and beyond. For example, the EU imposes tariffs on cars at four times the rate applied by the US, while another large auto manufacturing country, Japan, has unilaterally removed its auto tariffs.

In addition, the EU27 represents a declining share of UK exports, which is rather counter-intuitive for a Customs Union and single market. In 1999, the EU represented 55 per cent of UK exports, and by 2016, this was 43 per cent. That said, the EU will remain an important, albeit declining, market for the UK, which is why we advocate a comprehensive free trade agreement with it.

Can the UK secure meaningful regulatory recognition from the EU without being identical to it?

Second, Mr Wolf suggests that regulatory recognition between the UK and EU is possible only if there is harmonisation or identical regulation between the UK and EU.

This is at odds with WTO practice, stretching back to its rules on domestic laws and regulation as encapsulated in Article III of the GATT and Article VI of the GATS, and as expressed in the Technical Barriers to Trade (TBT) and Sanitary and Phytosanitary (SPS) agreements.

This is the critical issue. The direction of travel of international trade thinking is towards countries recognising each other’s regulatory systems if they achieve the same ultimate goal of regulation, even if the underlying regulation differs, and to regulate in ways that are least distortive to international trade and competition. There will be areas where this level of recognition will not be possible, in which case UK exports into the EU will of course have to satisfy the standards of the EU. But even here we can mitigate the trade costs to some extent by Mutual Recognition Agreements on conformity assessment and market surveillance.

Had the US taken the view that it would not receive regulatory recognition unless their regulatory systems were the same, the recent agreement on prudential measures in insurance and reinsurance services between the EU and US would not exist. In fact this point highlights the crucial issue which the UK must successfully negotiate, and one in which its interests are aligned with other countries and with the direction of travel of the WTO itself. The TBT and SPS agreements broadly provide that mutual recognition should not be denied where regulatory goals are aligned but technical regulation differs.

Global trade and regulatory policy increasingly looks for regulation that promotes competition. The EU is on a different track, as the GDPR demonstrates. This is the reason that both the Canada-EU agreement (CETA) and the EU offer in the Trade in Services agreement (TiSA) does not include new services. If GDPR were to become the global standard, trade in data would be severely constrained, slowing the development of big data solutions, the fourth industrial revolution, and new services trade generally.

As many firms recognise, this would be extremely damaging to global prosperity. In arguing that regulatory recognition is only available if the UK is fully harmonised with the EU, Mr Wolf may be in harmony with the EU approach to regulation. But that is exactly the approach that is damaging the global trading environment.

Can the UK exercise trade policy leadership?

Third, Mr Wolf suggests that other countries do not, and will not, look to the UK for trade leadership. He cites the US’s withdrawal from the trade negotiating space as an example. But surely the absence of the world’s biggest services exporter means that the world’s second biggest exporter of services will be expected to advocate for its own interests, and argue for greater services liberalisation.

Mr Wolf believes that the UK is a second-rank power in decline. We take a different view of the world’s sixth biggest economy, the financial capital of the world and the second biggest exporter of services. As former New Zealand High Commissioner, Sir Lockwood Smith, has said, the rest of the world does not see the UK as the UK too often seems to see itself.

The global companies that have their headquarters in the UK do not see things the same way as Mr Wolf. In fact, the lack of trade leadership since 1997 means that a country with significant services exports would be expected to show some leadership.

Mr Wolf’s point is that far from seeking to grandiosely lead global trade negotiations, the UK should stick to its current knitting, which consists of its WTO rectification, and includes the negotiation of its agricultural import quotas and production subsidies in agriculture. This is perhaps the most concerning part of his argument. Yes, the UK must rectify its tariff schedules, but for that process to be successful, especially on agricultural import quotas, it must be able to demonstrate to its partners that it will be able to grant further liberalisation in the near term future. If it can’t, then its trading partners will have no choice but to demand as much liberalisation as they can secure right now in the rectification process.

This will complicate that process, and cause damage to the UK as it takes up its independent WTO membership. Those WTO partners who see the UK as vulnerable on this point will no doubt see validation in Mr Wolf’s article and assume it means that no real liberalisation will be possible from the UK. The EU should note that complicating this process for the UK will not help the EU in its own WTO processes, where it is vulnerable.

Trade negotiations are dynamic not static and the UK must act quickly

Fourth, Mr Wolf suggests that the UK is not under time pressure to “escape from the EU”.  This statement does not account for how international trade negotiations work in practice. In order for countries to cooperate with the UK on its WTO rectification, and its TRQ negotiations, as well to seriously negotiate with it, they have to believe that the UK will have control over tariff schedules and regulatory autonomy from day one of Brexit (even if we may choose not to make changes to it for an implementation period).

If non-EU countries think that the UK will not be able to exercise its freedom for several years, they will simply demand their pound of flesh in the negotiations now, and get on with the rest of their trade policy agenda. Trade negotiations are not static. The US executive could lose trade-negotiating authority in the summer of next year if the NAFTA renegotiation is not going well. Other countries will seek to accede to the Trans Pacific Partnership (TPP). China is moving forward with its Regional Cooperation and Economic Partnership, which does not meaningfully touch on domestic regulatory barriers. Much as we might criticise Donald Trump, his administration has expressed strong political will for a UK-US agreement, and in that regard has broken with traditional US trade policy thinking. The UK has an opportunity to strike and must take it.

The UK should prevail on the EU to allow Customs Agencies to be inter-operable from day one

Fifth, with respect to the challenges raised on customs agencies working together, our report argued that UK customs and the customs agencies of the EU member states should discuss customs arrangements at a practical and technical level now. What stands in the way of this is the EU’s stubbornness. Customs agencies are in regular contact on a business-as-usual basis, so the inability of UK and member-state customs agencies to talk to each other about the critical issue of new arrangements would seem to border on negligence. Of course, the EU should allow member states to have these critical conversations now.  Given the importance of customs agencies interoperating smoothly from day one, the UK Government must press its case with the European Commission to allow such conversations to start happening as a matter of urgency.

Does the EU hold all the cards?

Sixth, Mr Wolf argues that the EU holds all the cards and knows it holds all the cards, and therefore disagrees with our claim that the UK should “not allow itself to be bound by the EU’s negotiating mandate”. As with his other claims, Mr Wolf finds himself agreeing with the EU’s negotiators. But that does not make him right.

While absence of a trade deal will of course damage UK industries, the cost to EU industries is also very significant. Beef and dairy in Ireland, cars and dairy in Bavaria, cars in Catalonia, textiles and dairy in Northern Italy – all over Europe (and in politically sensitive areas), industries stand to lose billions of Euros and thousands of jobs. This is without considering the impact of no financial services deal, which would increase the cost of capital in the EU, aborting corporate transactions and raising the cost of the supply chain. The EU has chosen a mandate that risks neither party getting what it wants.

The notion that the EU is a masterful negotiator, while the UK’s negotiators are hopeless is not the global view of the EU and the UK. Far from it. The EU in international trade negotiations has a reputation for being slow moving, lacking in creative vision, and unable to conclude agreements. Indeed, others have generally gone to the UK when they have been met with intransigence in Brussels.

What do we do now?

Mr Wolf’s argument amounts to a claim that the UK is not capable of the kind of further and deeper liberalisation that its economy would suggest is both possible and highly desirable both for the UK and the rest of the world. According to Mr Wolf, the UK can only consign itself to a highly aligned regulatory orbit around the EU, unable to realise any other agreements, and unable to influence the regulatory system around which it revolves, even as that system becomes ever more prescriptive and anti-competitive. Such a position is at odds with the facts and would guarantee a poor result for the UK and also cause opportunities to be lost for the rest of the world.

In all of our [Legatum Brexit-related] papers, we have started from the assumption that the British people have voted to leave the EU, and the government is implementing that outcome. We have then sought to produce policy recommendations based on what would constitute a good outcome as a result of that decision. This can be achieved only if we maximise the opportunities and minimise the disruptions.

We all recognise that the UK has embarked on a very difficult process. But there is a difference between difficult and impossible. There is also a difference between tasks that must be done and take time, and genuine negotiation points. We welcome the debate that comes from constructive challenge of our proposals; and we ask in turn that those who criticise us suggest alternative plans that might achieve positive outcomes. We look forward to the opportunity of a broader debate so that collectively the country can find the best path forward.”


U.S. international trade law has various statutory mechanisms to deal with unfair competition.  Regrettably, American trade law (and, for that matter, the trade laws of other nations) has a history of being deployed in a mercantilist fashion to further the interests of American producer interests, rather than consumer interests and aggregate economic welfare.  That need not, however, necessarily be the case.

For example, instead of penalizing more efficient imports, American antidumping law could be reoriented to deal only with true predatory pricing, thereby promoting free market interests (see my proposal here).  And section 337 of the Tariff Act, directed at “unfair methods of competition” in import trade, could be employed in a non-protectionist manner that enhances market efficiency by focusing exclusively on foreign harm to U.S. intellectual property (IP) rights (see my proposal here).

Countervailing duty (CVD) law, which applies tariffs to counteract foreign government subsidies, could be a force for eliminating government-imposed competitive distortions – and for discouraging governments from conferring subsidies on favored industries or firms in the first place.  In practice, however, significant distortive government subsidies to key industries have persisted in the face of CVD statutes.  The application of countervailing duties and the raising of CVD disputes to the World Trade Organization have proven inadequate both to curb governments’ persistent efforts to subsidize corporate favorites and to prevent trading partners from bestowing similar largesse on their national champions.

Among the beneficiaries of major subsidies that lead to international trade disputes, the commercial aircraft sector, dominated by the longstanding Boeing and Airbus duopoly, stands out.  A recent article by trade law expert Shanker Singham, Director of Economic Policy and Prosperity Studies at the United Kingdom’s Legatum Institute, highlights the economic deficiencies revealed by the most recent battle in the ongoing commercial aircraft “subsidies war” saga.

Specifically, Singham suggests replacing today’s price-focused trade remedies with a “trade remedy law based on evaluating distortions and their effects”.  Singham’s article, “America’s Protectionism Is Damaging British Interests,” is worth a careful read:

Theresa May was recently in Canada meeting the Canadian PM, Justin Trudeau, to discuss how they should react to a trade case that Boeing has brought against Bombardier, Canada’s aerospace manufacturer. The case could affect 4,000 jobs in Bombardier’s Belfast facility. From Belfast, this might look like the vagaries of international trade, but the real story runs deeper.

Competition among producers of aircraft has been fierce, and has also been often accompanied by complaints about state subsidies and other trade distortions. Civil aviation is a sector that has been plagued by government interventions all over the world, and to say that the playing field is not level is an understatement.

While Airbus subsidies are its usual target, Boeing has recently turned its fire onto Bombardier, claiming that the Canadian jet manufacturer has dumped product into the US market. Boeing is citing US trade remedy laws, the price-based focus of which makes them prone to this sort of protectionist abuse.

The UK has been dragged into the row because jobs in Belfast depend on the production of key inputs into the Bombardier plane. So just as the people of Northern Ireland are struggling with Brexit, they face a fresh concern not of their own making.

Our recently released [Legatum Institute] paper on Northern Ireland discusses the need to find ways of promoting economic activity in Northern Ireland using Special Economic Zones, among other ways of minimising the costs of Brexit. And one idea is that the people of Northern Ireland should benefit from UK-US trade agreements as we set out in our Transatlantic partnership paper.

But allowing the abuse of notoriously protectionist trade remedy laws in the US to have a completely unjustifiable and knock-on effect in Northern Ireland would not indicate the good UK-US trade relations that the Trump administration has promised.

The Prime Minister has recognised the danger, and raised the issue in a call with President Trump, as well as with Trudeau this week. Voices within her own party, and the media, are calling for her to take a tougher line against Boeing to protect those jobs in Northern Ireland, and others in the supply chain across the UK.

But what could the Prime Minister do?

The case highlights the trade barrier that the trade remedies themselves pose and shows why reform is necessary. Given that new UK trade remedy laws must be developed as a result of Brexit, and the US-UK agreement, here is an excellent opportunity to deal with those government interventions that distort trade by focusing on the source of the problem – and not on pricing (as current trade remedy laws do).

For trade to be fair, we need to make sure that distortions are reduced in all our markets, and that any trade remedies we use are designed to deal with these distortions.

In the case of the production of aircraft (large-body), Boeing and Airbus have been at each other’s throats, each maintaining that the other is subsidised or supported by governments. Recently, Airbus lost a case in the WTO where it was arguing that Boeing’s Washington state incentives violated WTO rules on subsidies. That case was in response to a series of cases which Boeing had brought against Airbus. It highlights the problem of the WTO’s approach to subsidies and government support in general.

Whether the government privilege or grant is given federally or through a state, what matters is whether the cost of production has been reduced by ordinary business processes and efficiency, or whether in fact it has been reduced through government action. Viewed through this lens, very few aircraft manufacturers have clean hands.

However, while these distortions abound, bringing trade remedy cases that ignore the complainants’ own network of distortions and subsidies is patently unfair. The Boeing case has effects in Canada, but because we are in a world of competing global supply chains, these effects reverberate around the world.

All the suppliers to Bombardier, including those based in Northern Ireland, are adversely affected when US firms use the US trade remedy laws to damage trade between nations. These laws were written at a time when we did not live in a world of global supply chains, but rather a world where firms produced products in country A and sold them in country B.  They do not fit within our new world of complex supply chains.

It is high time that countries around the world ensured that their domestic policies and their external trade policies lined up. Countries such as the US cannot argue that they intend to do trade deals with the UK, if their domestic measures damage the interests of that trading partner.

In fact, the UK and the US have an opportunity here to use trade remedy measures to attack products from companies whose costs are artificially lowered as a result of government distortion, as opposed to being more competitive. Boeing’s case, however, does not differentiate between the two – which is why it is flawed.

In the UK, there is talk of using a public interest test in the application of trade remedy laws. Such a test could look at the impact of the use of these remedies on international trade and on consumers.

Theresa May has argued for industrial strategy in ways that give those of us who believe in the power of free trade and free markets pause. But in this case, the most basic industrial strategy has to be to defend UK production, such as the plant in Belfast, from the effects of distortions in other markets, and the abuse of trade remedy laws.

A trade remedy law based on evaluating distortions and their effects would prevent this. It is something the UK and US may be able to agree on, and it is certainly something on which the UK could lead by example.

If the UK government seeks to protect its workers in this case, this should not be seen as a protectionist gesture. It would be a necessary response to US protectionism. As long as countries keep laws on their books that damage consumers and disrupt supply chains, other countries may seek to retaliate against the offender by other means.

World trade is under sufficient threat at the moment not to freight it down with additional and quite unnecessary challenges, such as the over-vigorous use of anti-dumping laws. The UK has a great opportunity to lead this debate as it formulates its own independent trade policy.

As Singham’s article suggests, U.S.-UK free trade negotiations made possible by Brexit create the possibility of reformulating American countervailing duty (CVD) law to focus on actual distortions of competition.  CVD assessments calibrated precisely to the amount of the foreign government’s distortionary subsidy, applied first in the context of US-UK trade, could serve as a model for the more general reform of American (and UK) CVD law.  This in turn might serve as a template for broader CVD reform through bilateral or plurilateral deals – and perhaps eventually a global deal under the auspices of the World Trade Organization.  Think big.
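
To make “calibrated precisely” concrete, here is a stylized arithmetic sketch; the figures are hypothetical and are not drawn from the Boeing-Bombardier dispute. Suppose a government grant of $S = \$200$ million supports the production of $Q = 100$ exported aircraft, each sold at a price of $P = \$20$ million. A distortion-based CVD would offset exactly the per-unit subsidy, and no more:

$$
s = \frac{S}{Q} = \frac{\$200\text{m}}{100} = \$2\text{m per aircraft},
\qquad
d = \frac{s}{P} = \frac{\$2\text{m}}{\$20\text{m}} = 10\%.
$$

An ad valorem duty of 10% would neutralise the distortion precisely; any duty above that level stops offsetting the subsidy and starts penalising the genuinely competitive portion of the exporter’s price – exactly the flaw identified above in Boeing’s case.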


Government subsidies that selectively favor a particular firm or firms may substantially distort competition within an industry, thereby skewing trading terms, reducing efficiency, and harming consumer welfare.  To its credit, the European Union (EU) seeks to stamp out distortive state aid, as explained by the EU’s administrative and law enforcement arm, the European Commission (EC):

A company which receives government support gains an advantage over its competitors. Therefore, the Treaty [governing the EU] generally prohibits State aid unless it is justified by reasons of general economic development.  To ensure that this prohibition is respected and exemptions are applied equally across the European Union, the European Commission is in charge of ensuring that State aid complies with EU rules. . . .

State aid is defined as an advantage in any form whatsoever conferred on a selective basis to undertakings [businesses] by national public authorities.  Therefore, subsidies granted to individuals or general measures open to all enterprises are not covered by this prohibition and do not constitute State aid (examples include general taxation measures or employment legislation).

A nation’s tax preferences that selectively advantage a specific firm or firms may constitute a form of state aid, and in recent years the EC has challenged various member states’ corporate tax rules that allegedly have such a preferential effect.   Particular attention has focused on an August 2016 EC finding that Apple, Inc. owed roughly $14.5 billion in back taxes to Ireland, due to an Irish tax ruling that granted the company a preferential corporate tax rate in violation of EC state aid principles.

This EC finding, which is opposed by the Irish and U.S. Governments and has been appealed to the European courts, is the subject of an April 27 Heritage Foundation “Backgrounder” essay, co-authored by Heritage Senior Fellow David Burton and me.  In our essay, we point out that, whatever the legal merits of this particular holding, the EC’s recent “crusade” against low corporate taxes achieved through various national preferences raises the broader issue of “tax competition” among jurisdictions, which may beneficially constrain the size of government.  Our article’s findings and policy recommendations are as follows:

High taxes, especially high marginal income tax rates, have an adverse impact on economic growth, and tax competition among governments imposes a limit on how high governments can raise tax rates and burden the private sector.  Efforts to suppress tax competition or to harmonize taxes are generally an effort to create a “tax cartel” among likeminded governments to keep taxes high. The European Union’s Apple ruling, similar to other recent EU investigations of tax reductions, may have the effect of discouraging beneficial tax competition among European nations.  The United States should reject calls by the Organisation for Economic Co-operation and Development and other multinational bodies to promote “tax harmonization,” which tends to promote overly high tax burdens that discourage economic growth.   The United States also should lead by example, reducing its economically harmful tax burdens and encouraging other countries to do likewise. 

Since Brussels ordered Ireland to recover €13 billion from Apple, much ink has been spilled on the European Commission’s (EC) alleged misuse of power and breach of the “rule of law.” In the Irish Times, Professor Liza Lovdahl-Gormsen wrote that the EC has been “bending” competition law to pursue a corporate taxation agenda in disguise. Former European Commissioner Neelie Kroes went so far as to suggest that the EC was attempting to rewrite international tax rules.

Conspiracy stories sell well, all the more so when the EC administration is on display. Yet, the claim that the Apple case is not a genuine competition case is a trick often used to deride enforcement — one that papers over an old lesson of mainstream economics: that monopolists are particularly good at “acquiring” public interest legislation. Nobel Prize winner George Stigler once wrote that “the most obvious contribution that a group may seek of the government is a direct subsidy of money.”  

While this basic economic teaching is not the narrative behind the EC decision against Ireland, there are clear signs that Apple is a textbook monopolist, and that rent-seeking theory could thus assist the EC in the forthcoming appeal. Let us look closer. Year after year, Apple sits atop the rankings as the most successful company of the 21st century. It has been the world’s largest company by market capitalization for some time. It is also the most profitable company in the history of the modern economy. Its flagship product, the iPhone, is the most expensive mass-market smartphone ever sold. On each device, Apple earns a 69% gross margin. Last year, industry analysts were taken aback when Apple outsold Samsung.

Granted, high prices and large profits do not a monopolist make. So let us consider other metrics: among tech’s frightful five, Apple is the slacker when it comes to investing in innovation. It spent about 3.5% of its revenue on research and development in 2016. By way of comparison, Alphabet (Google) spent 16%, Microsoft spent 14%, and Facebook spent a whopping 27%. Apple did not even feature in the EU ranking of the top 50 most R&D-intensive companies, trailing behind a host of less-glitzy manufacturers of telecoms infrastructure equipment like Nokia and Ericsson, and even “mundane” suppliers of cars, chemicals, and agricultural products. At such low levels of R&D investment, it is even questionable whether Apple can be called a “high tech” company (the minimum R&D intensity to be part of that league is 5-7.5% of revenue).

Apple also ranks in financial analysts’ recommendations as the world-champion payer of dividends and purchaser of its own shares. Instead of retaining earnings to devote to internal R&D projects as a patient capitalist would, Apple returns comparatively more profits to shareholders than any of its peers. It also sits atop a mountain of unproductive capital.

Beyond the financial numbers, Apple’s body language also betrays signs of monopoly power. In his best seller “Zero to One,” Peter Thiel writes that “monopolists lie to protect themselves.” Apple is a grandmaster at this game. In a bid to reduce the prices it pays for certain inputs, Apple has routinely claimed to be an antitrust victim in proceedings in the US, the EU, and Asia, accusing upstream component suppliers and innovators such as Qualcomm and Nokia, as well as rivals such as Samsung, of unlawful monopolization. To assist it, Apple enlisted the help of a former European Commission official who spent over ten years spearheading the EU’s assaults on Intel, Microsoft, Google, and other high-tech firms. To the trained observer, this should come as no surprise. For monopolists, the ends justify the means – including efforts to instrumentalise the regulatory process.

With such facts in mind, it is now much less obvious that the EC’s Apple tax case is not plain-vanilla competition policy, and much clearer that Apple behaved as a textbook rent-seeking monopolist when it secured €13 billion in tax advantages from the Irish Government.

That monopolists expend vast resources in rent-seeking, unproductive activities aimed at capturing rents from governments is a fundamental teaching of modern economic theory. Like resources devoted to theft, corruption, or bribery, those expenditures – and the resources governments invest to counter rent-seeking strategies – are pure waste; they generate no socially valuable production. The EC would be well advised to keep this narrative in mind when defending its case against allegations of unlawful tax harmonization before the EU courts. As I often tell my students: forget the legalese; go for the big picture.

Terry Calvani is a former FTC Commissioner and Member of the Governing Board of the Competition Authority of Ireland. He is currently Of Counsel at Freshfields Bruckhaus Deringer. Angela Diveley is an Associate at Freshfields Bruckhaus Deringer.

We welcome Commissioner Wright’s contribution in making the important point that the Commission’s unfair methods of competition (UMC) jurisdiction under Section 5 of the FTCA should be subject to limiting principles.  We make two observations about the policy statement and a more general observation about the FTC in light of its upcoming 100th anniversary.  The first is that injury to competition has long played a role in the debate concerning the appropriate scope of Section 5.  The second is that it is not yet clear what role efficiencies should play in a Section 5 claim.  Finally, we observe that Section 5 is one of a number of aspects of the FTC’s enforcement mandate that is ripe for reconsideration as we approach the centennial anniversary of both the statute and the agency.

Injury to Competition

It is now uncontroversial that the sine qua non of a violation of the antitrust laws is injury to competition.  Yet, the Commission has been struggling with what this assertion means for decades.  In its 1984 General Motors Corp. decision, the Commission declined to adopt the “spirit theory” and find a Section 5 violation where Complaint Counsel did not claim competition was harmed.  The case was brought under Section 2(d) of the Robinson-Patman Act, which prohibits the discriminatory payment of advertising allowances in connection with the resale of goods.  GM was accused of making advertising payments to GMC dealers that leased and rented cars they bought from GM while declining to make such payments to other leasing and rental companies.  The Robinson-Patman Act claim failed because the conduct at issue involved the leasing of cars rather than the resale, a necessary element of the claim.  Complaint Counsel proffered that the Commission should find a Section 5 violation because, although the conduct did not violate the letter of the Robinson-Patman Act, it violated the spirit of the Act.  The Commission in General Motors stated that it would “decline to apply [Section 5] in cases . . . where there has been no demonstration of an anticompetitive impact.”

Commissioner Wright’s proposal finds the General Motors decision too restrictive.  Similar to the lease/rental conduct described above, an invitation to collude falls short of a requisite element—an agreement—of a Section 1 claim.  However, many, including Commissioner Wright, would agree that failed invitations to collude should fall squarely within the boundaries of Section 5, even though they do not actually produce anticompetitive effects.  The Commission’s invitation-to-collude cases, as well as Commissioner Wright’s policy statement, thus add to General Motors the ability to establish a Section 5 violation where the effect of the conduct is to “create[] a substantial risk of competitive harm.”  We do not disagree, but observe that this “gap filling” role is likely quite small: because the Department of Justice prosecutes most such cases as wire or mail fraud, the universe of cases not involving those media, and thus otherwise unenforced, is very limited.

Efficiencies

In an attempt to create more certainty for the business community, Commissioner Wright’s policy statement precludes the application of Section 5 where a respondent can proffer any efficiencies.  Commissioner Ohlhausen, on the other hand, has indicated her support for a “disproportionate harm test,” which would allow a Section 5 claim to proceed in the face of efficiencies when the harm substantially outweighs any procompetitive benefits.  Commissioner Wright’s test, while providing certainty to the business community, risks torpedoing claims where substantial competitive harm is present.  Commissioner Ohlhausen’s test would allow for such claims, but risks uncertainty in determining what exactly constitutes disproportionate harm.

Commissioner Wright has explained that the Commission has a poor track record of balancing pro- and anticompetitive effects in a way that provides guidance to the business community.  Moreover, he points out, the limited application of Section 5 does not deprive the FTC of its ability to challenge conduct under the traditional antitrust laws.  He therefore has set forth a clear limitation on the applicability of Section 5, one that he believes will allow the FTC to use the provision in the way that best enhances consumer welfare.

Commissioner Ohlhausen’s addition of the disproportionality test is somewhat more expansive in application than Commissioner Wright’s test.  She explains it would avoid the challenges associated with the precise balancing of pro- and anticompetitive effects.  She also states that the disproportionality test is consistent with Commission advocacy and Professor Hovenkamp’s preferred definition of exclusion in the context of Section 2.

Both of these positions have their merits, and we believe they have established the boundaries for the continuing discussion of the appropriate application of Section 5 in its “gap filling” role.

Conclusion

As we approach the FTC’s 100th anniversary, it is important to look at the boundaries of the appropriate utilization of Section 5 in the antitrust context.  Commissioner Wright’s proposed Section 5 policy statement is a timely contribution to the debate.

In light of the milestone anniversary, it is appropriate also to think about the procedural aspects of the FTC’s enforcement mandate.  There has been substantial criticism of the European Commission for its role as judge, jury, and prosecutor; this criticism also applies to the FTC’s Part 3 proceedings, under which the Commission both initiates cases and then acts as the ultimate fact finder.  That said, Part 3 has procedural protections that the EC lacks, such as impartial administrative law judges.  Nevertheless, we believe it important at this juncture to rethink whether the adjudicative process at the Commission represents best practice.