
The terms of the United Kingdom’s (UK) exit from the European Union (EU) – “Brexit” – are of great significance not just to UK and EU citizens, but for those in the United States and around the world who value economic liberty (see my Heritage Foundation memorandum giving the reasons why, here).

If Brexit is to promote economic freedom and enhanced economic welfare, Brexit negotiations between the UK and the EU must not limit the ability of the United Kingdom to pursue (1) efficiency-enhancing regulatory reform and (2) trade liberalizing agreements with non-EU nations.  These points are expounded upon in a recent economic study (The Brexit Inflection Point) by the non-profit UK think tank the Legatum Institute, which has produced an impressive body of research on the benefits of Brexit, if implemented in a procompetitive, economically desirable fashion.  (As a matter of full disclosure, I am a member of Legatum’s “Special Trade Commission,” which “seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  Members of the Special Trade Commission are unpaid – they serve on a voluntary pro bono basis.)

Unfortunately, however, leading UK press commentators have urged the UK Government to accede to a full harmonization of UK domestic regulations and trade policy with the EU.  Such a deal would be disastrous.  It would prevent the UK from entering into mutually beneficial trade liberalization pacts with other nations or groups of nations (e.g., with the U.S. and with the members of the Trans-Pacific Partnership (TPP) trade agreement), because such arrangements by necessity would lead to a divergence with EU trade strictures.  It would also preclude the UK from unilaterally reducing harmful regulatory burdens that are a byproduct of economically inefficient and excessive EU rules.  In short, it would be antithetical to economic freedom and economic welfare.

Notably, in a November 30 article (Six Impossible Notions About “Global Britain”), a well-known business journalist, Martin Wolf of the Financial Times, sharply criticized The Brexit Inflection Point’s recommendation that the UK should pursue trade and regulatory policies that would diverge from EU standards.  Wolf characterized as an “impossible thing” Legatum’s point that the UK should not “allow itself to be bound by the EU’s negotiating mandate,” adding: “We all now know this is infeasible.  The EU holds the cards and it knows it holds the cards. The Legatum authors still do not.”

Shanker Singham, Director of Economic Policy and Prosperity Studies at Legatum, brilliantly responded to Wolf’s critique in a December 4 article (published online by CAPX) entitled A Narrow-Minded Brexit Is Doomed to Fail.  Singham’s trenchant analysis merits being set forth in its entirety (by permission of the author):

“Last week, the Financial Times’s chief economics commentator, Martin Wolf, dedicated his column to criticising The Brexit Inflection Point, a report for the Legatum Institute in which Victoria Hewson, Radomir Tylecote and I discuss what would constitute a good end state for the UK as it seeks to exercise an independent trade and regulatory policy post Brexit, and how we get from here to there.

We write these reports to advance ideas that we think will help policymakers as they tackle the single biggest challenge this country has faced since the Second World War. We believe in a market place of ideas, and we welcome challenge. . . .

[W]e are thankful that Martin Wolf, an eminent economist, has chosen to engage with the substance of our arguments. However, his article misunderstands the nature of modern international trade negotiations, as well as the reality of the European Union’s regulatory system – and so his claim that, like the White Queen, we “believe in impossible things” simply doesn’t stack up.

Mr Wolf claims there are six impossible things that we argue. We will address his rebuttals in turn.

But first, in discussions about the UK’s trade policy, it is important to bear in mind that the British government is currently discussing the manner in which it will retake its independent WTO membership. This includes agricultural import quotas, and its WTO rectification processes with other WTO members.

If other countries believe that the UK will adopt the position of maintaining regulatory alignment with the EU, as advocated by Mr Wolf and others, the UK’s negotiating strategy would be substantially weaker. It would quite wrongly suggest that the UK will be unable to lower trade barriers and offer the kind of liberalisation that our trading partners seek and that would work best for the UK economy. This could negatively impact both the UK and the EU’s ongoing discussions in the WTO.

Has the EU’s trading system constrained growth in the world?

The first impossible thing Mr Wolf claims we argue is that the EU system of protectionism and harmonised regulation has constrained economic growth for Britain and the world. He is right to point out that the volume of world trade has increased, and the UK has, of course, experienced GDP growth while a member of the EU.

However, as our report points out, the EU’s prescriptive approach to regulation, especially in the recent past (for example, its approach on data protection, audio-visual regulation, the restrictive application of the precautionary principle, REACH chemicals regulation, and financial services regulations to name just a few) has led to an increase in anti-competitive regulation and market distortions that are wealth destructive.

As the OECD notes in various reports on regulatory reform, regulation can act as a behind-the-border barrier to trade and impede market openness for trade and investment. Inefficient regulation imposes unnecessary burdens on firms, increases barriers to entry, dampens competition and incentives for innovation, and ultimately hurts productivity. The General Data Protection Regulation (GDPR) is an example of regulation that is disproportionate to its objectives; it is highly prescriptive and imposes substantial compliance costs on businesses that want to use data to innovate.

Rapid growth during the post-war period was in part due to the progressive elimination of border trade barriers. But, in terms of wealth creation, we are no longer growing at that rate. Since before the financial crisis, measures of actual wealth creation (not GDP, which includes consumer and government spending) such as industrial output have stalled, and the number of behind-the-border regulatory barriers has been increasing.

The global trading system is in difficulty. The lack of a global trade round since the Uruguay Round, and the lack of serious services liberalisation either in the built-in agenda of the WTO or sectorally following on from the 1997 Basic Telecoms Agreement and its Reference Paper on Competition Safeguards, have led to an increase in behind-the-border barriers and anti-competitive distortions and regulation all over the world. This stasis in international trade negotiations is an important contributory factor in what many economists have described as a “new normal” of limited growth, and a global decline in innovation.

Meanwhile the EU has sought to force its regulatory system on the rest of the world (the GDPR is an example of this). If it succeeds, the result would be the kind of wealth destruction that pushes more people into poverty. It is against this backdrop that the UK is negotiating with both the EU and the rest of the world.

The question is whether an independent UK, the world’s sixth biggest economy and second biggest exporter of services, is able to contribute to improving the dynamics of the global economic architecture, which means further trade liberalisation. The EU is protectionist against outside countries, which is antithetical to the overall objectives of the WTO. This is true in agriculture and beyond. For example, the EU imposes tariffs on cars at four times the rate applied by the US, while another large auto manufacturing country, Japan, has unilaterally removed its auto tariffs.

In addition, the EU27 represents a declining share of UK exports, which is rather counter-intuitive for a Customs Union and single market. In 1999, the EU represented 55 per cent of UK exports, and by 2016, this was 43 per cent. That said, the EU will remain an important, albeit declining, market for the UK, which is why we advocate a comprehensive free trade agreement with it.

Can the UK secure meaningful regulatory recognition from the EU without being identical to it?

Second, Mr Wolf suggests that regulatory recognition between the UK and EU is possible only if there is harmonisation or identical regulation between the UK and EU.

This is at odds with WTO practice, stretching back to its rules on domestic laws and regulation as encapsulated in Article III of the GATT and Article VI of the GATS, and as expressed in the Technical Barriers to Trade (TBT) and Sanitary and Phytosanitary (SPS) agreements.

This is the critical issue. The direction of travel of international trade thinking is towards countries recognising each other’s regulatory systems if they achieve the same ultimate goal of regulation, even if the underlying regulation differs, and to regulate in ways that are least distortive to international trade and competition. There will be areas where this level of recognition will not be possible, in which case UK exports into the EU will of course have to satisfy the standards of the EU. But even here we can mitigate the trade costs to some extent by Mutual Recognition Agreements on conformity assessment and market surveillance.

Had the US taken the view that it would not receive regulatory recognition unless the two regulatory systems were identical, the recent agreement on prudential measures in insurance and reinsurance services between the EU and US would not exist. In fact, this point highlights the crucial issue which the UK must successfully negotiate, and one in which its interests are aligned with those of other countries and with the direction of travel of the WTO itself. The TBT and SPS agreements broadly provide that mutual recognition should not be denied where regulatory goals are aligned but technical regulation differs.

Global trade and regulatory policy increasingly looks for regulation that promotes competition. The EU is on a different track, as the GDPR demonstrates. This is the reason that neither the Canada-EU agreement (CETA) nor the EU offer in the Trade in Services Agreement (TiSA) includes new services. If GDPR were to become the global standard, trade in data would be severely constrained, slowing the development of big data solutions, the fourth industrial revolution, and new services trade generally.

As many firms recognise, this would be extremely damaging to global prosperity. In arguing that regulatory recognition is only available if the UK is fully harmonised with the EU, Mr Wolf may be in harmony with the EU approach to regulation. But that is exactly the approach that is damaging the global trading environment.

Can the UK exercise trade policy leadership?

Third, Mr Wolf suggests that other countries do not, and will not, look to the UK for trade leadership. He cites the US’s withdrawal from the trade negotiating space as an example. But surely the absence of the world’s biggest services exporter means that the world’s second biggest exporter of services will be expected to advocate for its own interests, and argue for greater services liberalisation.

Mr Wolf believes that the UK is a second-rank power in decline. We take a different view of the world’s sixth biggest economy, the financial capital of the world and the second biggest exporter of services. As former New Zealand High Commissioner, Sir Lockwood Smith, has said, the rest of the world does not see the UK as the UK too often seems to see itself.

The global companies that have their headquarters in the UK do not see things the same way as Mr Wolf. In fact, the lack of trade leadership since 1997 means that a country with significant services exports would be expected to show some leadership.

Mr Wolf’s point is that far from seeking to grandiosely lead global trade negotiations, the UK should stick to its current knitting, which consists of its WTO rectification, and includes the negotiation of its agricultural import quotas and production subsidies in agriculture. This is perhaps the most concerning part of his argument. Yes, the UK must rectify its tariff schedules, but for that process to be successful, especially on agricultural import quotas, it must be able to demonstrate to its partners that it will be able to grant further liberalisation in the near future. If it can’t, then its trading partners will have no choice but to demand as much liberalisation as they can secure right now in the rectification process.

This will complicate that process, and cause damage to the UK as it takes up its independent WTO membership. Those WTO partners who see the UK as vulnerable on this point will no doubt see validation in Mr Wolf’s article and assume it means that no real liberalisation will be possible from the UK. The EU should note that complicating this process for the UK will not help the EU in its own WTO processes, where it is vulnerable.

Trade negotiations are dynamic not static and the UK must act quickly

Fourth, Mr Wolf suggests that the UK is not under time pressure to “escape from the EU”.  This statement does not account for how international trade negotiations work in practice. In order for countries to cooperate with the UK on its WTO rectification and its tariff-rate quota (TRQ) negotiations, as well as to negotiate seriously with it, they have to believe that the UK will have control over its tariff schedules and regulatory autonomy from day one of Brexit (even if we may choose not to make changes for an implementation period).

If non-EU countries think that the UK will not be able to exercise its freedom for several years, they will simply demand their pound of flesh in the negotiations now, and get on with the rest of their trade policy agenda. Trade negotiations are not static. The US executive could lose trade-negotiating authority in the summer of next year if the NAFTA renegotiation is not going well. Other countries will seek to accede to the Trans-Pacific Partnership (TPP). China is moving forward with the Regional Comprehensive Economic Partnership (RCEP), which does not meaningfully touch on domestic regulatory barriers. Much as we might criticise Donald Trump, his administration has expressed strong political will for a UK-US agreement, and in that regard has broken with traditional US trade policy thinking. The UK has an opportunity to strike and must take it.

The UK should prevail on the EU to allow customs agencies to be interoperable from day one

Fifth, with respect to the challenges raised on customs agencies working together, our report argued that UK customs and the customs agencies of the EU member states should discuss customs arrangements at a practical and technical level now. What stands in the way of this is the EU’s stubbornness. Customs agencies are in regular contact on a business-as-usual basis, so the inability of UK and member-state customs agencies to talk to each other about the critical issue of new arrangements would seem to border on negligence. Of course, the EU should allow member states to have these critical conversations now.  Given the importance of customs agencies interoperating smoothly from day one, the UK Government must press its case with the European Commission to allow such conversations to start happening as a matter of urgency.

Does the EU hold all the cards?

Sixth, Mr Wolf argues that the EU holds all the cards and knows it holds all the cards, and therefore disagrees with our claim that the UK should “not allow itself to be bound by the EU’s negotiating mandate”. As with his other claims, Mr Wolf finds himself agreeing with the EU’s negotiators. But that does not make him right.

While absence of a trade deal will of course damage UK industries, the cost to EU industries is also very significant. Beef and dairy in Ireland, cars and dairy in Bavaria, cars in Catalonia, textiles and dairy in Northern Italy – all over Europe (and in politically sensitive areas), industries stand to lose billions of euros and thousands of jobs. This is without considering the impact of no financial services deal, which would increase the cost of capital in the EU, aborting corporate transactions and raising the cost of the supply chain. The EU has chosen a mandate that risks neither party getting what it wants.

The notion that the EU is a masterful negotiator, while the UK’s negotiators are hopeless, is not the global view of the EU and the UK. Far from it. The EU in international trade negotiations has a reputation for being slow moving, lacking in creative vision, and unable to conclude agreements. Indeed, others have generally gone to the UK when they have been met with intransigence in Brussels.

What do we do now?

Mr Wolf’s argument amounts to a claim that the UK is not capable of the kind of further and deeper liberalisation that its economy would suggest is both possible and highly desirable both for the UK and the rest of the world. According to Mr Wolf, the UK can only consign itself to a highly aligned regulatory orbit around the EU, unable to realise any other agreements, and unable to influence the regulatory system around which it revolves, even as that system becomes ever more prescriptive and anti-competitive. Such a position is at odds with the facts and would guarantee a poor result for the UK and also cause opportunities to be lost for the rest of the world.

In all of our [Legatum Brexit-related] papers, we have started from the assumption that the British people have voted to leave the EU, and the government is implementing that outcome. We have then sought to produce policy recommendations based on what would constitute a good outcome as a result of that decision. This can be achieved only if we maximise the opportunities and minimise the disruptions.

We all recognise that the UK has embarked on a very difficult process. But there is a difference between difficult and impossible. There is also a difference between tasks that must be done and take time, and genuine negotiation points. We welcome the debate that comes from constructive challenge of our proposals; and we ask in turn that those who criticise us suggest alternative plans that might achieve positive outcomes. We look forward to the opportunity of a broader debate so that collectively the country can find the best path forward.”

 

As the Federal Communications Commission (FCC) prepares to revoke its economically harmful “net neutrality” order and replace it with a free market-oriented “Restoring Internet Freedom Order,” the FCC and the Federal Trade Commission (FTC) commendably have announced a joint policy for cooperation on online consumer protection.  According to a December 11 FTC press release:

The Federal Trade Commission and Federal Communications Commission (FCC) announced their intent to enter into a Memorandum of Understanding (MOU) under which the two agencies would coordinate online consumer protection efforts following the adoption of the Restoring Internet Freedom Order.

“The Memorandum of Understanding will be a critical benefit for online consumers because it outlines the robust process by which the FCC and FTC will safeguard the public interest,” said FCC Chairman Ajit Pai. “Instead of saddling the Internet with heavy-handed regulations, we will work together to take targeted action against bad actors. This approach protected a free and open Internet for many years prior to the FCC’s 2015 Title II Order and it will once again following the adoption of the Restoring Internet Freedom Order.”

“The FTC is committed to ensuring that Internet service providers live up to the promises they make to consumers,” said Acting FTC Chairman Maureen K. Ohlhausen. “The MOU we are developing with the FCC, in addition to the decades of FTC law enforcement experience in this area, will help us carry out this important work.”

The draft MOU, which is being released today, outlines a number of ways in which the FCC and FTC will work together to protect consumers, including:

The FCC will review informal complaints concerning the compliance of Internet service providers (ISPs) with the disclosure obligations set forth in the new transparency rule. Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization, and congestion management. Should an ISP fail to make the required disclosures—either in whole or in part—the FCC will take enforcement action.

The FTC will investigate and take enforcement action as appropriate against ISPs concerning the accuracy of those disclosures, as well as other deceptive or unfair acts or practices involving their broadband services.

The FCC and the FTC will broadly share legal and technical expertise, including the secure sharing of informal complaints regarding the subject matter of the Restoring Internet Freedom Order. The two agencies also will collaborate on consumer and industry outreach and education.

The FCC’s proposed Restoring Internet Freedom Order, which the agency is expected to vote on at its December 14 meeting, would reverse a 2015 agency decision to reclassify broadband Internet access service as a Title II common carrier service. This previous decision stripped the FTC of its authority to protect consumers and promote competition with respect to Internet service providers because the FTC does not have jurisdiction over common carrier activities.

The FCC’s Restoring Internet Freedom Order would return jurisdiction to the FTC to police the conduct of ISPs, including with respect to their privacy practices. Once adopted, the order will also require broadband Internet access service providers to disclose their network management practices, performance, and commercial terms of service. As the nation’s top consumer protection agency, the FTC will be responsible for holding these providers to the promises they make to consumers.

Particularly noteworthy is the suggestion that the FCC and FTC will work to curb regulatory duplication and competitive empire building – a boon to Internet-related businesses that would be harmed by regulatory excess and uncertainty.  Stay tuned for future developments.

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from HULU, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the  midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.
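The claim that antitrust’s error-cost advantage swamps direct regulation’s decision-cost advantage can be made concrete with a stylized calculation. The sketch below is purely hypothetical: the function and every number in it are illustrative assumptions of mine, not figures from the book or the FCC record, chosen only to show how a modestly higher per-case decision cost can be dwarfed by a much lower error rate:

```python
# A stylized, purely hypothetical comparison of two legal regimes for
# judging non-neutral network management practices. All figures are
# illustrative assumptions, not estimates drawn from the text.

def total_regime_cost(cost_per_case, num_cases, num_errors, harm_per_error):
    """Total social cost of a regime: the cost of deciding every case
    plus the harm from the cases the regime gets wrong."""
    decision_costs = cost_per_case * num_cases
    error_costs = num_errors * harm_per_error
    return decision_costs + error_costs

# Per se rule (direct regulation): cheap to administer, but it condemns
# the many non-neutral practices that actually benefit consumers.
per_se = total_regime_cost(cost_per_case=1, num_cases=1000,
                           num_errors=400, harm_per_error=100)

# Rule of reason (antitrust): costlier case-by-case analysis, but far
# fewer wrongly decided cases.
rule_of_reason = total_regime_cost(cost_per_case=5, num_cases=1000,
                                   num_errors=50, harm_per_error=100)

print(per_se)          # 41000
print(rule_of_reason)  # 10000
```

On these assumed numbers, the rule of reason’s extra decision costs (4,000) are swamped by the per se rule’s extra error costs (35,000), which is the structure of the argument in the passage above.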

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct) are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate‘s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

Unexpectedly, on the day that the draft of the upcoming repeal of the 2015 Open Internet Order was published, a mobile operator in Portugal with about 7.5 million subscribers garnered a lot of attention. Curiously, it’s not because Portugal is a beautiful country (Iker Casillas’ Instagram feed is dope) nor because Portuguese is a beautiful romance language.

Rather it’s because old-fashioned misinformation is being peddled to perpetuate doomsday images that Portuguese ISPs have carved the Internet into pieces — and if the repeal of the 2015 Open Internet Order passes, the same butchery is coming to an AT&T store near you.

Much ado about data

This tempest in a teapot is about mobile data plans, specifically the ability of mobile subscribers to supplement their data plan (typically ranging from 200 MB to 3 GB per month) with additional 10 GB data packages containing specific bundles of apps – messaging apps, social apps, video apps, music apps, and email and cloud apps. Each additional 10 GB data package costs EUR 6.99 per month, and Meo (the mobile operator) also offers its own zero-rated apps. Similar plans have been offered in Portugal since at least 2012.

These data packages are a clear win for mobile subscribers, especially pre-paid subscribers, who tend to be at a lower income level than post-paid subscribers. They allow consumers to customize their plan beyond their mobile broadband subscription, enabling them to consume data in ways that are better attuned to their preferences. Without access to these data packages, consuming an additional 10 GB of data would cost each user an additional EUR 26 per month and require her to enter into a two-year contract.
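To put the price difference in concrete terms, here is a minimal sketch (using only the figures from the plans described above) comparing the per-gigabyte cost of the EUR 6.99 add-on package with the EUR 26 standalone upgrade:

```python
# Per-GB cost of the 10 GB app-bundle add-on vs. buying an additional
# 10 GB of general-purpose data (figures from the plans described above).
BUNDLE_PRICE_EUR = 6.99       # 10 GB app-specific package, per month
STANDALONE_PRICE_EUR = 26.00  # additional 10 GB without the package
GB = 10

bundle_per_gb = BUNDLE_PRICE_EUR / GB          # ~0.70 EUR/GB
standalone_per_gb = STANDALONE_PRICE_EUR / GB  # 2.60 EUR/GB
savings = 1 - bundle_per_gb / standalone_per_gb

print(f"Bundle:     {bundle_per_gb:.2f} EUR/GB")
print(f"Standalone: {standalone_per_gb:.2f} EUR/GB")
print(f"Savings:    {savings:.0%}")
```

On these numbers the bundled data costs roughly a quarter of the standalone price per gigabyte, which is why the packages are attractive to price-sensitive pre-paid subscribers in particular.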

These discounted data packages also facilitate product differentiation among mobile operators that offer a variety of plans. Keeping with the Portugal example, Vodafone Portugal offers 20 GB of additional data for certain apps (Facebook, Instagram, SnapChat, and Skype, among others) with the purchase of a 3 GB mobile data plan. Consumers can pick which operator offers the best plan for them.

In addition, data packages like the ones in question here tend to increase the overall consumption of content, reduce users’ cost of obtaining information, and allow for consumers to experiment with new, less familiar apps. In short, they are overwhelmingly pro-consumer.

Even if Portugal actually didn’t have net neutrality rules, this would be the furthest thing from the apocalypse critics make it out to be.

Net Neutrality in Portugal

But, contrary to activists’ misinformation, Portugal does have net neutrality rules. The EU implemented its net neutrality framework in November 2015 as a regulation, meaning that the regulation became the law of the EU when it was enacted, and national governments, including Portugal, did not need to transpose it into national legislation.

While the regulation was automatically enacted in Portugal, the regulation and the 2016 EC guidelines left the decision of whether to allow sponsored data and zero-rating plans in the hands of national regulators. (The data packages at issue here would likely qualify as zero-rating plans under the Regulation because they give users a lot of data for a low price.) While Portugal is still formulating the standard it will use to evaluate sponsored data and zero rating under the EU’s framework, there is little reason to think that this common practice would be disallowed in Portugal.

In fact, despite its strong net neutrality regulation, the EU appears to be softening its stance toward zero rating. This was evident in a recent study by the European Commission’s competition authority (DG Comp) concluding that there is little reason to believe that such data practices raise competitive concerns.

The activists’ willful misunderstanding of clearly pro-consumer data plans and their purposeful mischaracterization of Portugal as not having net neutrality rules are inflammatory and deceitful. Even more puzzling (though great for consumers) is that nothing in the 2015 Open Internet Order would prevent these types of data packages from being offered in the US, so long as ISPs are transparent with consumers.

On November 27, the U.S. Supreme Court will turn once again to patent law, hearing cases addressing the constitutionality of Patent Trial and Appeal Board (PTAB) “inter partes” review (Oil States Energy Services v. Greene’s Energy Group), and whether PTAB must issue a final written decision as to every claim challenged by the petitioner in an inter partes review (SAS Institute v. Matal).

As the Justices peruse the bench memos and amicus curiae briefs concerning these cases, their minds will, of course, be focused on legal questions of statutory and constitutional interpretation.  Lurking in the background of these and other patent cases, however, is an overarching economic policy issue – have recent statutory changes and case law interpretations weakened U.S. patent protection in a manner that seriously threatens future American economic growth and innovation?  In a recent Heritage Foundation Legal Memorandum, I responded in the affirmative to this question, and argued that significant statutory reforms are needed to restore the American patent system to a position of global leadership that is key to U.S. economic prosperity.  (Among other things, I noted severe constitutional problems raised by PTAB’s actions, and urged that Congress consider passing legislation to reform PTAB, if the Supreme Court upholds the constitutionality of inter partes review.)

A timely opinion article published yesterday in the Wall Street Journal emphasizes that the decline in American patent protection also has profound negative consequences for American international economic competitiveness.  Journalist David Kline, author of the commentary (“Fear American Complacency, Not China”), succinctly contrasts unfortunate U.S. patent policy developments with the recent strengthening of the Chinese patent system (a matter of high priority to the Chinese Government):

China’s entrepreneurs have been fueled by reforms in recent years that strengthened intellectual property rights—ironic for a country long accused of stealing trade secrets and ignoring IP protections. Today Chinese companies are filing for more patents than American ones. The patent application and examination process has been streamlined, and China has established specialized intellectual property courts and tribunals to adjudicate lawsuits and issue injunctions against infringers. “IP infringers will pay a heavy price,” President Xi Jinping warned this summer. . . .

In the U.S., by contrast, a series of legislative actions and Supreme Court rulings have weakened patent rights, especially for startups. A new way of challenging patents called “inter partes review” results in at least one patent claim being thrown out in roughly 80% of cases, according to an analysis by Adam Mossoff, a law professor at George Mason University. Unsurprisingly, many of these cases were brought by defendants facing patent infringement lawsuits in federal court.

This does not bode well for America’s global competitiveness. The U.S. used to rank first among nations in the strength of its intellectual property rights. But the 2017 edition of the Global IP Index places the U.S. 10th—tied with Hungary.

The Supreme Court may not be able to take judicial notice of this policy reality (although strong purely legal arguments would support a holding that PTAB inter partes review is unconstitutional), but Congress certainly can take legislative notice of it.  Let us hope that Congress acts decisively to strengthen the American patent system – in the interests of a strong, innovative, and internationally competitive American economy.

The latest rankings of trade freedom around the world will be set forth and assessed in the 24th annual edition of the Heritage Foundation’s Index of Economic Freedom (Index), which will be published in January 2018.  Today Heritage published a sneak preview of the 2018 Index’s analysis of freedom to trade, which merits public attention.  First, though, a bit of background on the Index’s philosophy and methodology is appropriate.

The nature and measurement of economic freedom are explained in the 2017 Index:

Economic freedom is the fundamental right of every human to control his or her own labor and property. In an economically free society, individuals are free to work, produce, consume, and invest in any way they please. In economically free societies, governments allow labor, capital, and goods to move freely, and refrain from coercion or constraint of liberty beyond the extent necessary to protect and maintain liberty itself. . . .  

[The Freedom Index] measure[s] economic freedom based on 12 quantitative and qualitative factors, grouped into four broad categories, or pillars, of economic freedom:

  1. Rule of Law (property rights, government integrity, judicial effectiveness)
  2. Government Size (government spending, tax burden, fiscal health)
  3. Regulatory Efficiency (business freedom, labor freedom, monetary freedom)
  4. Open Markets (trade freedom, investment freedom, financial freedom)

Each of the twelve economic freedoms within these categories is graded on a scale of 0 to 100. A country’s overall score is derived by averaging these twelve economic freedoms, with equal weight being given to each. More information on the grading and methodology can be found in the appendix.
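The equal-weighted averaging just quoted is simple to make precise. A minimal sketch follows, using the twelve factor names from the four pillars above; the numeric scores are hypothetical placeholders, not actual Index data:

```python
# Overall Index score = unweighted mean of twelve 0-100 freedom scores.
# Factor names follow the four pillars quoted above; the scores here
# are hypothetical placeholders for illustration only.
scores = {
    "property_rights": 80, "government_integrity": 75, "judicial_effectiveness": 70,
    "government_spending": 60, "tax_burden": 65, "fiscal_health": 55,
    "business_freedom": 85, "labor_freedom": 72, "monetary_freedom": 78,
    "trade_freedom": 86, "investment_freedom": 80, "financial_freedom": 70,
}

# Sanity-check the grading scale before averaging.
assert len(scores) == 12 and all(0 <= s <= 100 for s in scores.values())

overall = sum(scores.values()) / len(scores)  # equal weight for each factor
print(f"Overall score: {overall:.1f}")
```

Because each factor carries equal weight, a one-point change in any single factor moves the overall score by one twelfth of a point.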

As was the case in previous versions, the 2018 Index explores various aspects of economic freedom in several essays that accompany its rankings.  In particular, with respect to international trade, the 2018 Index demonstrates that citizens of countries that embrace free trade are better off than those in countries that do not.  The data show a strong correlation between trade freedom and a variety of positive indicators, including economic prosperity, unpolluted environments, food security, gross national income per capita, and the absence of politically motivated violence or unrest.  Reducing trade barriers remains a proven recipe for prosperity that a majority of Americans support.

The 2018 Index’s three key trade-related takeaways are:

  1. A comparison of economic performance and trade scores in the 2018 Index shows how trade freedom increases prosperity and overall well-being.
  2. Countries with the most trade freedom have much higher per capita incomes, greater food security, cleaner environments, and less politically motivated violence.
  3. Free trade policies also encourage freedom in general. Most Americans support free trade, and believe its benefits outweigh any disadvantages.

Follow this space for further updates on the 2018 Index.

My new book, How to Regulate: A Guide for Policymakers, is now available on Amazon.  Inform Santa!

The book, published by Cambridge University Press, attempts to fill what I think is a huge hole in legal education:  It focuses on the substance of regulation and sets forth principles for designing regulatory approaches that will maximize social welfare.

Lawyers and law professors obsess over process.  (If you doubt that, sit in on a law school faculty meeting sometime!) That obsession may be appropriate; process often determines substance.  Rarely, though, do lawyers receive training in how to design the substance of a rule or standard to address some welfare-reducing defect in private ordering.  That’s a shame, because lawyers frequently take the lead in crafting regulatory approaches.  They need to understand (1) why the unfortunate situation is occurring, (2) what options are available for addressing it, and (3) what are the downsides to each of the options.

Economists, of course, study those things.  But economists have their own blind spots.  Being unfamiliar with legal and regulatory processes, they often fail to comprehend how (1) government officials’ informational constraints and (2) special interests’ tendency to manipulate government power for private ends can impair a regulatory approach’s success.  (Economists affiliated with the Austrian and Public Choice schools are more attuned to those matters, but their insights are often ignored by the economists advising on regulatory approaches — see, e.g., the fine work of the Affordable Care Act architects.)

Enter How to Regulate.  The book endeavors to provide economic training to the lawyers writing rules and a sense of the “limits of law” to the economists advising them.

The book begins by setting forth an overarching goal for regulation (minimize the sum of error and decision costs) and a general plan for achieving that goal (think like a physician–identify the adverse symptom, diagnose the disease, consider the range of available remedies, and assess the side effects of each).  It then marches through six major bases for regulating: externalities, public goods, market power, information asymmetry, agency costs, and the cognitive and volitional quirks observed by behavioral economists.  For each of those bases for regulation, the book considers the symptoms that might justify a regulatory approach, the disease causing those symptoms (i.e., the underlying economics), the range of available remedies (the policy tools available), and the side effects of each (e.g., public choice concerns, mistakes from knowledge limitations).
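The overarching criterion described above, minimizing the sum of error and decision costs, can be illustrated with a toy comparison of candidate regulatory approaches. All cost figures below are hypothetical and purely for illustration, not drawn from the book:

```python
# Toy illustration of the error-cost framework: choose the regulatory
# approach with the lowest sum of expected error costs (wrongly condemned
# good conduct plus wrongly permitted bad conduct) and decision costs
# (administration, compliance, litigation). All figures are hypothetical.
options = {
    "per se rule":            {"error": 40, "decision": 10},
    "rule of reason":         {"error": 15, "decision": 35},
    "safe harbor + standard": {"error": 20, "decision": 20},
}

def total_cost(costs):
    """Sum of expected error and decision costs for one approach."""
    return costs["error"] + costs["decision"]

best = min(options, key=lambda name: total_cost(options[name]))
for name, costs in options.items():
    print(f"{name}: total = {total_cost(costs)}")
print(f"Preferred approach (lowest total cost): {best}")
```

The point of the exercise is the trade-off: a rigid rule is cheap to apply but errs often, a flexible standard errs less but costs more to administer, and the preferred approach is whichever minimizes the combined total.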

I have been teaching How to Regulate this semester, and it’s been a blast.  Unfortunately, all of my students are in their last year of law school.  The book would be most meaningful, I think, to an upcoming second-year student.  It really lays out the basis for a number of areas of law beyond the common law:  environmental law, antitrust, corporate law, securities regulation, food labeling laws, consumer protection statutes, etc.

I was heartened to receive endorsements from a couple of very fine thinkers on regulation, both of whom have headed the Office of Information and Regulatory Affairs (the White House’s chief regulatory review body).  They also happen to occupy different spots on the ideological spectrum.

Judge Douglas Ginsburg of the D.C. Circuit wrote that the book “will be valuable for all policy wonks, not just policymakers.  It provides an organized and rigorous framework for analyzing whether and how inevitably imperfect regulation is likely to improve upon inevitably imperfect market outcomes.”

Harvard Law School’s Cass Sunstein wrote:  “This may well be the best guide, ever, to the regulatory state.  It’s brilliant, sharp, witty, and even-handed — and it’s so full of insights that it counts as a major contribution to both theory and practice.  Indispensable reading for policymakers all over the world, and also for teachers, students, and all those interested in what the shouting is really about.”

Bottom line:  There’s something for everybody in this book.  I wrote it because I think the ideas are important and under-studied.  And I really tried to make it as accessible (and occasionally funny!) as possible.

If you’re a professor and would be interested in a review copy for potential use in a class, or if you’re a potential reviewer, shoot me an email and I’ll request a review copy for you.

Canada’s large merchants have called on the government to impose price controls on interchange fees, claiming this would benefit not only merchants but also consumers. But experience elsewhere contradicts this claim.

In a recently released Macdonald Laurier Institute report, Julian Morris, Geoffrey A. Manne, Ian Lee, and Todd J. Zywicki detail how price controls on credit card interchange fees would result in reduced reward earnings and higher annual fees on credit cards, with adverse effects on consumers, many merchants and the economy as a whole.

This study draws on the experience with fee caps imposed in other jurisdictions, highlighting in particular the effects in Australia, where interchange fees were capped in 2003. There, the caps resulted in a significant decrease in the rewards earned per dollar spent and an increase in annual card fees. If similar restrictions were imposed in Canada, resulting in a 40 percent reduction in interchange fees, the authors of the report anticipate that:

  1. On average, each adult Canadian would be worse off to the tune of between $89 and $250 per year due to a loss of rewards and increase in annual card fees:
    1. For an individual or household earning $40,000, the net loss would be $66 to $187; and
    2. for an individual or household earning $90,000, the net loss would be $199 to $562.
  2. Spending at merchants in aggregate would decline by between $1.6 billion and $4.7 billion, resulting in a net loss to merchants of between $1.6 billion and $2.8 billion.
  3. GDP would fall by between 0.12 percent and 0.19 percent per year.
  4. Federal government revenue would fall by between 0.14 percent and 0.40 percent.

Moreover, tighter fee caps would “have a more dramatic negative effect on middle class households and the economy as a whole.”

You can read the full report here.

On November 10, at the University of Southern California Law School, Assistant Attorney General for Antitrust Makan Delrahim delivered an extremely important policy address on the antitrust treatment of standard setting organizations (SSOs).  Delrahim’s remarks outlined a dramatic shift in the Antitrust Division’s approach to controversies concerning the licensing of standard essential patents (SEPs, patents that “read on” SSO technical standards) that are often subject to “fair, reasonable, and non-discriminatory” (FRAND) licensing obligations imposed by SSOs.  In particular, while Delrahim noted the theoretical concerns of possible “holdups” by SEP holders (when SEP holders threaten to delay licensing until their royalty demands are met), he cogently explained why the problem of “holdouts” by implementers of SEP technologies (when implementers threaten to under-invest in the implementation of a standard, or threaten not to take a license at all, until their royalty demands are met) is a far more serious antitrust concern.  More generally, Delrahim stressed the centrality of patents as property rights, and the need for enforcers not to interfere with the legitimate unilateral exploitation of those rights (whether through licensing, refusals to license, or the filing of injunctive actions).  Underlying Delrahim’s commentary is the understanding that innovation is vitally important to the American economy, and the concern that antitrust enforcers’ efforts in recent years have threatened to undermine innovation by inappropriately interfering in free market licensing negotiations between patentees and licensees.

Important “takeaways” from Delrahim’s speech (with key quotations) are set forth below.

  • Thumb on the scale in favor of implementers: “In particular, I worry that we as enforcers have strayed too far in the direction of accommodating the concerns of technology implementers who participate in standard setting bodies, and perhaps risk undermining incentives for IP creators, who are entitled to an appropriate reward for developing break-through technologies.”
  • Striking the right balance through market forces (as opposed to government-issued best practices): “The dueling interests of innovators and implementers always are in tension, and the tension is resolved through the free market, typically in the form of freely negotiated licensing agreements for royalties or reciprocal licenses.”
  • Holdup as theoretical concern with no evidence that it’s a systemic or widespread problem: He praises Professor Carl Shapiro for his theoretical model of holdup, but stresses that “many of the proposed [antitrust] ‘solutions’ to the hold-up problem are often anathema to the policies underlying the intellectual property system envisioned by our forefathers.”
  • Rejects prior position that antitrust is only concerned with the patent-holder side of the holdup equation, stating that he’s more concerned with holdout given the nature of investments: “Too often lost in the debate over the hold-up problem is recognition of a more serious risk:  the hold-out problem. . . . I view the collective hold-out problem as a more serious impediment to innovation.  Here is why: most importantly, the hold-up and hold-out problems are not symmetric.  What do I mean by that?  It is important to recognize that innovators make an investment before they know whether that investment will ever pay off.  If the implementers hold out, the innovator has no recourse, even if the innovation is successful.  In contrast, the implementer has some buffer against the risk of hold-up because at least some of its investments occur after royalty rates for new technology could have been determined.  Because this asymmetry exists, under-investment by the innovator should be of greater concern than under-investment by the implementer.”
  • What’s at stake: “Every incremental shift in bargaining leverage toward implementers of new technologies acting in concert can undermine incentives to innovate.  I therefore view policy proposals with a one-sided focus on the hold-up issue with great skepticism because they can pose a serious threat to the innovative process.”
  • Breach of FRAND as primarily a contract or fraud, not antitrust issue: “There is a growing trend supporting what I would view as a misuse of antitrust or competition law, purportedly motivated by the fear of so-called patent hold-up, to police private commitments that IP holders make in order to be considered for inclusion in a standard.  This trend is troublesome.  If a patent holder violates its commitments to an SSO, the first and best line of defense, I submit, is the SSO itself and its participants. . . . If a patent holder is alleged to have violated a commitment to a standard setting organization, that action may have some impact on competition.  But, I respectfully submit, that does not mean the heavy hand of antitrust necessarily is the appropriate remedy for the would-be licensee—or the enforcement agency.  There are perfectly adequate and more appropriate common law and statutory remedies available to the SSO or its members.”
  • Recommends that unilateral refusals to license should be per se lawful: “The enforcement of valid patent rights should not be a violation of antitrust law.  A patent holder cannot violate the antitrust laws by properly exercising the rights patents confer, such as seeking an injunction or refusing to license such a patent.  Set aside whether taking these actions might violate the common law.  Under the antitrust laws, I humbly submit that a unilateral refusal to license a valid patent should be per se legal.  Indeed, just this Monday, Chief Judge Diane Wood, a former Deputy Assistant Attorney General at the Antitrust Division, stated that “[e]ven monopolists are almost never required to assist their competitors.”
  • Intent to investigate buyers’ cartel behavior in SSOs: “The prospect of hold-out offers implementers a crucial bargaining chip.  Unlike the unilateral hold-up problem, implementers can impose this leverage before they make significant investments in new technology.  . . . The Antitrust Division will carefully scrutinize what appears to be cartel-like anticompetitive behavior among SSO participants, either on the innovator or implementer side.  The old notion that ‘openness’ alone is sufficient to guard against cartel-like behavior in SSOs may be outdated, given the evolution of SSOs beyond strictly objective technical endeavors. . . . I likewise urge SSOs to be proactive in evaluating their own rules, both at the inception of the organization, and routinely thereafter.  In fact, SSOs would be well advised to implement and maintain internal antitrust compliance programs and regularly assess whether their rules, or the application of those rules, are or may become anticompetitive.”
  • Basing royalties on the “smallest salable component” as a requirement by a concerted agreement of implementers is a possible antitrust violation: “If an SSO pegs its definition of “reasonable” royalties to a single Georgia-Pacific factor that heavily favors either implementers or innovators, then the process that led to such a rule deserves close antitrust scrutiny.  While the so-called ‘smallest salable component’ rule may be a useful tool among many in determining patent infringement damages for multi-component products, its use as a requirement by a concerted agreement of implementers as the exclusive determinant of patent royalties may very well warrant antitrust scrutiny.”
  • Right to Injunctive Relief and holdout incentives: “Patents are a form of property, and the right to exclude is one of the most fundamental bargaining rights a property owner possesses.  Rules that deprive a patent holder from exercising this right—whether imposed by an SSO or by a court—undermine the incentive to innovate and worsen the problem of hold-out.  After all, without the threat of an injunction, the implementer can proceed to infringe without a license, knowing that it is on the hook only for reasonable royalties.”
  • Seeking or Enforcing Injunctive Relief Generally a Contract Not Antitrust Issue: “It is just as important to recognize that a violation by a patent holder of an SSO rule that restricts a patent-holder’s right to seek injunctive relief should be appropriately the subject of a contract or fraud action, and rarely if ever should be an antitrust violation.”
  • FRAND is Not a Compulsory Licensing Scheme: “We should not transform commitments to license on FRAND terms into a compulsory licensing scheme.  Indeed, we have had strong policies against compulsory licensing, which effectively devalues intellectual property rights, including in most of our trade agreements, such as the TRIPS agreement of the WTO.  If an SSO requires innovators to submit to such a scheme as a condition for inclusion in a standard, we should view the SSO’s rule and the process leading to it with suspicion, and certainly not condemn the use of such injunctive relief as an antitrust violation where a contract remedy is perfectly adequate.”

I didn’t know Fred as well as most of the others who have provided such fine tributes here.  As they have attested, he was a first-rate scholar, an inspiring teacher, and a devoted friend.  From my own experience with him, I can add that he was deliberate about investing in the next generation of market-oriented scholars.  I’m the beneficiary of that investment.

My first encounter with Fred came in 1994, when I was fresh out of college and working as a research fellow at Washington University’s Center for the Study of American Business.  I was trying to assess the common law’s effectiveness at dealing with the externalities that are now addressed through complex environmental statutes and regulations.  My longtime mentor, P.J. Hill, recommended that I call Fred for help.  Fred was happy to drop what he was doing in order to explain to an ignorant 22-year-old how the common law’s property rights-based doctrines could address a great many environmental problems.

After completing law school and a judicial clerkship, I took a one-year Olin Fellowship at Northwestern, where Fred was teaching.  Once again, he took time to help a newbie formulate ideas for articles and structure arguments.  But for the publications I produced at Northwestern, I probably couldn’t have landed a job teaching law.  And without Fred’s help, those publications wouldn’t have been nearly as strong.

A few years ago, Fred invited me to join as co-author of the fifth edition of his excellent antitrust casebook (co-authored with the magnificent Charlie Goetz).  How excited was I!  My initial excitement was over the opportunity to attach my name to two giants in the field.  What I didn’t realize at the time was how much I would learn from Fred and Charlie, both brilliant thinkers and lucid writers.

Fred and Charlie’s casebook continually emphasizes the decision-theoretic approach to antitrust – i.e., the view that antitrust rules and standards should be crafted so as to minimize the sum of error and decision costs.  As I worked on the casebook, my understanding of that regulatory approach deepened.  My recently published book, How to Regulate, extends the approach outside the antitrust context.

But for the experience working with Fred and Charlie on their casebook, I may never have recognized the broad applicability of the error cost approach to regulation, and I may never have completed How to Regulate.

In real life, people don’t get the sort of experience George Bailey had in It’s a Wonderful Life.  We never learn what people would have been like had we not influenced them.  I know for sure, though, that I would not be where I am today without Fred McChesney’s willingness to help me along the way.  I am most grateful.