Archives For international politics

In a recent article for the San Francisco Daily Journal, I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of Canadian courts' jurisdiction to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The Internet isn’t that exceptional.

Read the whole thing!

I have previously written at this site (see here, here, and here) and elsewhere (see here, here, and here) about the problem of anticompetitive market distortions (ACMDs), government-supported (typically crony capitalist) rules that weaken the competitive process, undermine free trade, slow economic growth, and harm consumers.  On May 17, the Heritage Foundation hosted a presentation by Shanker Singham of the Legatum Institute (a London think tank) and me on recent research and projects aimed at combatting ACMDs.

Singham began his remarks by noting that from the late 1940s to the early 1990s, trade negotiations under the auspices of the General Agreement on Tariffs and Trade (GATT) (succeeded by the World Trade Organization (WTO)) were highly successful in reducing tariffs and certain non-tariff barriers, and in promoting agreements to deal with trade-related aspects of such areas as government procurement, services, investment, and intellectual property, among others.  Regrettably, however, liberalization of trade restraints at the border was not matched by procompetitive regulatory reform inside borders.  Indeed, to the contrary, ACMDs have continued to proliferate, harming competition, consumers, and economic welfare.  As Singham further explained, the problem is particularly acute in developing countries:  “Because of the failure of early [regulatory] reform in the 1990s which empowered oligarchs and created vested interests in the whole of the developing world, national level reform is extremely difficult.”

To highlight the seriousness of the ACMD problem, Singham and several colleagues have developed a proprietary “Productivity Simulator” that focuses on potential national economic output based on measures of the effectiveness of domestic competition, international competition, and property rights protections within individual nations.  (The stronger the protections, the greater the potential of the free market to create wealth.)  The Productivity Simulator is able to show, with a regression accuracy of 90%, the potential gains of reducing distortions in a given country.  Every country has its own curve in the Productivity Simulator – it is a curve because the gains are exponential as one moves to the most difficult reforms.  If all distortions in the world were eliminated (i.e., at the ceiling of human potential), the Simulator predicts global GDP would rise by 1100% (a conservative estimate, because the Simulator could not be applied to certain highly distorted economies for which data were unavailable).  By illustrating the huge “dollars and cents” magnitude of economic losses due to anticompetitive distortions, the Simulator could make the ACMD problem more concrete and thereby help invigorate reform efforts.
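The Simulator itself is proprietary and its methodology is not public, but the basic shape of the claim – that gains grow exponentially as a country works through progressively harder reforms – can be sketched in a few lines of Python. Everything below (the functional form, the 150% ceiling, the curvature parameter) is a hypothetical illustration, not the Simulator’s actual model.

```python
import math

# Toy sketch of the "exponential gains" claim behind the Productivity
# Simulator.  All parameters here are hypothetical illustrations; the
# actual Simulator's functional form and data are proprietary.

def potential_gdp_gain(reform_share, max_gain=1.5, k=3.0):
    """Fractional GDP gain from removing a share of a country's distortions.

    reform_share -- fraction of distortions removed, from 0.0 to 1.0
    max_gain     -- assumed ceiling gain if all distortions go (150% here)
    k            -- curvature: larger k back-loads the gains onto the
                    hardest, last-to-be-removed distortions
    """
    if not 0.0 <= reform_share <= 1.0:
        raise ValueError("reform_share must lie between 0 and 1")
    # Convex curve through (0, 0) and (1, max_gain): early, easy reforms
    # yield modest gains, while the final, hardest reforms yield
    # disproportionately large ones.
    return max_gain * (math.exp(k * reform_share) - 1) / (math.exp(k) - 1)

if __name__ == "__main__":
    for share in (0.25, 0.50, 0.75, 1.00):
        print(f"{share:.0%} of distortions removed -> "
              f"{potential_gdp_gain(share):.0%} potential GDP gain")
```

On this stylized curve, removing the first quarter of distortions yields a gain of under 10%, while the last quarter accounts for more than half of the total – the back-loaded pattern Singham describes.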

Singham also has adapted his Simulator technique to demonstrate the potential for economic growth in proposed “Enterprise Cities” (“e-Cities”), free-market-oriented zones within a country that avoid ACMDs and provide strong property rights and rule of law protections.  (Existing city states such as Hong Kong, Singapore, and Dubai already possess e-City characteristics.)  Individual e-City laws, regulations, and dispute-resolution mechanisms are negotiated between individual governments and entrepreneurial project teams headed by Singham.  (Already, potential e-Cities are under consideration in Morocco, Saudi Arabia, Bosnia & Herzegovina, and Somalia.)  Private investors would be attracted to e-Cities due to their free market regulatory climate and legal protections.  To the extent that e-Cities are launched and thrive, they may serve as “demonstration projects” for the welfare benefits of dismantling ACMDs.

Following Singham’s presentation, I discussed analyses of the ACMD problem carried out in recent years by major international organizations, including the World Bank, the Organization for Economic Cooperation and Development (OECD, an economic think tank funded by developed countries), and the International Competition Network (ICN, a network of national competition agencies and expert legal and economic advisers that produces non-binding “best practices” recommendations dealing with competition law and policy).  The OECD’s “Competition Assessment Toolkit” is a how-to manual for ferreting out ACMDs – it “helps governments to eliminate barriers to competition by providing a method for identifying unnecessary restraints on market activities and developing alternative, less restrictive measures that still achieve government policy objectives.”  The OECD has used the Toolkit to demonstrate the huge economic cost to the Greek economy (5.2 billion euros) of just a very small subset of anticompetitive regulations.  The ICN has drawn on Toolkit principles in developing “Recommended Practices on Competition Assessment” that national competition agencies can apply in opposing ACMDs.  In a related vein, the ICN has also produced a “Competition Culture Project Report” that provides useful survey-based analysis that competition agencies could draw upon to generate public support for dismantling ACMDs.  The World Bank has cooperated with ICN advocacy efforts.  It has sponsored annual World Bank forums featuring industry-specific studies of the costs of regulatory restrictions, held in conjunction with ICN annual conferences, and (beginning in 2015) it has joined with the ICN in supporting annual “competition advocacy contests” in which national competition agencies are able to highlight economic improvements due to specific regulatory reform successes.

Developed countries also suffer from ACMDs.  For example, occupational licensing restrictions in the United States affect over a quarter of the work force, and, according to a 2015 White House Report, “licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines.”  Moreover, the multibillion dollar cost burden of federal regulations continues to grow rapidly, as documented by the Heritage Foundation’s annual “Red Tape Rising” reports.

I closed my presentation by noting that statutory international trade law reforms operating at the border could complement efforts to reduce regulatory burdens operating inside the border.  In particular, I cited my 2015 Heritage study recommending that United States antidumping law be revised to adopt a procompetitive antitrust-based standard (in contrast to the current approach that serves as an unjustified tax on certain imports).  I also noted the importance of ensuring that trade laws protect against imports that violate intellectual property rights, because such imports undermine competition on the merits.

In sum, the effort to reduce the burdens of ACMDs continues to be pursued and highlighted in research, proposed demonstration projects, and efforts to spur regulatory reform.  This is a long-term initiative very much worth pursuing, even though its near-term successes may prove minor at best.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are the Good, the Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and the Commission this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will impose considerable burdens on companies built on providing information to the world. There are real costs to administering such a rule, and those costs will not be borne by search engines, social networks, and advertisers, but by consumers, who ultimately will have to either find a different way to pay for the popular online services they want or go without them. Google, for example, has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only with economic efficiency, but also with the right to free expression that most European countries recognize (though not necessarily with the robustness of the First Amendment in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a real tension here. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict data minimization requirements could slow the remarkable economic growth generated by the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also burden small companies that have no ability to hurt consumers by restricting their ability to take data elsewhere. In any event, multi-homing is ubiquitous in the Internet economy. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually write a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming to reduce this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright when explaining why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere will inevitably fall disproportionately on these important US companies. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad and the Ugly, Blondie and Tuco have an exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

Earlier this week Senators Orrin Hatch and Ron Wyden and Representative Paul Ryan introduced bipartisan, bicameral legislation, the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 (otherwise known as Trade Promotion Authority or “fast track” negotiating authority). The bill would enable the Administration to negotiate free trade agreements subject to appropriate Congressional review.

Nothing bridges partisan divides like free trade.

Top presidential economic advisors from both parties support TPA. And the legislation was greeted with enthusiastic support from the business community. Indeed, a letter supporting the bill was signed by 269 of the country’s largest and most significant companies, including Apple, General Electric, Intel, and Microsoft.

Among other things, the legislation includes language calling on trading partners to respect and protect intellectual property. That language in particular was (not surprisingly) widely cheered in a letter to Congress signed by a coalition of sixteen technology, content, manufacturing and pharmaceutical trade associations, representing industries accounting for (according to the letter) “approximately 35 percent of U.S. GDP, more than one quarter of U.S. jobs, and 60 percent of U.S. exports.”

Strong IP protections also enjoy bipartisan support in much of the broader policy community. Indeed, ICLE recently joined sixty-seven think tanks, scholars, advocacy groups and stakeholders on a letter to Congress expressing support for strong IP protections, including in free trade agreements.

Despite this overwhelming support for the bill, the Internet Association (a trade association representing 34 Internet companies, including giants like Google and Amazon but mostly smaller companies like Coinbase and OkCupid) expressed concern with the intellectual property language in the TPA legislation, asserting that “[i]t fails to adopt a balanced approach, including the recognition that limitations and exceptions in copyright law are necessary to promote the success of Internet platforms both at home and abroad.”

But the proposed TPA bill does recognize “limitations and exceptions in copyright law,” as the Internet Association is presumably well aware. Among other things, the bill supports “ensuring accelerated and full implementation of the Agreement on Trade-Related Aspects of Intellectual Property Rights,” which specifically mentions exceptions and limitations on copyright, and it advocates “ensuring that the provisions of any trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law,” which also recognizes copyright exceptions and limitations.

What the bill doesn’t do — and wisely so — is advocate for the inclusion of mandatory fair use language in U.S. free trade agreements.

Fair use is an exception under U.S. copyright law to the normal rule that one must obtain permission from the copyright owner before exercising any of the exclusive rights in Section 106 of the Copyright Act.

Including such language in TPA would require U.S. negotiators to demand that trading partners enact U.S.-style fair use language. But as ICLE discussed in a recent White Paper, if broad, U.S.-style fair use exceptions are infused into trade agreements, they could actually increase piracy and discourage artistic creation and innovation — particularly in nations without a strong legal tradition implementing such provisions.

All trade agreements entered into by the U.S. since 1994 include a mechanism for trading partners to enact copyright exceptions and limitations, including fair use, should they so choose. These copyright exceptions and limitations must conform to a global standard — the so-called “three-step test” — established under the auspices of the 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement, and with roots going back to the 1967 amendments to the 1886 Berne Convention.

According to that standard,

Members shall confine limitations or exceptions to exclusive rights to

  1. certain special cases, which
  2. do not conflict with a normal exploitation of the work and
  3. do not unreasonably prejudice the legitimate interests of the right holder.

This three-step test provides a workable standard for balancing copyright protections with other public interests. Most important, it sets flexible (but by no means unlimited) boundaries, so, rather than squeezing every jurisdiction into the same box, it accommodates a wide range of exceptions and limitations to copyright protection, ranging from the U.S. fair use approach, to the fair dealing exception in other common law countries, to the various statutory exceptions adopted in civil law jurisdictions.

Fair use is an inherently common law concept, developed by case-by-case analysis and a system of binding precedent. In the U.S. it has been codified by statute, but only after two centuries of common law development. Even as codified, fair use takes the form of guidance to judicial decision-makers assessing whether any particular use of a copyrighted work merits the exception; it is not a prescriptive statement, and judicial interpretation continues to define and evolve the doctrine.

Most countries in the world, on the other hand, have civil law systems that spell out specific exceptions to copyright protection, that don’t rely on judicial precedent, and that are thus incompatible with the common law, fair use approach. The importance of this legal flexibility can’t be overstated: Only four countries out of the 166 signatories to the Berne Convention have adopted fair use since 1967.

Additionally, from an economic perspective the rationale for fair use would seem to be receding, not expanding, further eroding the justification for its mandatory adoption via free trade agreements.

As digital distribution, the Internet and a host of other technological advances have reduced transaction costs, it’s easier and cheaper for users to license copyrighted content. As a result, the need to rely on fair use to facilitate socially valuable uses of content that otherwise wouldn’t occur because of prohibitive costs of contracting is diminished. Indeed, it’s even possible that the existence of fair use exceptions may inhibit the development of these sorts of mechanisms for simple, low-cost agreements between owners and users of content – with consequences beyond the material that is subject to the exceptions. Some socially valuable uses, like parody, may indeed merit exceptions because of rights holders’ unwillingness, rather than inability, to license, but U.S.-style fair use is in no way necessary to facilitate such exceptions. In short, the boundaries of copyright exceptions should be contracting, not expanding.

It’s also worth noting that simple marketplace observations seem to undermine assertions by Internet companies that they can’t thrive without fair use. Google Search, for example, has grown big enough to attract the (misguided) attention of EU antitrust regulators, despite no European country having enacted a U.S.-style fair use law. Indeed, European regulators claim that the company has a 90% share of the market — without fair use.

Meanwhile, companies like Netflix contend that their ability to cache temporary copies of video content in order to improve streaming quality would be imperiled without fair use. But it’s impossible to see how Netflix is able to negotiate extensive, complex contracts with copyright holders to actually show their content, yet is somehow unable to negotiate an additional clause or two in those contracts to ensure the quality of those performances without fair use.

Properly bounded exceptions and limitations are an important aspect of any copyright regime. But given the mix of legal regimes among current prospective trading partners, as well as other countries with whom the U.S. might at some stage develop new FTAs, it’s highly likely that the introduction of U.S.-style fair use rules would be misinterpreted and misapplied in certain jurisdictions, resulting in excessively lax copyright protection that undermines incentives to create and innovate. Of course, for the self-described consumer advocates pushing for fair use, this is surely the goal. Further, mandating the inclusion of fair use in trade agreements through TPA legislation would, in essence, force the U.S. to ignore the legal regimes of its trading partners and weaken the protection of copyright in trade agreements, again undermining the incentive to create and innovate.

There is no principled reason, in short, for TPA to mandate adoption of U.S.-style fair use in free trade agreements. Congress should pass the TPA legislation as introduced, and resist any rent-seeking attempts to include fair use language.

Last week, the George Washington University Center for Regulatory Studies convened a conference (GW Conference) on the status of Transatlantic Trade and Investment Partnership (TTIP) negotiations between the European Union (EU) and the United States (U.S.), which were launched in 2013 and will continue for an indefinite period. In launching TTIP, the Obama Administration claimed that this pact would raise economic welfare in the U.S. and the EU through stimulating investment and lowering non-tariff barriers between the two jurisdictions, by, among other measures, “significantly cut[ting] the cost of differences in [European Union and United States] regulation and standards by promoting greater compatibility, transparency, and cooperation.”

Whether TTIP, if enacted, would actually raise economic welfare in the United States is an open question, however. As a recent Heritage Foundation analysis of TTIP explained, a TTIP focus on “harmonizing” regulations could actually lower economic freedom (and welfare) by “regulating upward” through acceptance of the more intrusive approach, and by precluding future competition among alternative regulatory models that could lead to welfare-enhancing regulatory improvements. Thus, the Heritage study recommended that “[a]ny [TTIP] agreement should be based on mutual recognition, not harmonization, of regulations.”

Unfortunately, discussion at the GW Conference indicated that the welfare-superior mutual recognition approach has been rejected by negotiators – at least as of now. In response to a question I posed on the benefits of mutual recognition, an EU official responded that such an “academic” approach is not “realistic,” while a senior U.S. TTIP negotiator indicated that mutual recognition could prove difficult where regulatory approaches differ. I read those diplomatically couched responses as signaling that both sides oppose the mutual recognition approach.

This is a real problem. As part of TTIP, U.S. and EU sector-specific regulators are actively engaged in discussing regulatory particulars. There is the distinct possibility that the regulators may agree on measures that raise regulatory burdens for the sectors covered – particularly given the oft-repeated motto at the GW Conference that TTIP must not reduce existing levels of “protection” for health, safety, and the environment. (Those assurances eschew any cost-benefit calculus that might justify existing protection levels.) This conclusion is further supported by public choice theory, which suggests that regulators may be expected to focus on expanding the size and scope of their regulatory domains, not on contracting them.

To make things worse, TTIP raises the possibility that the highly successful U.S. tradition of reliance on private sector-led voluntary consensus standards, as opposed to the EU’s preference for heavy government involvement in standard setting, may be undermined. Any move toward greater direct government influence on U.S. standard setting as part of a TTIP bargain would further undermine the vibrancy, competition, and innovation that have led to the great international success of U.S.-developed technical standards.

As a practical matter, however, is there time for a change in direction in TTIP negotiations regarding regulation and standards? Yes, there is. The TTIP negotiators face no true deadline. Moreover, as a matter of political reality, the eventual U.S. statutory adoption of TTIP measures may require the passage by Congress of “fast-track” trade promotion authority (TPA), which provides for congressional up-or-down votes (without possibility of amendment) on legislation embodying trade deals that have been negotiated by the Executive Branch. Given the political sensitivity of trade deals, they cannot easily be renegotiated if they are altered by congressional amendments. (Indeed, in recent decades all major trade agreements requiring implementing legislation have proceeded under TPA.)

If the Obama Administration decides that it wants to advance TTIP, it must rely on a Republican-controlled Congress to obtain TPA. Before it grants such authority, Congress should conduct hearings and demand that Administration officials testify about key aspects of the Administration’s TTIP negotiating philosophy, and, in particular, on how U.S. TTIP negotiators are approaching regulatory differences between the U.S. and the EU. Congress should make it a prerequisite to the grant of TPA that the final TTIP agreement embody welfare-enhancing mutual recognition of regulations and standards, rather than welfare-reducing harmonization. It should vote down any TTIP negotiated deal that fails to satisfy this requirement.

I thank Truth on the Market (and especially Geoff Manne) for adding me as a regular TOTM blogger, writing on antitrust, IP, and regulatory policy. I am a newly minted Senior Legal Fellow at the Heritage Foundation, and an alumnus of BlackBerry and the Federal Trade Commission.

Representatives of over 100 competition agencies from around the globe, joined by “non-governmental advisors” (NGAs) from think tanks, universities and the private sector, gathered in Marrakech two weeks ago for the 13th Annual Conference of the International Competition Network (ICN).

The ICN, founded in 2001, seeks to promote “soft convergence” in competition law and policy by releasing non-binding (but highly influential) recommended “best practices,” holding teleseminars and workshops, and disseminating educational and training materials for use by governments.  ICN members produce their output through flexible project-oriented and results-based working groups, dealing with mergers, unilateral conduct, cartels, competition advocacy, and agency effectiveness (how to improve agency performance).  (I have been involved in ICN work since 2006, as a U.S. Federal Trade Commission representative and an NGA.  The term “competition” is generally employed in lieu of “antitrust” in most foreign jurisdictions.)

The Marrakech Conference yielded two new sets of recommended practices, focused on competition assessment and predatory pricing.  (I will have more to say on predatory pricing in my next blog post.)  To the extent they are eventually implemented in the U.S., the competition assessment recommendations could lower the burden of government-imposed regulatory restrictions to the benefit of American consumers and American competitiveness.

As then-FTC Chairman Tim Muris observed in 2003, in highlighting the importance of combating government-imposed competitive restraints,

[a]ttempting to protect competition by focusing solely on private restraints is like trying to stop the flow of water at a fork in a stream by blocking only one of the channels.  Unless you block both channels, you are not likely to even slow, much less stop, the flow. Eventually, all the water will flow toward the unblocked channel.

Indeed, anticompetitive government regulations that restrict entry, protect state-sponsored firms, and otherwise dampen the competitive process are legion, and widely viewed as imposing far greater harm to consumer welfare than the purely private restraints traditionally condemned by antitrust. Because they operate openly and are backed by the enforcement power of government, public restraints, unlike private restraints, cannot be undermined by market forces, and thus are far more likely to have sweeping and harmful long-term effects.

The FTC and other competition agencies have employed “competition advocacy” to argue against particular anticompetitive government restrictions, but those efforts historically have been limited in number, scope, and effectiveness.  Despite the huge potential welfare benefits from lifting anticompetitive restrictions, those restraints typically are the fruits of successful lobbying by private beneficiaries of competitive distortions, or by “public interest” groups that trust rule by government fiat over market forces.  Moreover, consumers at large are generally ill-informed about regulatory harms, and the costs of organizing in favor of reform efforts are prohibitive.

Recently, however, international organizations, including the OECD, UNCTAD, and the World Bank, have stepped forward to highlight the costs of public sector regulatory restraints and to help competition agencies spot and advocate against different sorts of restrictions.  Building on these initiatives (and in particular the OECD’s Competition Assessment Toolkit), the ICN’s Advocacy Working Group drafted Recommended Practices on Competition Assessment (RPCA) that the ICN adopted and released as a new consensus product in Marrakech.

The RPCA apply broadly to proposed and existing legislation, regulations, and policies that may restrict competition.  Recognizing that government competition agencies differ greatly in their capacities and ability to influence other government bodies, the RPCA note that competition assessments can take many forms, ranging from recommendations drawn from application of general economic theory to resource-intensive competition impact assessments, with many variations in between.  The RPCA stress that they are intended to provide guidance, not require particular assessments, and that government entities other than competition agencies can carry out valuable assessment work.

The RPCA provide a comprehensive “soup to nuts” template for agencies tasked with assessments, comprising both process-related and substantive elements:

  • A competition assessment should identify an existing or proposed policy that may unduly restrict competition and evaluate its likely impact on competition;
  • Competition agencies should advocate for a policymaking environment that promotes consideration of competition principles (including delineation of legal authority and openness to outside sources of advice);
  • A transparent process should be used to conduct assessments;
  • Agencies should focus assessments on types of restrictions that pose the greatest threat to competition, and design selection criteria (which are described) to prioritize competition assessment among other advocacy activities;
  • Agencies should consider institutional arrangements and relationships with policymakers in building assessment programs (practical advice designed to enhance the political viability of assessments);
  • Agencies should consider whether a competitive restriction is reasonably related to the goals of the policy under review and whether the policy goal could be achieved without harming competition or in a less restrictive manner;
  • A competition assessment should start by identifying and considering the goals and objectives of the policy in question and review prior work in the area;
  • Agencies should consider how a policy’s restrictions are likely to influence the market structure and behavior of firms and customers in the market or neighboring markets;
  • Once a restraint and its possible competitive effects have been identified, agencies should evaluate the likely competitive effects on the basis of sound economic theory, and, where feasible, on empirical evidence;
  • Agencies should carefully consider the form of competition assessment most appropriate for a particular situation (i.e., agencies should be free to issue a formal or informal opinion with flexibility as to the manner of delivery);
  • Agencies should seek to deliver a competition assessment in a timely fashion; and,
  • Agencies should engage with interested third parties (e.g., policy organizations and domestic peer agencies) to promote policymakers’ consideration of an assessment.

The RPCA shine particularly bright in providing a concise yet nuanced evaluation of the sorts of restraints that are most likely to undermine the competitive process, including a cogent discussion of barriers to entry, exit, or expansion within a market; of policies that control how firms are allowed to compete in a market; of policies that shield firms from competitive pressure; and of policies that control the choices available to consumers.  The RPCA also highlight the value of attempting, where feasible, to derive quantitative welfare estimates of the costs of particular restrictions, based on a neutral metric and other tools of economic analysis.  Over the next year further work will be done on cataloguing existing case studies that contain welfare estimates and on the derivation of a metric.

The RPCA are no short-term panacea, but rather a practical manifesto for long-run regulatory reform.  They shed a useful spotlight on categories of economically harmful regulations that occur in a wide range of countries – not just in historically state-dominated economies.  Rent-seeking is ubiquitous, and regulations too often reflect wealth-destructive competitive limitations masquerading in public interest dress in all sorts of jurisdictions, including the United States.  Given the recent rapid rise in U.S. regulatory activity, the identification of U.S. federal and state government rules that undermine competition surely will remain a target-rich zone for competition advocates.

Let’s hope that, over time, when the political tides yield greater support for economic liberty, the lessons of Marrakech will point the way to repealing welfare-destructive regulatory impositions across the globe.

The ridiculousness currently emanating from ICANN and the NTIA (see these excellent posts from Milton Mueller and Eli Dourado on the issue) over .AMAZON, .PATAGONIA and other “geographic”/commercial TLDs is precisely why ICANN (and, apparently, the NTIA) is problematic as a regulator.

The NTIA’s response to ICANN’s Governmental Advisory Committee’s (GAC) objection to Amazon’s application for the .AMAZON TLD (along with similar applications from other businesses for other TLDs) is particularly troubling, as Mueller notes:

In other words, the US statement basically says “we think that the GAC is going to do the wrong thing; its most likely course of action has no basis in international law and is contrary to vital policy principles the US is supposed to uphold. But who cares? We are letting everyone know that we will refuse to use the main tool we have that could either stop GAC from doing the wrong thing or provide it with an incentive to moderate its stance.”

Competition/antitrust issues don’t seem to be the focus of this latest chapter in the gTLD story, but it is instructive on this score nonetheless. As Berin Szoka and I wrote in ICLE’s comment to ICANN on gTLDs:

Among the greatest threats to this new “land rush” of innovation is the idea that ICANN should become a competition regulator, deciding whether to approve a TLD application based on its own competition analysis. But ICANN is not a regulator. It is a coordinator. ICANN should exercise its coordinating function by applying the same sort of analysis that it already does in coordinating other applications for TLDs.

* * *

Moreover, the practical difficulties in enforcing different rules for generic TLDs as opposed to brand TLDs likely render any competition pre-clearance mechanism unworkable. ICANN has already determined that .brand TLDs can and should be operated as closed domains for obvious and good reasons. But differentiating between, say .amazon the brand and .amazon the generic or .delta the brand and .delta the generic will necessarily result in arbitrary decisions and costly errors.

Of most obvious salience: implicit in the GAC’s recommendation is the notion that somehow Amazon.com is sufficiently different from .AMAZON to justify denying Amazon’s ownership of the latter. But as Berin and I point out:

While closed gTLDs might seem to some to limit competition, that limitation would occur only within a particular, closed TLD. But it has every potential to be outweighed by the dramatic opening of competition among gTLDs, including, importantly, competition with .com.

In short, the markets for TLDs and domain name registrations do not present particular competitive risks, and there is no a priori reason for ICANN to intervene prospectively.

In other words, treating Amazon.com and .AMAZON as different products, in different relevant markets, is a mistake. No doubt Amazon.com would, even if .AMAZON were owned by Amazon, remain for the foreseeable future the more relevant site. If Latin American governments are concerned with cultural and national identity protection, they should (not that I’m recommending this) focus their objections on Amazon.com. But the reality is that Amazon.com doesn’t compromise cultural identity, and neither would Amazon’s ownership of .AMAZON. Rather, the wide availability of new TLDs opens up an enormous range of new competitive TLD and SLD constraints on existing, dominant .COM SLDs, any number of which could be effective in promoting and preserving cultural and national identities.

By the way – Amazonia.com, Amazonbasin.com and Amazonrainforest.com, presumably among many others, look to be unused and probably available for purchase. Perhaps opponents of Amazon’s ownership of .AMAZON should set their sights on those or other SLDs and avoid engaging in the sort of politicking that will ultimately ruin the Internet.

China

Paul H. Rubin —  27 June 2011

There are many stories about unrest in China.  Many factors are blamed for this unrest, including low wages, poor working conditions, and political factors.  But there is one thing that is not generally mentioned:  demographics.  The one child policy, coupled with a preference for males (due to both economic and cultural factors), means that there are significant numbers of unmarried and probably unmarriageable males.  This leads to severe male-male competition.  However, it also means that there are large numbers of socially discontented men with little to lose.  Similar factors probably operated in the Arab world.  In both cases, it may be difficult to maintain an open democratic society.  I discussed this in Darwinian Politics, beginning at page 118.  It is also the theme of the book Bare Branches by Valerie M. Hudson and Andrea M. den Boer.  Because of demographic factors relating both to a very peculiar age structure and to the gender imbalance mentioned here, China is going to face serious difficulties in the future.  Those projecting increasing power for China do not always take these factors into account.

The New York Times has an interesting story about land markets in China.  In order to get married, a man needs to own property, and land prices are very high in China.  As is its habit, the Times blames “overeager developers who force residents out of old neighborhoods.”

In fact, the Times gets it backwards.  The information needed to understand the issue is in the story: “The marriage competition is fierce, and statistically, women hold the cards. Given the nation’s gender imbalance, an outgrowth of a cultural preference for boys and China’s stringent family-planning policies, as many as 24 million men could be perpetual bachelors by 2020, according to the report.”  So what is happening is that there is a shortage of marriageable women and it is competition for the land needed to attract these women that is driving up land prices.

This competition is one unfortunate side effect of the one child policy and the Chinese preference for boys.  These 24 million unmarriageable men are going to be a long-term problem for China.  In my book Darwinian Politics I argue that a large core of perpetual bachelors makes a free and open society difficult because this core will lead to social instability; the argument is also forcefully made in Bare Branches: The Security Implications of Asia’s Surplus Male Population by Valerie M. Hudson and Andrea M. den Boer.

Much has been written about the problem of China’s aging population, but I don’t think we have paid enough attention to the issues of gender imbalance.  More generally, I think much of the course of world politics over the next century is going to be driven by major demographic trends, and I think these trends are worthy of increased study.  Nicholas Eberstadt of AEI is doing this sort of work, but I think there is much more to be done.

In light of economic worries in Vietnam, the WSJ reports that the country is soon likely to impose a widespread set of price controls and restrictions on political activity after an encouraging move toward freer markets:

Carlyle Thayer, a veteran Vietnam watcher and professor at the Australian Defense Academy in Canberra, says conservative factions in the ruling Politburo are tightening their grip on the country as Vietnam’s economic worries—especially inflation and fallout from currency devaluations—grow. He says he expects more crackdowns and arrests to come in the run-up to the country’s 2011 Party Congress, a major political event that will aim to map out Vietnam’s political and economic direction for the following five years.  In turn, the crackdowns threaten to curtail investment and economic growth in the country…

Now, the price-control unit of Vietnam’s Finance Ministry is drafting proposals that, if implemented by the government, would compel private and foreign-owned companies to report pricing structures, according to documents viewed by The Wall Street Journal and corroborated by Vietnamese officials.  In some cases, the proposed rules would allow the government to set prices on a wide range of privately made or imported goods, including petroleum products, fertilizers and milk to help contain inflation as Vietnam continues pumping money into its volatile economy. Typically, the government applies this kind of aggressive measure only to state-owned businesses, and it is unclear whether Vietnam will write the wider rules into law.

Somewhat relatedly, here is one of my favorite papers about the economics of contractual relationships and enforcement institutions in Vietnam (McMillan & Woodruff).