
The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights. Broadly, it would:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright but instead are categorized as “high-risk” and subject to strict rules before they can be used or put on the market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating credit-worthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms against the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail on each requirement, it is worth noting that some are likely to have competition- and innovation-distorting effects.

The rule that data used to train, validate, or test a high-risk AI has to be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free datasets exist, or that errors can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of research in artificial intelligence, where sometimes only small and lower-quality datasets are available to train AI. A predictable effect is that the rule would benefit large companies that are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire that data.

Providers of high-risk AIs also must submit technical and user documentation that details voluminous information about the AI system, including descriptions of the AI’s elements, its development, monitoring, functioning, and control. These must demonstrate that the AI complies with all the requirements for high-risk AIs, in addition to documenting its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, as European enterprises already consider liability for potential damages and regulatory obstacles as impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that it can never override a human user. While it may be important that an AI used in, say, the criminal-justice system be understandable by humans, this requirement could inhibit sophisticated uses beyond the reasoning of a human brain, such as safely operating a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost for the use of high-risk AIs.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Such AIs must be labeled to make a user aware they are looking at or listening to manipulated images, video, or audio. Chatbots must also be labeled to ensure humans are aware when they are speaking with an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties, and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure that high-risk systems conform with rules that require third-party assessments, such as remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
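The tiered maximums amount to a simple “greater of” calculation. As a rough sketch (the tier amounts are taken from the proposal as described above, but the function name and tier labels are purely illustrative, not anything defined in the regulation):

```python
def max_penalty(worldwide_turnover_eur: float, tier: str) -> float:
    """Illustrative 'greater of' penalty cap per the proposal's three tiers.

    tier labels (hypothetical shorthand):
      'prohibited'      - prohibited AIs and data-quality breaches
      'high_risk_other' - other high-risk requirements
      'misleading_info' - incorrect/incomplete/misleading info to regulators
    """
    tiers = {
        "prohibited": (30_000_000, 0.06),
        "high_risk_other": (20_000_000, 0.04),
        "misleading_info": (10_000_000, 0.02),
    }
    fixed_floor, turnover_pct = tiers[tier]
    # "Whichever is greater": the fixed amount acts as a floor for
    # smaller firms; the turnover percentage binds for larger ones.
    return max(fixed_floor, turnover_pct * worldwide_turnover_eur)

# A firm with 1 billion euros of worldwide turnover would face up to
# 60 million euros for a prohibited-AI violation (6% exceeds the 30M floor),
# while a firm with 100 million euros of turnover would face the 30M floor.
```

The point of the “whichever is greater” structure is that the exposure scales with firm size, which is one reason the stakes of guessing wrong about, say, the “subliminal techniques” provision are so high for large incumbents.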

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.

Although not always front page news, International Trade Commission (“ITC”) decisions can have major impacts on trade policy and antitrust law. Scott Kieff, a former ITC Commissioner, recently published a thoughtful analysis of Certain Carbon and Alloy Steel Products — a potentially important ITC investigation that implicates the intersection of these two policy areas. Scott was on the ITC when the investigation was initiated in 2016, but left in 2017 before the decision was finally issued in March of this year.

Perhaps most important, the case highlights an uncomfortable truth:

Sometimes (often?) Congress writes really bad laws and promotes really bad policies, but administrative agencies can do more harm to the integrity of our legal system by abusing their authority in an effort to override those bad policies.

In this case, that “uncomfortable truth” plays out in the context of the ITC majority’s effort to override Section 337 of the Tariff Act of 1930 by limiting the ability of the ITC to investigate alleged violations of the Act rooted in antitrust.

While we’re all for limiting the ability of competitors to use antitrust claims in order to impede competition (as one of us has noted: “Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions”), it is inappropriate to make an end-run around valid and unambiguous legislation in order to do so — no matter how desirable the end result. (As the other of us has noted: “Attempts to [effect preferred policies] through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.”)

Brief background

Under Section 337, the ITC is empowered to, among other things, remedy

Unfair methods of competition and unfair acts in the importation of articles… into the United States… the threat or effect of which is to destroy or substantially injure an industry in the United States… or to restrain or monopolize trade and commerce in the United States.

In Certain Carbon and Alloy Steel Products, the ITC undertook an investigation — at the behest of U.S. Steel Corporation — into alleged violations of Section 337 by the Chinese steel industry. The complaint was based upon a number of claims, including allegations of price fixing.

As ALJ Lord succinctly summarizes in her Initial Determination:

For many years, the United States steel industry has complained of unfair trade practices by manufacturers of Chinese steel. While such practices have resulted in the imposition of high tariffs on certain Chinese steel products, U.S. Steel seeks additional remedies. The complaint by U.S. Steel in this case attempts to use section 337 of the Tariff Act of 1930 to block all Chinese carbon and alloy steel from coming into the United States. One of the grounds that U.S. Steel relies on is the allegation that the Chinese steel industry violates U.S. antitrust laws.

The ALJ dismissed the antitrust claims (alleging violations of the Sherman Act), however, concluding that they failed to allege antitrust injury as required by US courts deciding Sherman Act cases brought by private parties under the Clayton Act’s remedial provisions:

Under federal antitrust law, it is firmly established that a private complainant must show antitrust standing [by demonstrating antitrust injury]. U.S. Steel has not alleged that it has antitrust standing or the facts necessary to establish antitrust standing and erroneously contends it need not have antitrust standing to allege the unfair trade practice of restraining trade….

In its decision earlier this year, a majority of ITC commissioners agreed, and upheld the ALJ’s Initial Determination.

In comments filed with the ITC following the ALJ’s Initial Determination, we argued that the ALJ erred in her analysis:

Because antitrust injury is not an express requirement imposed by Congress, because ITC processes differ substantially from those of Article III courts, and because Section 337 is designed to serve different aims than private antitrust litigation, the Commission should reinstate the price fixing claims and allow the case to proceed.

Unfortunately, in upholding the Initial Determination, the Commission compounded this error, and also failed to properly understand the goals of the Tariff Act, and, by extension, its own role as arbiter of “unfair” trade practices.

A tale of two statutes

The case appears to turn on an arcane issue of adjudicative process in antitrust claims brought under the antitrust laws in federal court, on the one hand, versus antitrust claims brought under Section 337 of the Tariff Act at the ITC, on the other. But it is actually about much more: the very purposes and structures of those laws.

The ALJ notes that

[The Chinese steel manufacturers contend that] under antitrust law as currently applied in federal courts, it has become very difficult for a private party like U.S. Steel to bring an antitrust suit against its competitors. [U.S.] Steel accepts this but says the law under section 337 should be different than in federal courts.

And as the ALJ further notes, this highlights the differences between the two regimes:

The dispute between U.S. Steel and the Chinese steel industry shows the conflict between section 337, which is intended to protect American industry from unfair competition, and U.S. antitrust laws, which are intended to promote competition for the benefit of consumers, even if such competition harms competitors.

Nevertheless, the ALJ (and the Commission) holds that antitrust laws must be applied in the same way in federal court as under Section 337 at the ITC.

It is this conclusion that is in error.

Judging from his article, it’s clear that Kieff agrees and would have dissented from the Commission’s decision. As he writes:

Unlike the focus in Section 16 of the Clayton Act on harm to the plaintiff, the provisions in the ITC’s statute — Section 337 — explicitly require the ITC to deal directly with harms to the industry or the market (rather than to the particular plaintiff)…. Where the statute protects the market rather than the individual complainant, the antitrust injury doctrine’s own internal logic does not compel the imposition of a burden to show harm to the particular private actor bringing the complaint. (Emphasis added)

Somewhat similar to the antitrust laws, the overall purpose of Section 337 focuses on broader, competitive harm — injury to “an industry in the United States” — not specific competitors. But unlike the Clayton Act, the Tariff Act does not accomplish this by providing a remedy for private parties alleging injury to themselves as a proxy for this broader, competitive harm.

As Kieff writes:

One stark difference between the two statutory regimes relates to the explicit goals that the statutes state for themselves…. [T]he Clayton Act explicitly states it is to remedy harm to only the plaintiff itself. This difference has particular significance for [the Commission’s decision in Certain Carbon and Alloy Steel Products] because the Supreme Court’s source of the private antitrust injury doctrine, its decision in Brunswick, explicitly tied the doctrine to this particular goal.

More particularly, much of the Court’s discussion in Brunswick focuses on the role the [antitrust injury] doctrine plays in mitigating the risk of unjustly enriching the plaintiff with damages awards beyond the amount of the particular antitrust harm that plaintiff actually suffered. The doctrine makes sense in the context of the Clayton Act proceedings in federal court because it keeps the cause of action focused on that statute’s stated goal of protecting a particular litigant only in so far as that party itself is a proxy for the harm to the market.

By contrast, since the goal of the ITC’s statute is to remedy for harm to the industry or to trade and commerce… there is no need to closely tie such broader harms to the market to the precise amounts of harms suffered by the particular complainant. (Emphasis and paragraph breaks added)

The mechanism by which the Clayton Act works is decidedly to remedy injury to competitors (including with treble damages). But because its larger goal is the promotion of competition, it cabins that remedy in order to ensure that it functions as an appropriate proxy for broader harms, and not simply a tool by which competitors may bludgeon each other. As Kieff writes:

The remedy provisions of the Clayton Act benefit much more than just the private plaintiff. They are designed to benefit the public, echoing the view that the private plaintiff is serving, indirectly, as a proxy for the market as a whole.

The larger purpose of Section 337 is somewhat different, and its remedial mechanism is decidedly different:

By contrast, the provisions in Section 337[] are much more direct in that they protect against injury to the industry or to trade and commerce more broadly. Harm to the particular complainant is essentially only relevant in so far as it shows harm to the industry or to trade and commerce more broadly. In turn, the remedies the ITC’s statute provides are more modest and direct in stopping any such broader harm that is determined to exist through a complete investigation.

The distinction between antitrust laws and trade laws is firmly established in the case law. And, in particular, trade laws not only focus on effects on industry rather than consumers or competition, per se, but they also contemplate a different kind of economic injury:

The “injury to industry” causation standard… focuses explicitly upon conditions in the U.S. industry…. In effect, Congress has made a judgment that causally related injury to the domestic industry may be severe enough to justify relief from less than fair value imports even if from another viewpoint the economy could be said to be better served by providing no relief. (Emphasis added)

Importantly, under Section 337 such harms to industry would ultimately have to be shown before a remedy would be imposed. In other words, demonstration of injury to competition is a constituent part of a case under Section 337. By contrast, such a demonstration is brought into an action under the antitrust laws by the antitrust injury doctrine as a function of establishing that the plaintiff has standing to sue as a proxy for broader harm to the market.

Finally, it should be noted, as ITC Commissioner Broadbent points out in her dissent from the Commission’s majority opinion, that U.S. Steel alleged in its complaint a violation of the Sherman Act, not the Clayton Act. Although its ability to enforce the Sherman Act arises from the remedial provisions of the Clayton Act, the substantive analysis of its claims is a Sherman Act matter. And the Sherman Act does not contain any explicit antitrust injury requirement. This is a crucial distinction because, as Commissioner Broadbent notes (quoting the Federal Circuit’s Tianrui case):

The “antitrust injury” standing requirement stems, not from the substantive antitrust statutes like the Sherman Act, but rather from the Supreme Court’s interpretation of the injury elements that must be proven under sections 4 and 16 of the Clayton Act.

* * *

Absent [] express Congressional limitation, restricting the Commission’s consideration of unfair methods of competition and unfair acts in international trade “would be inconsistent with the congressional purpose of protecting domestic commerce from unfair competition in importation….”

* * *

Where, as here, no such express limitation in the Sherman Act has been shown, I find no legal justification for imposing the insurmountable hurdle of demonstrating antitrust injury upon a typical U.S. company that is grappling with imports that benefit from the international unfair methods of competition that have been alleged in this case.

Section 337 is not a stand-in for other federal laws, even where it protects against similar conduct, and its aims diverge in important ways from those of other federal laws. It is, in other words, a trade protection provision, first and foremost, not an antitrust law, patent law, or even precisely a consumer protection statute.

The ITC hamstrings itself

Kieff lays out a number of compelling points in his paper, including an argument that the ITC was statutorily designed as a convenient forum with broad powers in order to enable trade harms to be remedied without resort to expensive and protracted litigation in federal district court.

But, perhaps even more important, he points to a contradiction in the ITC’s decision that is directly related to its statutory design.

Under the Tariff Act, the Commission is entitled to self-initiate a Section 337 investigation identical to the one in Certain Carbon and Alloy Steel Products. And, as in this case, private parties are also entitled to file complaints with the Commission that can serve as the trigger for an investigation. In both instances, the ITC itself decides whether there is sufficient basis for proceeding, and, although an investigation unfolds much like litigation in federal court, it is, in fact, an investigation (and decision) undertaken by the ITC itself.

Although the Commission is statutorily mandated to initiate an investigation once a complaint is properly filed, this is subject to a provision requiring the Commission to “examine the complaint for sufficiency and compliance with the applicable sections of this Chapter.” Thus, the Commission conducts a preliminary investigation to determine if the complaint provides a sound basis for institution of an investigation, not unlike an assessment of standing and evaluation of the sufficiency of a complaint in federal court — all of which happens before an official investigation is initiated.

Yet despite the fact that, before an investigation begins, the ITC either 1) decides for itself that there is sufficient basis to initiate its own action, or else 2) evaluates the sufficiency of a private complaint to determine if the Commission should initiate an action, the logic of the decision in Certain Carbon and Alloy Steel Products would apply different standards in each case. Writes Kieff:

There appears to be broad consensus that the ITC can self-initiate an antitrust case under Section 337 and in such a proceeding would not be required to apply the antitrust injury doctrine to itself or to anyone else…. [I]t seems odd to make [this] legal distinction… After all, if it turned out there really were harm to a domestic industry or trade and commerce in this case, it would be strange for the ITC to have to dismiss this action and deprive itself of the benefit of the advance work and ongoing work of the private party [just because it was brought to the ITC’s attention by a private party complaint], only to either sit idle or expend the resources to — flying solo that time — reinitiate and proceed to completion.

Odd indeed, because, in the end, what is instituted is an investigation undertaken by the ITC — whether it originates from a private party or from its own initiative. The role of a complaining party before the ITC is quite distinct from that of a plaintiff in an Article III court.

In trade these days, it always comes down to China

We are hesitant to offer justifications for Congress’ decision to grant the ITC a sweeping administrative authority to prohibit the “unfair” importation of articles into the US, but there could be good reasons that Congress enacted the Tariff Act as a protectionist statute.

In a recent Law360 article, Kieff noted that analyzing anticompetitive behavior in the trade context is more complicated than in the domestic context. To take the current example: By limiting the complainant’s ability to initiate an ITC action based on a claim that foreign competitors are conspiring to keep prices artificially low, the ITC majority decision may be short-sighted insofar as keeping prices low might actually be part of a larger industrial and military policy for the Chinese government:

The overlooked problem is that, as the ITC petitioners claim, the Chinese government is using its control over many Chinese steel producers to accomplish full-spectrum coordination on both price and quantity. Mere allegations of course would have to be proven; but it’s not hard to imagine that such coordination could afford the Chinese government effective surveillance and control over almost the entire worldwide supply chain for steel products.

This access would help the Chinese government run significant intelligence operations…. China is allegedly gaining immense access to practically every bid and ask up and down the supply chain across the global steel market in general, and our domestic market in particular. That much real-time visibility across steel markets can in turn give visibility into defense, critical infrastructure and finance.

Thus, by taking it upon itself to artificially narrow its scope of authority, the ITC could be undermining a valid congressional concern: that trade distortions not be used as a way to allow a foreign government to gain a more pervasive advantage over diplomatic and military operations.

No one seriously doubts that China is, at the very least, a supportive partner to much of its industry in a way that gives that industry some potential advantage over competitors operating in countries that receive relatively less assistance from national governments.

In certain industries — notably semiconductors and patent-intensive industries more broadly — the Chinese government regularly imposes onerous conditions (including mandatory IP licensing and joint ventures with Chinese firms, invasive audits, and obligatory software and hardware “backdoors”) on foreign tech companies doing business in China. It has long been an open secret that these efforts, ostensibly undertaken for the sake of national security, are actually aimed at protecting or bolstering China’s domestic industry.

And China could certainly leverage these partnerships to obtain information on a significant share of important industries and their participants throughout the world. After all, we are well familiar with this business model: cheap or highly subsidized access to a desired good or service in exchange for user data is the basic description of modern tech platform companies.

Only Congress can fix Congress

Stepping back from the ITC context, a key inquiry when examining antitrust through a trade lens is the extent to which countries will use antitrust as a non-tariff barrier to restrain trade. It is certainly the case that a sort of “mutually assured destruction” can arise where every country chooses to enforce its own ambiguously worded competition statute in a way that can favor its domestic producers to the detriment of importers. In the face of that concern, the impetus to try to apply procedural constraints on open-ended competition laws operating in the trade context is understandable.

And as a general matter, it also makes sense to be concerned when producers like U.S. Steel try to use our domestic antitrust laws to disadvantage Chinese competitors or keep them out of the market entirely.

But in this instance the analysis is more complicated. Like it or not, what amounts to injury in the international trade context, even with respect to anticompetitive conduct, is different from what’s contemplated under the antitrust laws. When the Tariff Act of 1922 was passed (its unfair-competition provision later became Section 337), the Senate Finance Committee Report that accompanied it described the scope of its unfair methods of competition authority as “broad enough to prevent every type and form of unfair practice” involving international trade. At the same time, Congress pretty clearly gave the ITC the discretion to proceed on a much less-constrained basis than that on which Article III courts operate.

If these are problems, Congress needs to fix them, not the ITC acting sua sponte.

Moreover, as Kieff’s paper (and our own comments in the Certain Alloy and Carbon Steel Products investigation) makes clear, there are also a number of relevant, practical distinctions between enforcement of the antitrust laws in a federal court in a case brought by a private plaintiff and an investigation of alleged anticompetitive conduct by the ITC under Section 337. Every one of these cuts against importing an antitrust injury requirement from federal court into ITC adjudication.

Instead, understandable as its motivation may be, the ITC majority’s approach in Certain Alloy and Carbon Steel Products requires disregarding Congressional intent, and that’s simply not a tenable interpretive approach for administrative agencies to take.

Protectionism is a terrible idea, but if that’s how Congress wrote the Tariff Act, the ITC is legally obligated to enforce the protectionist law it is given.

In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The Internet isn’t that exceptional.

Read the whole thing!

I have previously written at this site (see here, here, and here) and elsewhere (see here, here, and here) about the problem of anticompetitive market distortions (ACMDs), government-supported (typically crony capitalist) rules that weaken the competitive process, undermine free trade, slow economic growth, and harm consumers.  On May 17, the Heritage Foundation hosted a presentation by Shanker Singham of the Legatum Institute (a London think tank) and me on recent research and projects aimed at combatting ACMDs.

Singham began his remarks by noting that from the late 1940s to the early 1990s, trade negotiations under the auspices of the General Agreement on Tariffs and Trade (GATT) (succeeded by the World Trade Organization (WTO)) were highly successful in reducing tariffs and certain non-tariff barriers, and in promoting agreements to deal with trade-related aspects of such areas as government procurement, services, investment, and intellectual property, among others.  Regrettably, however, liberalization of trade restraints at the border was not matched by procompetitive regulatory reform inside borders.  Indeed, to the contrary, ACMDs have continued to proliferate, harming competition, consumers, and economic welfare.  As Singham further explained, the problem is particularly acute in developing countries:  “Because of the failure of early [regulatory] reform in the 1990s which empowered oligarchs and created vested interests in the whole of the developing world, national level reform is extremely difficult.”

To highlight the seriousness of the ACMD problem, Singham and several colleagues have developed a proprietary “Productivity Simulator” that estimates potential national economic output based on measures of the effectiveness of domestic competition, international competition, and property rights protections within individual nations.  (The stronger the protections, the greater the potential of the free market to create wealth.)  The Productivity Simulator is able to show, with a regression-based accuracy of 90%, the potential gains of reducing distortions in a given country.  Every country has its own curve in the Productivity Simulator – it is a curve because the gains are exponential as one moves to the most difficult reforms.  If all distortions in the world were eliminated (i.e., at the ceiling of human potential), the Simulator predicts global GDP would rise by 1,100% (a conservative estimate, because the Simulator could not be applied to certain highly regulatorily distorted economies for which data were unavailable).  By illustrating the huge “dollars and cents” magnitude of economic losses due to anticompetitive distortions, the Simulator could make the ACMD problem more concrete and thereby help invigorate reform efforts.
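
The Productivity Simulator itself is proprietary, but the intuition behind its exponential curve can be sketched with a toy calculation. The sketch below is purely illustrative — the function name and the 10%-per-reform figure are invented assumptions, not the Simulator's actual methodology — but it shows why, when each successive reform compounds on the last, the final and hardest reforms deliver the largest share of the gains:

```python
# Toy illustration only (NOT the proprietary Productivity Simulator):
# if each removed distortion multiplies output by a constant factor,
# cumulative gains grow exponentially with the number of reforms.
def cumulative_gain(num_reforms: int, gain_per_reform: float = 0.10) -> float:
    """Percent output gain after removing `num_reforms` distortions,
    assuming each removal compounds on the last (hypothetical numbers)."""
    return ((1 + gain_per_reform) ** num_reforms - 1) * 100

for n in (5, 10, 25):
    print(f"{n} reforms -> +{cumulative_gain(n):.0f}% output")
```

Under these made-up parameters, the first five reforms yield roughly a 61% gain while twenty-five yield nearly 1,000% — the same convex shape the Simulator's country curves describe.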

Singham also has adapted his Simulator technique to demonstrate the potential for economic growth in proposed “Enterprise Cities” (“e-Cities”), free-market oriented zones within a country that avoid ACMDs and provide strong property rights and rule-of-law protections.  (Existing city-states such as Hong Kong, Singapore, and Dubai already possess e-City characteristics.)  Individual e-City laws, regulations, and dispute-resolution mechanisms are negotiated between individual governments and entrepreneurial project teams headed by Singham.  (Already, potential e-Cities are under consideration in Morocco, Saudi Arabia, Bosnia & Herzegovina, and Somalia.)  Private investors would be attracted to e-Cities due to their free market regulatory climate and legal protections.  To the extent that e-Cities are launched and thrive, they may serve as “demonstration projects” for the welfare benefits of dismantling ACMDs.

Following Singham’s presentation, I discussed analyses of the ACMD problem carried out in recent years by major international organizations, including the World Bank, the Organization for Economic Cooperation and Development (OECD, an economic think tank funded by developed countries), and the International Competition Network (ICN, a network of national competition agencies and expert legal and economic advisers that produces non-binding “best practices” recommendations dealing with competition law and policy).  The OECD’s “Competition Assessment Toolkit” is a how-to manual for ferreting out ACMDs – it “helps governments to eliminate barriers to competition by providing a method for identifying unnecessary restraints on market activities and developing alternative, less restrictive measures that still achieve government policy objectives.”  The OECD has used the Toolkit to demonstrate the huge economic cost to the Greek economy (5.2 billion euros) of just a very small subset of anticompetitive regulations.  The ICN has drawn on Toolkit principles in developing “Recommended Practices on Competition Assessment” that national competition agencies can apply in opposing ACMDs.  In a related vein, the ICN has also produced a “Competition Culture Project Report” that provides useful survey-based analysis that competition agencies could draw upon to generate public support for dismantling ACMDs.  The World Bank has cooperated with ICN advocacy efforts: it has sponsored annual World Bank forums featuring industry-specific studies of the costs of regulatory restrictions, held in conjunction with ICN annual conferences, and, beginning in 2015, it has joined with the ICN in supporting annual “competition advocacy contests” in which national competition agencies are able to highlight economic improvements due to specific regulatory reform successes.

Developed countries also suffer from ACMDs.  For example, occupational licensing restrictions in the United States affect over a quarter of the work force, and, according to a 2015 White House Report, “licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines.”  Moreover, the multibillion-dollar cost burden of federal regulations continues to grow rapidly, as documented by the Heritage Foundation’s annual “Red Tape Rising” reports.

I closed my presentation by noting that statutory international trade law reforms operating at the border could complement efforts to reduce regulatory burdens operating inside the border.  In particular, I cited my 2015 Heritage study recommending that United States antidumping law be revised to adopt a procompetitive antitrust-based standard (in contrast to the current approach that serves as an unjustified tax on certain imports).  I also noted the importance of ensuring that trade laws protect against imports that violate intellectual property rights, because such imports undermine competition on the merits.

In sum, the effort to reduce the burdens of ACMDs continues to be pursued and highlighted in research, proposed demonstration projects, and efforts to spur regulatory reform.  This is a long-term initiative very much worth pursuing, even though its near-term successes may prove minor at best.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are the Good, the Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Council this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers who ultimately will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only with economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually be ratifying a form of protectionism into the rules. Both the intent and likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright among the reasons why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to affect these important U.S. companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

Earlier this week Senators Orrin Hatch and Ron Wyden and Representative Paul Ryan introduced bipartisan, bicameral legislation, the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 (otherwise known as Trade Promotion Authority or “fast track” negotiating authority). The bill would enable the Administration to negotiate free trade agreements subject to appropriate Congressional review.

Nothing bridges partisan divides like free trade.

Top presidential economic advisors from both parties support TPA. And the legislation was greeted with enthusiastic support from the business community. Indeed, a letter supporting the bill was signed by 269 of the country’s largest and most significant companies, including Apple, General Electric, Intel, and Microsoft.

Among other things, the legislation includes language calling on trading partners to respect and protect intellectual property. That language in particular was (not surprisingly) widely cheered in a letter to Congress signed by a coalition of sixteen technology, content, manufacturing and pharmaceutical trade associations, representing industries accounting for (according to the letter) “approximately 35 percent of U.S. GDP, more than one quarter of U.S. jobs, and 60 percent of U.S. exports.”

Strong IP protections also enjoy bipartisan support in much of the broader policy community. Indeed, ICLE recently joined sixty-seven think tanks, scholars, advocacy groups and stakeholders on a letter to Congress expressing support for strong IP protections, including in free trade agreements.

Despite this overwhelming support for the bill, the Internet Association (a trade association representing 34 Internet companies including giants like Google and Amazon, but mostly smaller companies like coinbase and okcupid) expressed concern with the intellectual property language in TPA legislation, asserting that “[i]t fails to adopt a balanced approach, including the recognition that limitations and exceptions in copyright law are necessary to promote the success of Internet platforms both at home and abroad.”

But the proposed TPA bill does recognize “limitations and exceptions in copyright law,” as the Internet Association is presumably well aware. Among other things, the bill supports “ensuring accelerated and full implementation of the Agreement on Trade-Related Aspects of Intellectual Property Rights,” which specifically mentions exceptions and limitations on copyright, and it advocates “ensuring that the provisions of any trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law,” which also recognizes copyright exceptions and limitations.

What the bill doesn’t do — and wisely so — is advocate for the inclusion of mandatory fair use language in U.S. free trade agreements.

Fair use is an exception under U.S. copyright law to the normal rule that one must obtain permission from the copyright owner before exercising any of the exclusive rights in Section 106 of the Copyright Act.

Including such language in TPA would require U.S. negotiators to demand that trading partners enact U.S.-style fair use language. But as ICLE discussed in a recent White Paper, if broad, U.S.-style fair use exceptions are infused into trade agreements they could actually increase piracy and discourage artistic creation and innovation — particularly in nations without a strong legal tradition implementing such provisions.

All trade agreements entered into by the U.S. since 1994 include a mechanism for trading partners to enact copyright exceptions and limitations, including fair use, should they so choose. These copyright exceptions and limitations must conform to a global standard — the so-called “three-step test” — established under the auspices of the 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement, and with roots going back to the 1967 amendments to the 1886 Berne Convention.

According to that standard,

Members shall confine limitations or exceptions to exclusive rights to

  1. certain special cases, which
  2. do not conflict with a normal exploitation of the work and
  3. do not unreasonably prejudice the legitimate interests of the right holder.

This three-step test provides a workable standard for balancing copyright protections with other public interests. Most important, it sets flexible (but by no means unlimited) boundaries, so, rather than squeezing every jurisdiction into the same box, it accommodates a wide range of exceptions and limitations to copyright protection, ranging from the U.S.’ fair use approach to the fair dealing exception in other common law countries to the various statutory exceptions adopted in civil law jurisdictions.

Fair use is an inherently common law concept, developed by case-by-case analysis and a system of binding precedent. In the U.S. it has been codified by statute, but only after two centuries of common law development. Even as codified, fair use takes the form of guidance to judicial decision-makers assessing whether any particular use of a copyrighted work merits the exception; it is not a prescriptive statement, and judicial interpretation continues to define and evolve the doctrine.

Most countries in the world, on the other hand, have civil law systems that spell out specific exceptions to copyright protection, that don’t rely on judicial precedent, and that are thus incompatible with the common law, fair use approach. The importance of this legal flexibility can’t be overstated: only four countries out of the 166 signatories to the Berne Convention have adopted fair use since 1967.

Additionally, from an economic perspective the rationale for fair use would seem to be receding, not expanding, further eroding the justification for its mandatory adoption via free trade agreements.

As digital distribution, the Internet and a host of other technological advances have reduced transaction costs, it’s easier and cheaper for users to license copyrighted content. As a result, the need to rely on fair use to facilitate some socially valuable uses of content that otherwise wouldn’t occur because of prohibitive costs of contracting is diminished. Indeed, it’s even possible that the existence of fair use exceptions may inhibit the development of these sorts of mechanisms for simple, low-cost agreements between owners and users of content – with consequences beyond the material that is subject to the exceptions. While, indeed, some socially valuable uses, like parody, may merit exceptions because of rights holders’ unwillingness, rather than inability, to license, U.S.-style fair use is in no way necessary to facilitate such exceptions. In short, the boundaries of copyright exceptions should be contracting, not expanding.

It’s also worth noting that simple marketplace observations seem to undermine assertions by Internet companies that they can’t thrive without fair use. Google Search, for example, has grown big enough to attract the (misguided) attention of EU antitrust regulators, despite no European country having enacted a U.S-style fair use law. Indeed, European regulators claim that the company has a 90% share of the market — without fair use.

Meanwhile, companies like Netflix contend that their ability to cache temporary copies of video content in order to improve streaming quality would be imperiled without fair use. But it’s impossible to see how Netflix is able to negotiate extensive, complex contracts with copyright holders to actually show their content, yet is somehow unable to negotiate an additional clause or two in those contracts to ensure the quality of those performances without fair use.

Properly bounded exceptions and limitations are an important aspect of any copyright regime. But given the mix of legal regimes among current prospective trading partners, as well as other countries with whom the U.S. might at some stage develop new FTAs, it’s highly likely that the introduction of U.S.-style fair use rules would be misinterpreted and misapplied in certain jurisdictions and could result in excessively lax copyright protection, undermining incentives to create and innovate. Of course for the self-described consumer advocates pushing for fair use, this is surely the goal. Further, mandating the inclusion of fair use in trade agreements through TPA legislation would, in essence, force the U.S. to ignore the legal regimes of its trading partners and weaken the protection of copyright in trade agreements, again undermining the incentive to create and innovate.

There is no principled reason, in short, for TPA to mandate adoption of U.S-style fair use in free trade agreements. Congress should pass TPA legislation as introduced, and resist any rent-seeking attempts to include fair use language.

Last week, the George Washington University Center for Regulatory Studies convened a Conference (GW Conference) on the Status of Transatlantic Trade and Investment Partnership (TTIP) Negotiations between the European Union (EU) and the United States (U.S.), which were launched in 2013 and will continue for an indefinite period of time. In launching TTIP, the Obama Administration claimed that this pact would raise economic welfare in the U.S. and the EU through stimulating investment and lowering non-tariff barriers between the two jurisdictions, by, among other measures, “significantly cut[ting] the cost of differences in [European Union and United States] regulation and standards by promoting greater compatibility, transparency, and cooperation.”

Whether TTIP, if enacted, would actually raise economic welfare in the United States is an open question, however. As a recent Heritage Foundation analysis of TTIP explained, a TTIP focus on “harmonizing” regulations could actually lower economic freedom (and welfare) by “regulating upward” through acceptance of the more intrusive approach, and by precluding future competition among alternative regulatory models that could lead to welfare-enhancing regulatory improvements. Thus, the Heritage study recommended that “[a]ny [TTIP] agreement should be based on mutual recognition, not harmonization, of regulations.”

Unfortunately, discussion at the GW Conference indicated that the welfare-superior mutual recognition approach has been rejected by negotiators – at least as of now. In response to a question I posed on the benefits of mutual recognition, an EU official responded that such an “academic” approach is not “realistic,” while a senior U.S. TTIP negotiator indicated that mutual recognition could prove difficult where regulatory approaches differ. I read those diplomatically couched responses as signaling that both sides opposed the mutual recognition approach.

This is a real problem. As part of TTIP, U.S. and EU sector-specific regulators are actively engaged in discussing regulatory particulars. There is the distinct possibility that the regulators may agree on measures that raise regulatory burdens for the sectors covered – particularly given the oft-repeated motto at the GW Conference that TTIP must not reduce existing levels of “protection” for health, safety, and the environment. (Those blandishments eschew any cost-benefit calculus to justify existing protection levels.) This conclusion is further supported by public choice theory, which suggests that regulators may be expected to focus on expanding the size and scope of their regulatory domains, not on contracting them.

To make things worse, TTIP raises the possibility that the highly successful U.S. tradition of reliance on private sector-led voluntary consensus standards, as opposed to the EU’s preference for heavy government involvement in standard-setting policies, may be undermined. Any move toward greater direct government influence on U.S. standard setting as part of a TTIP bargain would further undermine the vibrancy, competition, and innovation that have led to the great international success of U.S.-developed technical standards.

As a practical matter, however, is there time for a change in direction in TTIP negotiations regarding regulation and standards? Yes, there is. The TTIP negotiators face no true deadline. Moreover, as a matter of political reality, the eventual U.S. statutory adoption of TTIP measures may require the passage by Congress of “fast-track” trade promotion authority (TPA), which provides for congressional up-or-down votes (without possibility of amendment) on legislation embodying trade deals that have been negotiated by the Executive Branch. Given the political sensitivity of trade deals, they cannot easily be renegotiated if they are altered by congressional amendments. (Indeed, in recent decades all major trade agreements requiring implementing legislation have proceeded under TPA.)

If the Obama Administration decides that it wants to advance TTIP, it must rely on a Republican-controlled Congress to obtain TPA. Before it grants such authority, Congress should conduct hearings and demand that Administration officials testify about key aspects of the Administration’s TTIP negotiating philosophy, and, in particular, on how U.S. TTIP negotiators are approaching regulatory differences between the U.S. and the EU. Congress should make it a prerequisite to the grant of TPA that the final TTIP agreement embody welfare-enhancing mutual recognition of regulations and standards, rather than welfare-reducing harmonization. It should vote down any TTIP negotiated deal that fails to satisfy this requirement.

I thank Truth on the Market (and especially Geoff Manne) for adding me as a regular TOTM blogger, writing on antitrust, IP, and regulatory policy. I am a newly minted Senior Legal Fellow at the Heritage Foundation and an alumnus of BlackBerry and the Federal Trade Commission.

Representatives of over 100 competition agencies from around the globe, joined by “non-governmental advisors” (NGAs) from think tanks, universities and the private sector, gathered in Marrakech two weeks ago for the 13th Annual Conference of the International Competition Network (ICN).

The ICN, founded in 2001, seeks to promote “soft convergence” in competition law and policy by releasing non-binding (but highly influential) recommended “best practices,” holding teleseminars and workshops, and disseminating educational and training materials for use by governments.  ICN members produce their output through flexible project-oriented and results-based working groups, dealing with mergers, unilateral conduct, cartels, competition advocacy, and agency effectiveness (how to improve agency performance).  (I have been involved in ICN work since 2006, as a U.S. Federal Trade Commission representative and an NGA.  The term “competition” is generally employed in lieu of “antitrust” in most foreign jurisdictions.)

The Marrakech Conference yielded two new sets of recommended practices, focused on competition assessment and predatory pricing.  (I will have more to say on predatory pricing in my next blog post.)  To the extent they are eventually implemented in the U.S., the competition assessment recommendations could lower the burden of government-imposed regulatory restrictions to the benefit of American consumers and American competitiveness.

As then FTC Chairman Tim Muris observed in 2003, in highlighting the importance of combating government-imposed competitive restraints,

[a]ttempting to protect competition by focusing solely on private restraints is like trying to stop the flow of water at a fork in a stream by blocking only one of the channels.  Unless you block both channels, you are not likely to even slow, much less stop, the flow. Eventually, all the water will flow toward the unblocked channel.

Indeed, anticompetitive government regulations that restrict entry, protect state-sponsored firms, and otherwise dampen the competitive process are legion, and widely viewed as imposing far greater harm to consumer welfare than the purely private restraints traditionally condemned by antitrust. Because they operate openly and are backed by the enforcement power of government, public restraints, unlike private restraints, cannot be undermined by market forces, and thus are far more likely to have sweeping and harmful long-term effects.

The FTC and other competition agencies have employed “competition advocacy” to argue against particular anticompetitive government restrictions, but those efforts historically have been limited in number, scope, and effectiveness.  Despite the huge potential welfare benefits from lifting anticompetitive restrictions, those restraints typically are the fruits of successful lobbying by private beneficiaries of competitive distortions, or by “public interest” groups that trust rule by government fiat over market forces.  Moreover, consumers at large are generally ill-informed about regulatory harms, and the costs of organizing in favor of reform are prohibitive.

Recently, however, international organizations, including the OECD, UNCTAD, and the World Bank, have stepped forward to highlight the costs of public sector regulatory restraints and to help competition agencies spot and advocate against different sorts of restrictions.  Building on these initiatives (and in particular the OECD’s Competition Assessment Toolkit), the ICN’s Advocacy Working Group drafted Recommended Practices on Competition Assessment (RPCA) that the ICN adopted and released as a new consensus product in Marrakech.

The RPCA apply broadly to proposed and existing legislation, regulations, and policies that may restrict competition.  Recognizing that government competition agencies differ greatly in their capacities and ability to influence other government bodies, the RPCA note that competition assessments can take many forms, ranging from recommendations drawn from application of general economic theory to resource-intensive competition impact assessments, with many variations in between.  The RPCA stress that they are intended to provide guidance, not require particular assessments, and that government entities other than competition agencies can carry out valuable assessment work.

The RPCA provide a comprehensive “soup to nuts” template for agencies tasked with assessments, comprising both process-related and substantive elements:

  • A competition assessment should identify an existing or proposed policy that may unduly restrict competition and evaluate its likely impact on competition;
  • Competition agencies should advocate for a policymaking environment that promotes consideration of competition principles (including delineation of legal authority and openness to outside sources of advice);
  • A transparent process should be used to conduct assessments;
  • Agencies should focus assessments on types of restrictions that pose the greatest threat to competition, and design selection criteria (which are described) to prioritize competition assessment among other advocacy activities;
  • Agencies should consider institutional arrangements and relationships with policymakers in building assessment programs (practical advice designed to enhance the political viability of assessments);
  • Agencies should consider whether a competitive restriction is reasonably related to the goals of the policy under review and whether the policy goal could be achieved without harming competition or in a less restrictive manner;
  • A competition assessment should start by identifying and considering the goals and objectives of the policy in question and review prior work in the area;
  • Agencies should consider how a policy’s restrictions are likely to influence the market structure and behavior of firms and customers in the market or neighboring markets;
  • Once a restraint and its possible competitive effects have been identified, agencies should evaluate the likely competitive effects on the basis of sound economic theory, and, where feasible, on empirical evidence;
  • Agencies should carefully consider the form of competition assessment most appropriate for a particular situation (i.e., agencies should be free to issue a formal or informal opinion with flexibility as to the manner of delivery);
  • Agencies should seek to deliver a competition assessment in a timely fashion; and,
  • Agencies should engage with interested third parties (e.g., policy organizations and domestic peer agencies) to promote policymakers’ consideration of an assessment.

The RPCA shine particularly bright in providing a concise yet nuanced evaluation of the sorts of restraints that are most likely to undermine the competitive process, including a cogent discussion of barriers to entry, exit, or expansion within a market; of policies that control how firms are allowed to compete in a market; of policies that shield firms from competitive pressure; and of policies that control the choices available to consumers.  The RPCA also highlight the value of attempting, where feasible, to derive quantitative welfare estimates of the costs of particular restrictions, based on a neutral metric and other tools of economic analysis.  Over the next year further work will be done on cataloguing existing case studies that contain welfare estimates and on the derivation of a metric.

The RPCA are no short-term panacea, but rather a practical manifesto for long-run regulatory reform.  They shed a useful spotlight on categories of economically harmful regulations that occur in a wide range of countries – not just in historically state-dominated economies.  Rent-seeking is ubiquitous, and regulations too often reflect wealth-destructive competitive limitations masquerading in public interest dress in all sorts of jurisdictions, including the United States.  Given the recent rapid rise in U.S. regulatory activity, the identification of U.S. federal and state government rules that undermine competition surely will remain a target-rich zone for competition advocates.

Let’s hope that, over time, when the political tides yield greater support for economic liberty, the lessons of Marrakech will point the way to repealing welfare-destructive regulatory impositions across the globe.

The ridiculousness currently emanating from ICANN and the NTIA (see these excellent posts from Milton Mueller and Eli Dourado on the issue) over .AMAZON, .PATAGONIA and other “geographic”/commercial TLDs is precisely why ICANN (and, apparently, the NTIA) is a problematic entity as a regulator.

The NTIA’s response to ICANN’s Governmental Advisory Committee’s (GAC) objection to Amazon’s application for the .AMAZON TLD (along with similar applications from other businesses for other TLDs) is particularly troubling, as Mueller notes:

In other words, the US statement basically says “we think that the GAC is going to do the wrong thing; its most likely course of action has no basis in international law and is contrary to vital policy principles the US is supposed to uphold. But who cares? We are letting everyone know that we will refuse to use the main tool we have that could either stop GAC from doing the wrong thing or provide it with an incentive to moderate its stance.”

Competition/antitrust issues don’t seem to be the focus of this latest chapter in the gTLD story, but it is instructive on this score nonetheless. As Berin Szoka and I wrote in ICLE’s comment to ICANN on gTLDS:

Among the greatest threats to this new “land rush” of innovation is the idea that ICANN should become a competition regulator, deciding whether to approve a TLD application based on its own competition analysis. But ICANN is not a regulator. It is a coordinator. ICANN should exercise its coordinating function by applying the same sort of analysis that it already does in coordinating other applications for TLDs.

* * *

Moreover, the practical difficulties in enforcing different rules for generic TLDs as opposed to brand TLDs likely render any competition pre-clearance mechanism unworkable. ICANN has already determined that .brand TLDs can and should be operated as closed domains for obvious and good reasons. But differentiating between, say .amazon the brand and .amazon the generic or .delta the brand and .delta the generic will necessarily result in arbitrary decisions and costly errors.

Of most obvious salience: implicit in the GAC’s recommendation is the notion that somehow Amazon.com is sufficiently different than .AMAZON to deny Amazon’s ownership of the latter. But as Berin and I point out:

While closed gTLDs might seem to some to limit competition, that limitation would occur only within a particular, closed TLD. But it has every potential to be outweighed by the dramatic opening of competition among gTLDs, including, importantly, competition with .com.

In short, the markets for TLDs and domain name registrations do not present particular competitive risks, and there is no a priori reason for ICANN to intervene prospectively.

In other words, treating Amazon.com and .AMAZON as different products, in different relevant markets, is a mistake. No doubt Amazon.com would, even if .AMAZON were owned by Amazon, remain for the foreseeable future the more relevant site. If Latin American governments are concerned with cultural and national identity protection, they should (not that I’m recommending this) focus their objections on Amazon.com. But the reality is that Amazon.com doesn’t compromise cultural identity, and neither would Amazon’s ownership of .AMAZON. Rather, the wide availability of new TLDs opens up an enormous range of new competitive TLD and SLD constraints on existing, dominant .COM SLDs, any number of which could be effective in promoting and preserving cultural and national identities.

By the way – Amazonia.com, Amazonbasin.com and Amazonrainforest.com, presumably among many others, look to be unused and probably available for purchase. Perhaps opponents of Amazon’s ownership of .AMAZON should set their sights on those or other SLDs and avoid engaging in the sort of politicking that will ultimately ruin the Internet.

China

Paul H. Rubin —  27 June 2011

There are many stories about unrest in China.  Many factors are blamed for this unrest, including low wages, poor working conditions, and political factors.  But there is one thing that is not generally mentioned:  demographics.  The one child policy coupled with a preference for males (due to both economic and cultural factors) means that there are significant numbers of unmarried and probably unmarriageable males.  This leads to severe male-male competition.  However, it also means that there are large numbers of socially discontent men with little to lose.  Similar factors probably operated in the Arab world.  In both cases, it may be difficult to maintain an open democratic society.  I discussed this in Darwinian Politics, beginning at page 118.  It is also the theme of the book Bare Branches by Valerie M. Hudson and Andrea M. den Boer.  Because of demographic factors relating both to a very peculiar age structure and the gender imbalance mentioned here, China is going to face serious difficulties in the future.  Those projecting increasing power for China do not always take these factors into account.

The New York Times has an interesting story about land markets in China.  In order to get married a man needs to own property, and land prices are very high in China.  As is its habit, the Times blames “overeager developers who force residents out of old neighborhoods.”

In fact, the Times gets it backwards.  The information needed to understand the issue is in the story: “The marriage competition is fierce, and statistically, women hold the cards. Given the nation’s gender imbalance, an outgrowth of a cultural preference for boys and China’s stringent family-planning policies, as many as 24 million men could be perpetual bachelors by 2020, according to the report.”  So what is happening is that there is a shortage of marriageable women and it is competition for the land needed to attract these women that is driving up land prices.

This competition is one unfortunate side effect of the one child policy and the Chinese preference for boys.  These 24 million unmarriageable men are going to be a long term problem for China.  In my book Darwinian Politics I argue that a large core of perpetual bachelors makes a free and open society difficult because this core will lead to social instability; the argument is also forcefully made in Bare Branches: The Security Implications of Asia’s Surplus Male Population by Valerie M. Hudson and Andrea M. den Boer.

Much has been written about the problem of China’s aging population, but I don’t think we have paid enough attention to the issue of gender imbalance.  More generally, I think much of the course of world politics over the next century is going to be driven by major demographic trends, and I think these trends are worthy of increased study.  Nicholas Eberstadt of AEI is doing this sort of work, but I think there is much more to be done.

In light of economic worries in Vietnam, the WSJ reports that the country is soon likely to impose a widespread set of price controls and restrictions on political activity after an encouraging move toward freer markets:

Carlyle Thayer, a veteran Vietnam watcher and professor at the Australian Defense Academy in Canberra, says conservative factions in the ruling Politburo are tightening their grip on the country as Vietnam’s economic worries—especially inflation and fallout from currency devaluations—grow. He says he expects more crackdowns and arrests to come in the run-up to the country’s 2011 Party Congress, a major political event that will aim to map out Vietnam’s political and economic direction for the following five years.  In turn, the crackdowns threaten to curtail investment and economic growth in the country…..

Now, the price-control unit of Vietnam’s Finance Ministry is drafting proposals that, if implemented by the government, would compel private and foreign-owned companies to report pricing structures, according to documents viewed by The Wall Street Journal and corroborated by Vietnamese officials.  In some cases, the proposed rules would allow the government to set prices on a wide range of privately made or imported goods, including petroleum products, fertilizers and milk to help contain inflation as Vietnam continues pumping money into its volatile economy. Typically, the government applies this kind of aggressive measure only to state-owned businesses, and it is unclear whether Vietnam will write the wider rules into law.

Somewhat relatedly, here is one of my favorite papers about the economics of contractual relationships and enforcement institutions in Vietnam (McMillan & Woodruff).