
The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright, but instead are categorized as “high-risk” and subject to strict rules before they can be used or placed on the market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating credit-worthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms against the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail regarding each requirement, it is worth noting that some are likely to have competition- and innovation-distorting effects.

The rule that data used to train, validate, or test a high-risk AI must be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free datasets exist, or that errors in them can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of artificial-intelligence research, where sometimes only small, lower-quality datasets are available to train AI. A predictable effect is that the rule would benefit large companies, which are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire that data.
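
To make that compliance problem concrete, consider a minimal sketch of a dataset audit, assuming a simple tabular dataset with a label column (the column names, checks, and function name are hypothetical illustrations, not anything prescribed by the regulation). Checks like these can flag detectable defects, but passing them cannot certify that a dataset is “free of errors,” which is the standard the text appears to demand.

```python
# Hypothetical sketch: even a basic audit of a training dataset can only
# flag *detectable* problems -- missing values, duplicate rows, invalid
# labels. A clean report is not proof that the data are "free of errors."

import pandas as pd

def basic_quality_audit(df: pd.DataFrame, label_col: str, valid_labels: set) -> dict:
    """Count a few classes of detectable defects in a tabular dataset."""
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "invalid_labels": int((~df[label_col].isin(valid_labels)).sum()),
    }

# Toy example: one missing feature value and one duplicated row.
df = pd.DataFrame({"feature": [1.0, 2.0, None, 2.0],
                   "label": ["a", "b", "a", "b"]})
print(basic_quality_audit(df, label_col="label", valid_labels={"a", "b"}))
# {'rows': 4, 'missing_values': 1, 'duplicate_rows': 1, 'invalid_labels': 0}
```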

Providers of high-risk AIs also must submit technical and user documentation that details voluminous information about the AI system, including descriptions of the AI’s elements and of its development, monitoring, functioning, and control. This documentation must demonstrate that the AI complies with all the requirements for high-risk AIs, in addition to documenting its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, as European enterprises already consider liability for potential damages and regulatory obstacles to be impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that it can never override a human user. While it may be important that an AI used in, say, the criminal justice system be understandable by humans, this requirement could inhibit sophisticated uses that go beyond the reasoning of a human brain, such as safely operating a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost of using high-risk AIs.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Such AIs must be labeled to make a user aware they are looking at or listening to manipulated images, video, or audio. AIs must also be labeled to ensure humans are aware when they are speaking to an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure that high-risk systems conform with rules requiring third-party assessments, such as those for remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
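
The penalty structure reduces to a simple “greater of” formula. Below is a minimal sketch in Python (the function name, tier keys, and example figures are my own illustration, not drawn from the regulation’s text) showing how maximum exposure scales with worldwide turnover.

```python
# Hypothetical sketch of the proposal's three penalty tiers: the maximum
# fine is the greater of a fixed cap or a share of worldwide annual
# turnover. Tier labels and the function name are invented for illustration.

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum fine: the greater of the cap or pct * turnover."""
    tiers = {
        "prohibited_ai_or_data_quality": (30_000_000, 0.06),   # 30M EUR or 6%
        "other_high_risk_requirements": (20_000_000, 0.04),    # 20M EUR or 4%
        "incorrect_information_to_regulator": (10_000_000, 0.02),  # 10M EUR or 2%
    }
    cap, pct = tiers[tier]
    return max(cap, pct * worldwide_turnover_eur)

# For a firm with 2 billion euros of turnover, the turnover-based figure
# dominates: max(30M, 0.06 * 2B) = 120 million euros.
print(max_fine("prohibited_ai_or_data_quality", 2_000_000_000))  # 120000000.0
```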

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.

Over the last two decades, the United States government has taken the lead in convincing jurisdictions around the world to outlaw “hard core” cartel conduct.  Such cartel activity reduces economic welfare by artificially fixing prices and reducing the output of affected goods and services.  At the same time, the United States has acted to promote international cooperation among government antitrust enforcers to detect, investigate, and punish cartels.

In 2017, however, the U.S. Court of Appeals for the Second Circuit (citing concerns of “international comity”) held that a Chinese export cartel that artificially raised the price of vitamin imports into the United States should be shielded from U.S. antitrust penalties—based merely on one brief from a Chinese government agency that said it approved of the conduct. The U.S. Supreme Court is set to review that decision later this year, in a case styled Animal Science Products, Inc. v. Hebei Welcome Pharmaceutical Co. Ltd.  By overturning the Second Circuit’s ruling (and disavowing the overly broad “comity doctrine” cited by that court), the Supreme Court would reaffirm the general duty of federal courts to apply federal law as written, consistent with the constitutional separation of powers.  It would also reaffirm the importance of the global fight against cartels, which has reflected consistent U.S. executive branch policy for decades (and has enjoyed strong support from the International Competition Network, the OECD, and the World Bank).

Finally, as a matter of economic policy, the Animal Science Products case highlights the very real harm that occurs when national governments tolerate export cartels that reduce economic welfare outside their jurisdictions, merely because domestic economic interests are not directly affected.  In order to address this problem, the U.S. government should negotiate agreements with other nations under which the signatory states would agree:  (1) not to legally defend domestic exporting entities that impose cartel harm in other jurisdictions; and (2) to cooperate more fully in rooting out harmful export-cartel activity, wherever it is found.

For a fuller discussion of the separation of powers, international relations, and economic policy issues raised by the Animal Science Products case, see my recent Heritage Foundation Legal Memorandum entitled The Supreme Court and Animal Science Products: Sovereignty and Export Cartels.

Over the last two years, the Scalia Law School’s Global Antitrust Institute (GAI) has taken a leadership role in promoting sound antitrust analysis of intellectual property rights (IPRs), through its insightful analysis of IP-antitrust guidance proffered by governments around the world (including by the United States antitrust agencies).  Key concepts that inform the GAI’s IP commentaries are that IP rights are full-fledged property rights, and should be treated as such; that IP licensing typically is procompetitive and often generates substantial efficiencies; that antitrust agencies should compare the competitive effects of IP licensing restrictions against what would have happened in the “but for” world in which there is no license; and that special limiting rules should not be applied to patents that cover technologies essential to the implementation of standards (“standard-essential patents”).  The overarching theme of the GAI submissions is that IP licensing generally enhances economic welfare and promotes innovation.

On April 13, the GAI once again turned its eye to IP licensing issues, in commenting on the Draft Anti-Monopoly Guidelines on the Abuse of Intellectual Property Rights (Draft Guidelines) propounded by the Chinese Government’s State Council (see here).  This commentary is particularly timely and important, given the vast scale of the Chinese economy and the large number of major companies involved in IP licensing in China.  While the April 13 GAI commentary praises the Draft Guidelines’ stated intent of condemning only those acts that “have the effect of excluding or restricting competition,” it explains that various Draft Guidelines provisions would nevertheless undermine that desirable goal.  Specifically, the commentary makes five key points:

  1. First, the Draft Guidelines do not explicitly recognize an IPR holder’s core right to exclude. The right to exclude is a central feature of IPRs, and economic theory and empirical evidence show that IPRs incentivize the creation of inventions, ideas, and original works.  Relatedly, the Draft Guidelines also do not incorporate throughout the well-accepted methodological principle that, when assessing the possible competitive effects of the use of IPRs, agencies should compare the competitive effect of the IPR use against what would have happened in the “but for” world in which there is no license.  This important analytical approach, which has been used by the U.S. antitrust agencies for the last 20 years, is absent from the Draft Guidelines.
  2. Second, the Draft Guidelines do not incorporate throughout the point that licensing is generally procompetitive. This modern economic understanding of licensing has informed the approach of the U.S. agencies, for example, for more than 20 years. The result is an approach that, with the exception of naked restraints such as price fixing, requires an effects-based analysis under which licensing restraints will be condemned only when any anticompetitive effects outweigh any procompetitive benefits.
  3. Third, and relatedly, the Draft Guidelines appear to create a number of presumptions that certain conduct (such as charging for expired or invalid patents and prohibiting a licensee from challenging the validity of its IPR) will, or is likely to, eliminate or restrict competition. Thus the State Council would be well advised to eliminate such presumptions and to adopt instead an effects-based approach.  This approach would benefit Chinese consumers because presumptions that are not appropriately calibrated are likely to capture conduct that is procompetitive, which is likely to have a chilling effect on potentially beneficial conduct.  Adopting an approach that incorporates these revisions would best serve competition and consumers, as well as China’s goal of becoming an innovation society.
  4. Fourth, the Draft Guidelines appear to create special rules for conduct involving standard-essential patents (SEPs). The State Council would be wise to reconsider this approach.  Instead, antitrust enforcers should ask whether particular conduct involving SEPs, including evasion of a FRAND assurance, has net anticompetitive effects, and should apply the same case-by-case, fact-specific analysis that is employed for non-SEPs.  Imposing special rules for SEPs, including creating presumptions of harm based on breach of contractual commitments such as a FRAND assurance, is not only unwarranted as a matter of competition policy, but also likely to deter participation in standard setting.
  5. Lastly, the State Council should adopt a more compliance-based approach that sets forth basic principles that would allow parties to self-advise. The Draft Guidelines instead set forth a list of factors that the Chinese competition agencies will consider when analyzing specific conduct, yet do not explain the significance of each of the factors or how they will be weighed in the competition agencies’ overall decision-making process.  This approach allows the agencies broad discretion in enforcement decision-making without providing the guidance stakeholders need to protect incentives to innovate and transfer technology that could be subject to Chinese antitrust jurisdiction.  To this end, the GAI’s commentary recommends that the State Council include throughout the Guidelines examples similar to those found in other guidelines, for example the U.S. antitrust agencies’ recently updated 2017 Antitrust Guidelines for the Licensing of Intellectual Property and the Canadian Bureau of Competition’s Intellectual Property Enforcement Guidelines.  Inclusion of illustrative examples will help IP holders understand how the Chinese agencies will apply the basic principles.

In sum, the Chinese Government would be well advised to adopt the April 13 commentary’s recommendations in finalizing its Guidelines.  Acceptance of the GAI’s recommendations would benefit consumers and producers, and promote innovation in the Chinese economy.  Once again (as one would expect), a GAI antitrust commentary is spot on.

  1. Introduction

For nearly two years, the Global Antitrust Institute (GAI) at George Mason University’s Scalia Law School has filed an impressive series of comments on foreign competition laws and regulations.  The latest GAI comment, dated March 19 (“March 19 comment”), focuses on proposed revisions to the Anti-Unfair Competition Law (AUCL) of the People’s Republic of China, currently under consideration by China’s national legislature, the National People’s Congress.  The AUCL “coexists” with China’s antitrust statute, the Anti-Monopoly Law (AML).  The key concern raised by the March 19 comment is that the AUCL revisions should not undermine the application of sound competition law principles to the analysis of bundling (a seller’s offering of several goods as part of a single package sale).  The March 19 comment notes that the best way to avoid such an outcome would be for the AUCL to avoid condemning bundling as a potential “unfair” practice, leaving bundling practices to be assessed solely under the AML.  Furthermore, the March 19 comment wisely stresses that any antitrust evaluation of bundling, whether under the AML (the preferred option) or under the AUCL, should give weight to the substantial efficiencies that bundling typically engenders.

  2. Highlights of the March 19 Comment

Specifically, the March 19 comment made the following key recommendations:

  • The National People’s Congress should be commended for having deleted Article 6 of an earlier AUCL draft, which prohibited a firm from “taking advantage of its comparative advantage position.” As explained in a March 2016 GAI comment, this provision would have undermined efficient contractual negotiations that could have benefited consumer as well as producer welfare.
  • With respect to the remaining draft provisions, any provisions that relate to conduct covered by China’s Anti-Monopoly Law (AML) should be omitted entirely.
  • In particular, Article 11 (which provides that “[b]usiness operators selling goods must not bundle the sale of goods against buyers’ wishes, and must not attach other unreasonable conditions”) should be omitted in its entirety, as such conduct is already covered by Article 17(5) of the AML.
  • In the alternative, at the very least, Article 11 should be revised to adopt an effect-based approach under which bundling will be condemned only when: (1) the seller has market power in one of the goods included in the bundle sufficient to enable it to restrain trade in the market(s) for the other goods in the bundle; and (2) the anticompetitive effects outweigh any procompetitive benefits.  Such an approach would be consistent with Article 17(5) of the AML, which provides for an effects-based approach that applies only to firms with a dominant market position.
  • Bundling is ubiquitous and widely used by a variety of firms and for a variety of reasons (see here). In the vast majority of cases, package sales are “easily explained by economies of scope in production or by reductions in transaction and information costs, with an obvious benefit to the seller, the buyer or both.”  Those benefits can include lower prices for consumers, easier entry into new markets, fewer conflicting incentives between manufacturers and their distributors, and less retailer free-riding and other types of agency problems.  Indeed (see here), “bundling can serve the same efficiency-enhancing vertical control functions as have been identified in the economic literature on tying, exclusive dealing, and other forms of vertical restraints.”
  • The potential to harm competition and generate anticompetitive effects arises only when bundling is practiced by a firm with market power in one of the goods included in the bundle. As the U.S. Supreme Court explained in Jefferson Parish v. Hyde (1984), “there is nothing inherently anticompetitive about package sales,” and the fact that “a purchaser is ‘forced’ to buy a product he would not have otherwise bought even from another seller” does not imply an “adverse impact on competition.”  Rather, for bundling to harm competition there would have to be an exclusionary effect on other sellers because bundling thwarts buyers’ desire to purchase substitutes for one or more of the goods in the bundle from those other sellers to an extent that harms competition in the markets for those products (see here).
  • Moreover, because of the widespread procompetitive use of bundling by firms both with and without market power, making bundling per se or presumptively unlawful is likely to generate many Type I (false positive) errors, which, as the U.S. Supreme Court explained in Verizon v. Trinko (2004), “are especially costly, because they chill the very conduct the antitrust laws are designed to protect.”

  3. Conclusion

In sum, the GAI’s March 19 comment does an outstanding job of highlighting the typically procompetitive nature of bundling, and of calling for an economics-based approach to the antitrust evaluation of bundling in China.  Other competition law authorities (including, for example, the European Commission) could benefit from this comment as well when they scrutinize bundling arrangements.