A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display-advertising business.

Broadly, the Texas complaint (previously discussed in this TOTM symposium) alleges that Google possesses market power in ad-buying tools and in search.

The complaint also alleges anticompetitive conduct by Google with respect to YouTube in a separate “inline video-advertising market.” According to the complaint, this market power is leveraged to force transactions through Google’s exchange, AdX, and its network, Google Display Network. The leverage is further exercised by forcing publishers to license Google’s ad server, Google Ad Manager.

Although the Texas complaint raises many specific allegations, the key ones constitute four broad claims: 

  1. Google forces publishers to license Google’s ad server and trade in Google’s ad exchange;
  2. Google uses its control over publishers’ inventory to block exchange competition;
  3. Google has disadvantaged technology known as “header bidding” in order to prevent publishers from accessing its competitors; and
  4. Google prevents rival ad-placement services from competing by not allowing them to buy YouTube ad space.

Alleged harms

The Texas complaint alleges Google’s conduct has caused harm to competing networks, exchanges, and ad servers. The complaint also claims that the plaintiff states’ economies have been harmed “by depriving the Plaintiff States and the persons within each Plaintiff State of the benefits of competition.”

In a nod to the widely accepted Consumer Welfare Standard, the Texas complaint alleges harm to three categories of consumers:

  1. Advertisers who pay for their ads to be displayed, but should be paying less;
  2. Publishers who are paid to provide space on their sites to display ads, but should be paid more; and
  3. Users who visit the sites, view the ads, and purchase or use the advertisers’ and publishers’ products and services.

The complaint claims users are harmed by above-competitive prices paid by advertisers, in that these higher costs are passed on in the form of higher prices and lower quality for the products and services they purchase from those advertisers. The complaint simultaneously claims that users are harmed by the below-market prices received by publishers in the form of “less content (lower output of content), lower-quality content, less innovation in content delivery, more paywalls, and higher subscription fees.”

Without saying so explicitly, the complaint insinuates that if intermediaries (e.g., Google and competing services) charged lower fees for their services, advertisers would pay less, publishers would be paid more, and consumers would be better off in the form of lower prices and better products from advertisers, as well as improved content and lower fees on publishers’ sites.

Effective competition is not an antitrust offense

A flawed premise underlies much of the Texas complaint. It asserts that conduct by a dominant incumbent firm that makes competition more difficult for competitors is inherently anticompetitive, even if that conduct confers benefits on users.

This amounts to a claim that Google is acting anti-competitively by innovating and developing products and services to benefit one or more display-advertising constituents (e.g., advertisers, publishers, or consumers) or by doing things that benefit the advertising ecosystem more generally. These include creating new and innovative products, lowering prices, reducing costs through vertical integration, or enhancing interoperability.

The argument, which is made explicitly elsewhere, is that Google must show that it has engineered and implemented its products to minimize obstacles its rivals face, and that any efficiencies created by its products must be shown to outweigh the costs imposed by those improvements on the company’s competitors.

Similarly, claims that Google has acted in an anticompetitive fashion rest on the unsupportable notion that the company acts unfairly when it designs products to benefit itself without considering how those designs would affect competitors. Google could, it is argued, choose alternate arrangements and practices that would possibly confer greater revenue on publishers or lower prices on advertisers without imposing burdens on competitors.

For example, a report published by the Omidyar Network sketching a “roadmap” for a case against Google claims that, if Google’s practices could possibly be reimagined to achieve the same benefits in ways that foster competition from rivals, then the practices should be condemned as anticompetitive:

It is clear even to us as lay people that there are less anticompetitive ways of delivering effective digital advertising—and thereby preserving the substantial benefits from this technology—than those employed by Google.

– Fiona M. Scott Morton & David C. Dinielli, “Roadmap for a Digital Advertising Monopolization Case Against Google”

But that’s not how the law—or the economics—works. This approach converts beneficial aspects of Google’s ad-tech business into anticompetitive defects, essentially arguing that successful competition and innovation create barriers to entry that merit correction through antitrust enforcement.

This approach turns U.S. antitrust law (and basic economics) on its head. As some of the most well-known words of U.S. antitrust jurisprudence have it:

A single producer may be the survivor out of a group of active competitors, merely by virtue of his superior skill, foresight and industry. In such cases a strong argument can be made that, although the result may expose the public to the evils of monopoly, the Act does not mean to condemn the resultant of those very forces which it is its prime object to foster: finis opus coronat. The successful competitor, having been urged to compete, must not be turned upon when he wins.

– United States v. Aluminum Co. of America, 148 F.2d 416 (2d Cir. 1945)

U.S. antitrust law is intended to foster innovation that creates benefits for consumers, including innovation by incumbents. The law does not proscribe efficiency-enhancing unilateral conduct on the grounds that it might also inconvenience competitors, or that there is some other arrangement that could be “even more” competitive. Under U.S. antitrust law, firms are “under no duty to help [competitors] survive or expand.”  

To be sure, the allegations against Google are couched in terms of anticompetitive effect, rather than being described merely as commercial disagreements over the distribution of profits. But these effects are simply inferred, based on assumptions that Google’s vertically integrated business model entails an inherent ability and incentive to harm rivals.

The Texas complaint claims Google can surreptitiously derive benefits from display advertisers by leveraging its search-advertising capabilities, or by “withholding YouTube inventory,” rather than altruistically opening Google Search and YouTube up to rival ad networks. The complaint alleges Google uses its access to advertiser, publisher, and user data to improve its products without sharing this data with competitors.

All these charges may be true, but they do not describe inherently anticompetitive conduct. Under U.S. law, companies are not obliged to deal with rivals and certainly are not obliged to do so on those rivals’ preferred terms.

As long ago as 1919, the U.S. Supreme Court held that:

In the absence of any purpose to create or maintain a monopoly, the [Sherman Act] does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.

– United States v. Colgate & Co.

U.S. antitrust law does not condemn conduct on the basis that an enforcer (or a court) is able to identify or hypothesize alternative conduct that might plausibly provide similar benefits at lower cost. In alleging that there are ostensibly “better” ways that Google could have pursued its product design, pricing, and terms of dealing, both the Texas complaint and Omidyar “roadmap” assert that, had the firm only selected a different path, an alternative could have produced even more benefits or an even more competitive structure.

The purported cure of tinkering with benefit-producing unilateral conduct by applying an “even more competition” benchmark is worse than the supposed disease. The adjudicator is likely to misapply such a benchmark, deterring the very conduct the law seeks to promote.

For example, the Texas complaint alleges: “Google’s ad server passed inside information to Google’s exchange and permitted Google’s exchange to purchase valuable impressions at artificially depressed prices.” The Omidyar Network’s “roadmap” claims that “after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. Low prices for this service can force rivals to depart, thereby directly reducing competition.”

In contrast, as current U.S. Supreme Court Associate Justice Stephen Breyer once explained, in the context of above-cost low pricing, “the consequence of a mistake here is not simply to force a firm to forego legitimate business activity it wishes to pursue; rather, it is to penalize a procompetitive price cut, perhaps the most desirable activity (from an antitrust perspective) that can take place in a concentrated industry where prices typically exceed costs.”  That commentators or enforcers may be able to imagine alternative or theoretically more desirable conduct is beside the point.

It has been reported that the U.S. Justice Department (DOJ) may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

In recent years, a diverse cross-section of advocates and politicians has leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held just as responsible as the content’s creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternative statutory frameworks, several test cases are simultaneously working their way through the judicial system, and some states have either passed or are considering legislation to address complaints about Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which is used extensively by both sophisticated and unsophisticated platforms. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. Add to this the potential to reach consumers far beyond the original platform and target audience, the lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, and the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts; have a process to periodically review and update those wordings; and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but such policies generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain first-dollar loss up to a specific threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.
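To make the arithmetic concrete, here is a minimal sketch in Python of how such a retention-plus-coinsurance structure splits a claim between insured and insurer. The $500,000 claim amount is hypothetical, and the sketch ignores policy limits:

```python
def split_loss(loss, retention=25_000, coinsurance=0.10):
    """Split a covered loss under a retention-plus-coinsurance structure.

    The insured absorbs everything up to the retention, then pays the
    coinsurance percentage of each dollar above it; the insurer pays the rest.
    """
    if loss <= retention:
        return loss, 0.0  # loss sits entirely within the retention
    excess = loss - retention
    return retention + coinsurance * excess, (1 - coinsurance) * excess

# A hypothetical $500,000 claim under the $25,000 / 10% example in the text:
insured, insurer = split_loss(500_000)
print(f"insured pays ${insured:,.0f}; insurer pays ${insurer:,.0f}")
# insured pays $72,500; insurer pays $427,500
```

Raising the retention or the coinsurance percentage shifts more of each claim onto the insured, which is why the structures described next trade off against premium.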

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
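The same arithmetic extends up a tower: each layer attaches above the one below it, and quota-share partners split whatever portion of a loss lands in their layer. Below is a minimal sketch using hypothetical layer sizes, shares, and placeholder carrier names:

```python
def allocate_to_layers(loss, layers):
    """Allocate a loss up a tower of coverage layers.

    Each layer is a tuple of (attachment, limit, {carrier: quota share});
    all figures are hypothetical.
    """
    payouts = {}
    for attachment, limit, shares in layers:
        layer_loss = min(max(loss - attachment, 0), limit)
        if layer_loss <= 0:
            continue  # the loss never reaches this layer
        for carrier, share in shares.items():
            payouts[carrier] = payouts.get(carrier, 0) + layer_loss * share
    return payouts

# A hypothetical $12M tower: $5M primary, a $5M-excess-$5M layer quota-shared
# 60/40 between two carriers, and $2M excess of $10M on top.
tower = [
    (0,          5_000_000, {"Primary Co": 1.0}),
    (5_000_000,  5_000_000, {"Carrier A": 0.6, "Carrier B": 0.4}),
    (10_000_000, 2_000_000, {"Excess Co": 1.0}),
]
print(allocate_to_layers(8_000_000, tower))
# {'Primary Co': 5000000.0, 'Carrier A': 1800000.0, 'Carrier B': 1200000.0}
```

A gap between layers, or a shortfall by one quota-share participant, would leave part of such a loss with the policyholder—the partial self-insurance risk noted below.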

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds, who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market, both generally and within specific content lines. Not all underwriters are created equal; underwriting is a challenging discipline that requires predicting future losses, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, and would require at least some additional editorial review. This would significantly disadvantage those platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint-response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause as many problems as it solves, or more. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

A relatively greater frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous-trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for the affected insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, e.g., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would play out, and how much pain they would cause for all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application and minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Include an express statement in the definition of “interactive computer service,” or infer one judicially, clarifying that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies. This would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review, as well as the duration of that loss, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis, and clarify that it bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and to the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

There is little doubt that Federal Trade Commission (FTC) unfair methods of competition rulemaking proceedings are in the offing. Newly named FTC Chair Lina Khan and Commissioner Rohit Chopra both have extolled the benefits of competition rulemaking in a major law review article. What’s more, in May, Commissioner Rebecca Slaughter (during her stint as acting chair) established a rulemaking unit in the commission’s Office of General Counsel empowered to “explore new rulemakings to prohibit unfair or deceptive practices and unfair methods of competition” (emphasis added).

In short, a majority of sitting FTC commissioners apparently endorse competition rulemaking proceedings. As such, it is timely to ask whether FTC competition rules would promote consumer welfare, the paramount goal of competition policy.

In a recently published Mercatus Center research paper, I assess the case for competition rulemaking from a competition perspective and find it wanting. I conclude that, before proceeding, the FTC should carefully consider whether such rulemakings would be cost-beneficial. I explain that any cost-benefit appraisal should weigh both the legal risks and the potential economic policy concerns (error costs and “rule of law” harms). Based on these considerations, competition rulemaking is inappropriate. The FTC should stick with antitrust enforcement as its primary tool for strengthening the competitive process and thereby promoting consumer welfare.

A summary of my paper follows.

Section 6(g) of the original Federal Trade Commission Act authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Section 6(g) rules are enacted pursuant to the “informal rulemaking” requirements of Section 553 of the Administrative Procedure Act (APA), which apply to the vast majority of federal agency rulemaking proceedings.

Before launching Section 6(g) competition rulemakings, however, the FTC would be well-advised first to weigh the legal risks and policy concerns associated with such an endeavor. Rulemakings are resource-intensive proceedings and should not lightly be undertaken without an eye to their feasibility and implications for FTC enforcement policy.

Only one appeals court decision addresses the scope of Section 6(g) rulemaking. In 1971, the FTC enacted a Section 6(g) rule stating that it was both an “unfair method of competition” and an “unfair act or practice” for refiners or others who sell to gasoline retailers “to fail to disclose clearly and conspicuously in a permanent manner on the pumps the minimum octane number or numbers of the motor gasoline being dispensed.” In 1973, in the National Petroleum Refiners case, the U.S. Court of Appeals for the D.C. Circuit upheld the FTC’s authority to promulgate this and other binding substantive rules. The court rejected the argument that Section 6(g) authorized only non-substantive regulations concerning the FTC’s non-adjudicatory, investigative, and informative functions, spelled out elsewhere in Section 6.

In 1975, two years after National Petroleum Refiners was decided, Congress granted the FTC specific consumer-protection rulemaking authority (authorizing enactment of trade regulation rules dealing with unfair or deceptive acts or practices) through Section 202 of the Magnuson-Moss Warranty Act, which added Section 18 to the FTC Act. Magnuson-Moss rulemakings impose adjudicatory-type hearings and other specific requirements on the FTC, unlike the more flexible Section 6(g) APA informal rulemakings. However, the FTC can obtain civil penalties for violation of Magnuson-Moss rules, something it cannot do if Section 6(g) rules are violated.

In a recent set of public comments filed with the FTC, the Antitrust Section of the American Bar Association stated:

[T]he Commission’s [6(g)] rulemaking authority is buried within an enumerated list of investigative powers, such as the power to require reports from corporations and partnerships, for example. Furthermore, the [FTC] Act fails to provide any sanctions for violating any rule adopted pursuant to Section 6(g). These two features strongly suggest that Congress did not intend to give the agency substantive rulemaking powers when it passed the Federal Trade Commission Act.

Rephrased, this argument suggests that the structure of the FTC Act indicates that the rulemaking referenced in Section 6(g) is best understood as an aid to FTC processes and investigations, not a source of substantive policymaking. Although the National Petroleum Refiners decision rejected such a reading, that ruling came at a time of significant judicial deference to federal agency activism, and may be dated.

The U.S. Supreme Court’s April 2021 decision in AMG Capital Management v. FTC further bolsters the “statutory structure” argument that Section 6(g) does not authorize substantive rulemaking. In AMG, the U.S. Supreme Court unanimously held that Section 13(b) of the FTC Act, which empowers the FTC to seek a “permanent injunction” to restrain an FTC Act violation, does not authorize the FTC to seek monetary relief from wrongdoers. The court’s opinion rejected the FTC’s argument that the term “permanent injunction” had historically been understood to include monetary relief. The court explained that the injunctive language was “buried” in a lengthy provision that focuses on injunctive, not monetary relief (note that the term “rules” is similarly “buried” within 6(g) language dealing with unrelated issues). The court also pointed to the structure of the FTC Act, with detailed and specific monetary-relief provisions found in Sections 5(l) and 19, as “confirm[ing] the conclusion” that Section 13(b) does not grant monetary relief.

By analogy, a court could point to Congress’ detailed enumeration of substantive rulemaking provisions in Section 18 (a mere two years after National Petroleum Refiners) as cutting against the claim that Section 6(g) can also be invoked to support substantive rulemaking. Finally, the Supreme Court in AMG flatly rejected several relatively recent appeals court decisions that upheld Section 13(b) monetary-relief authority. It follows that the FTC cannot confidently rely on judicial precedent (stemming from one arguably dated court decision, National Petroleum Refiners) to uphold its competition rulemaking authority.

In sum, the FTC will have to overcome serious fundamental legal challenges to its Section 6(g) competition rulemaking authority if it seeks to promulgate competition rules.

Even if the FTC’s 6(g) authority is upheld, it faces three other types of litigation-related risks.

First, applying the nondelegation doctrine, courts might hold that the broad term “unfair methods of competition” does not provide an “intelligible principle” to guide the FTC’s exercise of discretion in rulemaking. Such a judicial holding would mean the FTC could not issue competition rules.

Second, a reviewing court might strike down individual proposed rules as “arbitrary and capricious” if, say, the court found that the FTC rulemaking record did not sufficiently take into account potentially procompetitive manifestations of a condemned practice.

Third, even if a final competition rule passes initial legal muster, applying its terms to individual businesses charged with rule violations may prove difficult. Individual businesses may seek to structure their conduct to evade the particular strictures of a rule, and changes in commercial practices may render less common the specific acts targeted by a rule’s language.

Economic Policy Concerns Raised by Competition Rulemaking

In addition to legal risks, any cost-benefit appraisal of FTC competition rulemaking should consider the economic policy concerns raised by competition rulemaking. These fall into two broad categories.

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules.

Conclusion

A combination of legal risks and economic policy harms strongly counsels against the FTC’s promulgation of substantive competition rules.

First, litigation issues would consume FTC resources and add to the costly delays inherent in developing competition rules in the first place. The compounding of separate serious litigation risks suggests a significant probability that costs would be incurred in support of rules that ultimately would fail to be applied.

Second, even assuming competition rules were to be upheld, their application would raise serious economic policy questions. The inherent inflexibility of rule-based norms is ill-suited to deal with dynamic evolving market conditions, compared with matter-specific antitrust litigation that flexibly applies the latest economic thinking to particular circumstances. New competition rules would also exacerbate costly policy inconsistencies stemming from the existence of dual federal antitrust enforcement agencies, the FTC and the Justice Department.

In conclusion, an evaluation of rule-related legal risks and economic policy concerns demonstrates that a reallocation of some FTC enforcement resources to the development of competition rules would not be cost-effective. Continued sole reliance on case-by-case antitrust litigation would generate greater economic welfare than a mixture of litigation and competition rules.

From Sen. Elizabeth Warren (D-Mass.) to Sen. Josh Hawley (R-Mo.), populist calls to “fix” our antitrust laws and the underlying Consumer Welfare Standard have found a foothold on Capitol Hill. At the same time, there are calls to “fix” the Supreme Court by packing it with new justices. The court’s unanimous decision in NCAA v. Alston demonstrates that neither needs repair. To the contrary, clearly anti-competitive conduct—like the NCAA’s compensation rules—is proscribed under the Consumer Welfare Standard, and every justice from Samuel Alito to Sonia Sotomayor can agree on that.

In 1984, the court in NCAA v. Board of Regents suggested that “courts should take care when assessing the NCAA’s restraints on student-athlete compensation.” After all, joint ventures like sports leagues are entitled to rule-of-reason treatment. But while times change, the Consumer Welfare Standard is sufficiently flexible to meet those changes.

Where a competitive restraint exists primarily to ensure that “enormous sums of money flow to seemingly everyone except the student athletes,” the court rightly calls it out for what it is. As Associate Justice Brett Kavanaugh wrote in his concurrence:

Nowhere else in America can businesses get away with agreeing not to pay their workers a fair market rate on the theory that their product is defined by not paying their workers a fair market rate.  And under ordinary principles of antitrust law, it is not evident why college sports should be any different.  The NCAA is not above the law.

Disturbing these “ordinary principles”—whether through legislation, administrative rulemaking, or the common law—is simply unnecessary. For example, the Open Markets Institute filed an amicus brief arguing that the rule of reason should be “bounded” and willfully blind to the pro-competitive benefits some joint ventures can create (an argument that has been used, unsuccessfully, to attack ridesharing services like Uber and Lyft). Sen. Amy Klobuchar (D-Minn.) has proposed shifting the burden of proof so that merging parties are guilty until proven innocent. Sen. Warren would go further, deeming Amazon’s acquisition of Whole Foods anti-competitive simply because the company is “big,” and ignoring the merger’s myriad pro-competitive benefits. Sen. Hawley has gone further still, calling on Amazon to be investigated criminally for the crime of being innovative and successful.

Several of the current proposals, including those from Sens. Klobuchar and Hawley (and those recently introduced in the House that essentially single out firms for disfavored treatment), would replace the Consumer Welfare Standard that has underpinned antitrust law for decades with a policy that effectively punishes firms for being politically unpopular.

These examples demonstrate we should be wary when those in power assert that things are so irreparably broken that they need a complete overhaul. The “solutions” peddled usually increase politicians’ power by enabling them to pick winners and losers through top-down approaches that stifle the bottom-up innovations that make consumers’ lives better.

Are antitrust law and the Supreme Court perfect? Hardly. But in a 9-0 decision, the court proved this week that there’s nothing broken about either.

It’s a telecom tale as old as time: industry gets a prime slice of radio spectrum and falls in love with it, only to take it for granted. Then, faced with the reapportionment of that spectrum, it proceeds to fight tooth and nail (and law firm) to maintain the status quo. 

In that way, the decision by the Intelligent Transportation Society of America (ITSA) and the American Association of State Highway and Transportation Officials (AASHTO) to seek judicial review of the Federal Communications Commission’s (FCC) order reassigning the 5.9GHz band was right out of central casting. But rather than simply asserting that the FCC’s order was arbitrary, ITSA foreshadowed many of the arguments that it intends to make against the order. 

There are three arguments of note, and should ITSA win on the merits of any of those arguments, it would mark a significant departure from the way spectrum is managed in the United States.

First, ITSA asserts that the U.S. Department of Transportation (DOT), not the FCC, retains authority to regulate radio spectrum as it pertains to DOT programs, by virtue of DOT’s role as the nation’s transportation regulator. Of course, this notion is absurd on its face. Congress mandated that the FCC act as the exclusive regulator of non-federal uses of wireless spectrum. This leaves the FCC free to—in the words of the Communications Act—“encourage the provision of new technologies and services to the public” and to “provide to all Americans” the best communications networks possible.

In contrast, other federal agencies with some amount of allocated spectrum each focus exclusively on a particular mission, without regard to the broader concerns of the country (including uses by sister agencies or the states). That’s why, rather than allocate the spectrum directly to DOT, the statute directs the FCC to consider allocating spectrum for Intelligent Transportation Systems and to establish the rules for their spectrum use. The statute directs the FCC to consult with the DOT, but leaves final decisions to the FCC.

Today’s crowded airwaves make it impossible to allocate spectrum for 5G, Wi-Fi 6, and other innovative uses without somehow impacting spectrum used by a federal agency. Accepting the ITSA position would fundamentally alter the FCC’s role relative to other agencies with an interest in the disposition of spectrum, rendering the FCC a vestigial regulatory backwater subject to non-expert veto. As a matter of policy, this would effectively prevent the United States from meeting the growing challenges of our exponentially increasing demand for wireless access. 

It would also put us at a tremendous disadvantage relative to other countries. International coordination of wireless policy has become critical in the global economy, with supply chains and wireless-equipment manufacturers dependent on common standards to drive economies of scale and interoperability. At the last World Radio Conference in 2019, interagency spectrum squabbling significantly undermined U.S. negotiating efforts. If agencies actually had veto power over the FCC’s spectrum decisions, the United States would have no way to create a coherent negotiating position, let alone to advocate effectively for our national interests.

Second, though relatedly, ITSA asserts that the FCC’s engineers failed to appropriately evaluate safety impacts and interference concerns. It’s hard to see how this could be the case, given both the massive engineering record and the FCC’s globally recognized expertise in spectrum. As a general rule, the FCC leads the world in spectrum engineering (there is a reason things like mobile service and Wi-Fi started in the United States). No other federal agency (including DOT) has such extensive, varied, and lengthy experience with interference analysis. This allows the FCC to develop broadly applicable standards to protect all emergency communications. Every emergency first responder relies on this expertise every day that they use wireless communications to save lives. Here again, we see the wisdom in Congress delegating to a single expert agency the task of finding the right balance to meet all our wireless public-safety needs.

Third, the petition ambitiously asks the court to set aside all parts of the order, with the exception of the one portion that ITSA likes: freeing the top 30MHz of the band for use by C-V2X on a permanent basis. Given their other arguments, this assertion strains credulity. Either the FCC makes the decisions, or the DOT does. Giving federal agencies veto power over FCC decisions would be bad enough. Allowing litigants to play federal agencies against each other so they can mix and match results would produce chaos and/or paralysis in spectrum policy.

In short, ITSA is asking the court to fundamentally redefine the scope of FCC authority to administer spectrum when other federal agencies are involved; to undermine deference owed to FCC experts; and to do all of this while also holding that the FCC was correct on the one part of the order with which the complainants agree. This would make future progress in wireless technology effectively impossible.

We don’t let individual states decide which side of the road to drive on, or whether red or some other color traffic light means stop, because traffic rules only work when everybody follows the same rules. Wireless policy can only work if one agency makes the rules. Congress says that agency is the FCC. The courts (and other agencies) need to remember that.

Antitrust by Fiat

Jonathan M. Barnett —  23 February 2021

The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.

Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anti-competitive might just as easily be proven, after significant factual inquiry, to be pro-competitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.

Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts from having to do their homework before disrupting a targeted business and its employees, shareholders, customers and suppliers, the proposed bill empowers plaintiffs to allege and courts to “find” anti-competitive conduct without having to be bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.

This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.

Blank Check #1

The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.

In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory-pricing plaintiffs of having to show that pricing is below cost or likely to result ultimately in profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” the onus rests on the defendant to make that showing. This burden-shifting strategy encourages lagging firms to shift competition from the marketplace to the courthouse.

Blank Check #2

The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.

This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market power requirement can remove any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.

Anti-Market Antitrust

The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that assumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.

While this populist approach may be popular, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

U.S. antitrust regulators have a history of narrowly defining relevant markets—often to the point of absurdity—in order to create market power out of thin air. The Federal Trade Commission (FTC) famously declared that Whole Foods and Wild Oats operated in the “premium natural and organic supermarkets market”—a narrowly defined market designed to exclude other supermarkets carrying premium natural and organic foods, such as Walmart and Kroger. Similarly, for the Staples-Office Depot merger, the FTC

narrowly defined the relevant market as “office superstore” chains, which excluded general merchandisers such as Walmart, K-Mart and Target, who at the time accounted for 80% of office supply sales.

Texas Attorney General Ken Paxton’s complaint against Google’s advertising business, joined by the attorneys general of nine other states, continues this tradition of narrowing market definition to shoehorn market dominance where it may not exist.

For example, one recent paper critical of Google’s advertising business narrows the relevant market first from media advertising to digital advertising, then to the “open” supply of display ads and, finally, even further to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the authors conclude Google’s market share is “perhaps sufficient to confer market power.”

While whittling down market definitions may achieve the authors’ purpose of providing a roadmap to prosecute Google, one byproduct is a mishmash of market definitions that generates as many as 16 relevant markets for digital display and video advertising, in many of which Google doesn’t have anything approaching market power (and in some of which, in fact, Facebook, and not Google, is the most dominant player).

The Texas complaint engages in similar relevant-market gerrymandering. It claims that, within digital advertising, there exist several relevant markets and that Google monopolizes four of them:

  1. Publisher ad servers, which manage a publisher's inventory of ad space (e.g., on a newspaper's website or a blog);
  2. Display ad exchanges, the “marketplace” in which auctions directly match publishers’ selling of ad space with advertisers’ buying of ad space;
  3. Display ad networks, which are similar to exchanges, except a network acts as an intermediary that collects ad inventory from publishers and sells it to advertisers; and
  4. Display ad-buying tools, which include demand-side platforms that collect bids for ad placement with publishers.

The complaint alleges, “For online publishers and advertisers alike, the different online advertising formats are not interchangeable.” But this glosses over a bigger challenge for the attorneys general: Is online advertising a separate relevant market from offline advertising?

Digital advertising, of which display advertising is a small part, is only one of many channels through which companies market their products. About half of today’s advertising spending in the United States goes to digital channels, up from about 10% a decade ago. Approximately 30% of ad spending goes to television, with the remainder going to radio, newspapers, magazines, billboards and other “offline” forms of media.

Physical newspapers now account for less than 10% of total advertising spending. Traditionally, newspapers obtained substantial advertising revenues from classified ads. As internet usage increased, newspaper classifieds have been replaced by less costly and more effective internet classifieds—such as those offered by Craigslist—or targeted ads on Google Maps or Facebook.

The price of advertising has fallen steadily over the past decade, while output has risen. Spending on digital advertising in the United States grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the producer price index (PPI) for internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year.
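
These figures can be checked with simple arithmetic, decomposing spending into price times quantity; a minimal sketch, assuming nine years of compounding between 2010 and 2019:

```python
# Rough check of the growth rates implied by the spending and PPI figures,
# assuming nine years of compounding (2010-2019) and spending = price x quantity.

spend_2010, spend_2019 = 26e9, 130e9
years = 9

spend_growth = (spend_2019 / spend_2010) ** (1 / years) - 1   # ~ +19.6%/yr ("20% a year")
price_growth = (1 - 0.40) ** (1 / years) - 1                  # PPI down ~40% -> ~ -5.5%/yr

# Quantity growth is spending growth net of price growth.
quantity_growth = (1 + spend_growth) / (1 + price_growth) - 1  # ~ +26.6%/yr ("approximately 27%")

print(f"spending {spend_growth:+.1%}/yr, price {price_growth:+.1%}/yr, "
      f"quantity {quantity_growth:+.1%}/yr")
```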

Since 2000, advertising spending has been falling as a share of gross domestic product, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues is consistent with a growing and increasingly competitive market, rather than one of rising concentration and reduced competition.

There is little or no empirical data evaluating the extent to which online and offline advertising constitute distinct markets or the extent to which digital display is a distinct submarket of online advertising. As a result, analysis of adtech competition has relied on identifying several technical and technological factors—as well as the say-so of participants in the business—that the analysts assert distinguish online from offline and establish digital display (versus digital search) as a distinct submarket. This approach has been used and accepted, especially in cases in which pricing data has not been available.

But the pricing information that is available raises questions about the extent to which online advertising is a distinct market from offline advertising. For example, Avi Goldfarb and Catherine Tucker find that, when local regulations prohibit offline direct advertising, search advertising is more expensive, indicating that search and offline advertising are substitutes. In other research, they report that online display advertising circumvents, in part, local bans on offline billboard advertising for alcoholic beverages. In both studies, Goldfarb and Tucker conclude their results suggest online and offline advertising are substitutes. They also conclude this substitution suggests that online and offline markets should be considered together in the context of antitrust.

While this information is not sufficient to define a broader relevant market, it raises questions about relying solely on technical or technological distinctions and the say-so of market participants.

In the United States, plaintiffs do not get to define the relevant market. That is up to the judge or the jury. Plaintiffs have the burden to convince the court that a proposed narrow market definition is the correct one. With strong evidence that online and offline ads are substitutes, the court should not blindly accept the gerrymandered market definitions posited by the attorneys general.

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).  

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law
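
To make the game-theoretic concept concrete, here is a minimal sketch with purely hypothetical payoffs (none drawn from the case): when each type's best response differs, the strategies separate, and an uninformed observer (here, a court) can infer a firm's type from its conduct.

```python
# Toy signaling model (hypothetical payoffs). In a separating equilibrium,
# each firm type rationally chooses a different action, so observing the
# action reveals the type.

payoffs = {
    # action: (payoff to a procompetitive type, payoff to an anticompetitive type)
    "ordinary competition":       (5, 3),
    "costly exclusionary tactic": (-2, 7),  # sacrifice pays off only via later recoupment
}

def best_action(type_index):
    # Each type plays its best response given its own payoffs.
    return max(payoffs, key=lambda action: payoffs[action][type_index])

strategies = {"procompetitive": best_action(0), "anticompetitive": best_action(1)}
print(strategies)
# {'procompetitive': 'ordinary competition', 'anticompetitive': 'costly exclusionary tactic'}

# Because the strategies separate, observing the costly tactic licenses an
# inference of anticompetitive type. If both types would rationally choose the
# same action (a pooling equilibrium), the same conduct supports no inference.
```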

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and "mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect." (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986).)

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it's pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant. 

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In a word, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future, anticompetitive returns, is irrational.
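
A toy numerical comparison (all figures hypothetical) helps fix the distinction between the terminated arrangement and the options currently on offer:

```python
# Hypothetical numbers only. The Aspen Skiing / Trinko inference turns on the
# monopolist's *current* options, not on the profitability of the old deal.

profit_no_deal       = 0   # walk away from the rival entirely
profit_current_offer = 4   # short-run profit from the arrangement now on offer
profit_old_deal      = 6   # the terminated arrangement (profitable, but not the test)

# Refusing an offer that beats no deal at all is a short-run profit sacrifice,
# irrational unless the firm expects future, anticompetitive returns.
sacrifice_shown = profit_current_offer > profit_no_deal
print("short-run profit sacrifice shown:", sacrifice_shown)

# Merely showing the old deal was profitable (profit_old_deal > 0), as the
# district court did, establishes no such sacrifice.
```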

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it is not enough that a firm may have a “duty” to deal, as that term is colloquially used, arising from some obligation other than an antitrust duty; evasion of such an obligation supports no inference that the conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases (the sketch following this list illustrates the point).
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
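
To see how much turns on these unquantified parameters, consider a minimal sketch of the first effect (hypothetical prices and elasticities, constant-elasticity demand, full pass-through assumed):

```python
# Toy illustration (hypothetical numbers): the sales effect of a per-chip
# "surcharge" depends on demand elasticity and on the surcharge's size
# relative to the phone's price -- quantities the court never determined.

def quantity_change(phone_price, surcharge, elasticity, pass_through=1.0):
    """Percentage change in phones sold under constant-elasticity demand,
    assuming OEMs pass `pass_through` of the surcharge on to consumers."""
    new_price = phone_price + pass_through * surcharge
    return (new_price / phone_price) ** (-elasticity) - 1

phone_price = 800.0  # hypothetical handset price
surcharge = 5.0      # hypothetical per-unit surcharge

for elasticity in (0.2, 1.0, 3.0):
    print(f"elasticity {elasticity}: phone sales change "
          f"{quantity_change(phone_price, surcharge, elasticity):+.2%}")

# With inelastic demand (0.2), a $5 surcharge on an $800 phone moves sales by
# roughly -0.1%; even at elasticity 3 the effect is under 2%. And if OEMs
# absorb the surcharge instead (pass_through=0), sales do not move at all.
```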

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Cristina Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)


The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation, or even the dissolution of some of the world’s most innovative companies, essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firm growth as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could both become more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? All of a sudden, the effort and ability to obtain exemptions will be massively increased, because the claimed justifications for those exemptions, which already encompass non-economic goals, will become far more persuasive. We might even find, again, that we end up with even more concentration because the exceptions could subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, not least on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived, endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. If there are gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

Today the International Center for Law & Economics (ICLE) submitted an amicus brief urging the Supreme Court to review the DC Circuit’s 2016 decision upholding the FCC’s 2015 Open Internet Order. The brief was authored by Geoffrey A. Manne, Executive Director of ICLE, and Justin (Gus) Hurwitz, Assistant Professor of Law at the University of Nebraska College of Law and ICLE affiliate, with able assistance from Kristian Stout and Allen Gibby of ICLE. Jeffrey A. Mandell of the Wisconsin law firm of Stafford Rosenbaum collaborated in drafting the brief and provided invaluable pro bono legal assistance, for which we are enormously grateful. Laura Lamansky of Stafford Rosenbaum also assisted. 

The following post discussing the brief was written by Jeff Mandell (originally posted here).

Courts generally defer to agency expertise when reviewing administrative rules that regulate conduct in areas where Congress has delegated authority to specialized executive-branch actors. An entire body of law—administrative law—governs agency actions and judicial review of those actions. And at the federal level, courts grant agencies varying degrees of deference, depending on what kind of function the agency is performing, how much authority Congress delegated, and the process by which the agency adopts or enforces policies.

Should courts be more skeptical when an agency changes a policy position, especially if the agency is reversing prior policy without a corresponding change to the governing statute? Daniel Berninger v. Federal Communications Commission, No. 17-498 (U.S.), raises these questions. And this week Stafford Rosenbaum was honored to serve as counsel of record for the International Center for Law & Economics (“ICLE”) in filing an amicus curiae brief urging the U.S. Supreme Court to hear the case and to answer these questions.

ICLE’s amicus brief highlights new academic research suggesting that systematic problems undermine judicial review of agency changes in policy. The brief also points out that judicial review is complicated by conflicting signals from the Supreme Court about the degree of deference that courts should accord agencies in reviewing reversals of prior policy. And the brief argues that the specific policy change at issue in this case lacks a sufficient basis but was affirmed by the court below as the result of a review that was, but should not have been, “particularly deferential.”

In 2015, the Federal Communications Commission (“FCC”) issued the Open Internet Order (“OIO”), which required Internet Service Providers to abide by a series of regulations popularly referred to as net neutrality. To support these regulations, the FCC interpreted the Communications Act of 1934 to grant it authority to heavily regulate broadband internet service. This interpretation reversed a long-standing agency understanding of the statute as permitting only limited regulation of broadband service.

The FCC ostensibly based the OIO on factual and legal analysis. However, ICLE argues, the OIO actually rests on questionable factual reinterpretations and misunderstandings of statutory interpretation, adopted more to support radical changes in FCC policy than for their descriptive accuracy. When a variety of interested parties challenged the OIO, the U.S. Court of Appeals for the D.C. Circuit affirmed the regulations. In doing so, the court afforded substantial deference to the FCC—so much that the D.C. Circuit never addressed the reasonableness of the FCC’s decisionmaking process in reversing prior policy.

ICLE’s amicus brief argues that the D.C. Circuit’s decision “is both in tension with [the Supreme] Court’s precedents and, more, raises exceptionally important and previously unaddressed questions about th[e] Court’s precedents on judicial review of agency changes of policy.” Without further guidance from the Supreme Court, the brief argues, “there is every reason to believe” the FCC will again reverse its position on broadband regulation, such that “the process will become an endless feedback loop—in the case of this regulation and others—at great cost not only to regulated entities and their consumers, but also to the integrity of the regulatory process.”

The ramifications of the Supreme Court accepting this case would be twofold. First, administrative agencies would gain guidance for their decisionmaking processes in considering changes to existing policies. Second, lower courts would gain clarity on agency deference issues, making judicial review more uniform and appropriate where agencies reverse prior policy positions.

Read the full brief here.

I recently published a piece in the Hill welcoming the Canadian Supreme Court’s decision in Google v. Equustek. In this post I expand (at length) upon my assessment of the case.

In its decision, the Court upheld injunctive relief against Google, directing the company to de-index websites offering the infringing goods in question, regardless of the location of those sites (and even though Google itself was neither a party to the case nor in any way held liable for the infringement). As a result, the Court’s ruling would affect Google’s conduct outside of Canada as well as within it.

The case raises some fascinating and thorny issues, but, in the end, the Court navigated them admirably.

Some others, however, were not so… welcoming of the decision (see, e.g., here and here).

The primary objection to the ruling seems to be, in essence, that it is the top of a slippery slope: “If Canada can do this, what’s to stop Iran or China from doing it? Free expression as we know it on the Internet will cease to exist.”

This is a valid concern, of course — in the abstract. But for reasons I explain below, we should see this case — and, more importantly, the approach adopted by the Canadian Supreme Court — as reassuring, not foreboding.

Some quick background on the exercise of extraterritorial jurisdiction in international law

The salient facts in, and the fundamental issue raised by, the case were neatly summarized by Hugh Stephens:

[The lower Court] issued an interim injunction requiring Google to de-index or delist (i.e. not return search results for) the website of a firm (Datalink Gateways) that was marketing goods online based on the theft of trade secrets from Equustek, a Vancouver, B.C., based hi-tech firm that makes sophisticated industrial equipment. Google wants to quash a decision by the lower courts on several grounds, primarily that the basis of the injunction is extra-territorial in nature and that if Google were to be subject to Canadian law in this case, this could open a Pandora’s box of rulings from other jurisdictions that would require global delisting of websites thus interfering with freedom of expression online, and in effect “break the Internet”.

The question of jurisdiction with regard to cross-border conduct is clearly complicated and evolving. But, in important ways, it isn’t anything new just because the Internet is involved. As Jack Goldsmith and Tim Wu (yes, Tim Wu) wrote (way back in 2006) in Who Controls the Internet?: Illusions of a Borderless World:

A government’s responsibility for redressing local harms caused by a foreign source does not change because the harms are caused by an Internet communication. Cross-border harms that occur via the Internet are not any different than those outside the Net. Both demand a response from governmental authorities charged with protecting public values.

As I have written elsewhere, “[g]lobal businesses have always had to comply with the rules of the territories in which they do business.”

Traditionally, courts have dealt with the extraterritoriality problem by applying a rule of comity. As my colleague, Geoffrey Manne (Founder and Executive Director of ICLE), reminds me, the principle of comity largely originated in the work of the 17th Century Dutch legal scholar, Ulrich Huber. Huber wrote that comitas gentium (“courtesy of nations”) required the application of foreign law in certain cases:

[Sovereigns will] so act by way of comity that rights acquired within the limits of a government retain their force everywhere so far as they do not cause prejudice to the powers or rights of such government or of their subjects.

And, notably, Huber wrote that:

Although the laws of one nation can have no force directly with another, yet nothing could be more inconvenient to commerce and to international usage than that transactions valid by the law of one place should be rendered of no effect elsewhere on account of a difference in the law.

The basic principle has been recognized and applied in international law for centuries. Of course, the flip side of the principle is that sovereign nations also get to decide for themselves whether to enforce foreign law within their jurisdictions. To summarize Huber (as well as Lord Mansfield, who brought the concept to England, and Justice Story, who brought it to the US):

All three jurists were concerned with deeply polarizing public issues — nationalism, religious factionalism, and slavery. For each, comity empowered courts to decide whether to defer to foreign law out of respect for a foreign sovereign or whether domestic public policy should triumph over mere courtesy. For each, the court was the agent of the sovereign’s own public law.

The Canadian Supreme Court’s well-reasoned and admirably restrained approach in Equustek

Reconciling the potential conflict between the laws of Canada and those of other jurisdictions was, of course, a central subject of consideration for the Canadian Court in Equustek. The Supreme Court, as described below, weighed a variety of factors in determining the appropriateness of the remedy. In analyzing the competing equities, the Supreme Court set out the following framework:

[I]s there a serious issue to be tried; would the person applying for the injunction suffer irreparable harm if the injunction were not granted; and is the balance of convenience in favour of granting the interlocutory injunction or denying it. The fundamental question is whether the granting of an injunction is just and equitable in all of the circumstances of the case. This will necessarily be context-specific. [Here, as throughout this post, bolded text represents my own, added emphasis.]

Applying that standard, the Court held that because ordering an interlocutory injunction against Google was the only practical way to prevent Datalink from flouting the court’s several orders, and because there were no sufficient, countervailing comity or freedom of expression concerns in this case that would counsel against such an order being granted, the interlocutory injunction was appropriate.

I draw particular attention to the following from the Court’s opinion:

Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, “Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong”.

And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case. As Groberman J.A. concluded:

In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation. The order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

In fact, as Andrew Keane Woods writes at Lawfare:

Under longstanding conflicts of laws principles, a court would need to weigh the conflicting and legitimate governments’ interests at stake. The Canadian court was eager to undertake that comity analysis, but it couldn’t do so because the necessary ingredient was missing: there was no conflict of laws.

In short, the Canadian Supreme Court, while acknowledging the importance of comity and appropriate restraint in matters with extraterritorial effect, carefully weighed the equities in this case and found that they favored the grant of extraterritorial injunctive relief. As the Court explained:

Datalink [the direct infringer] and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. Equustek has made efforts to locate Datalink with limited success. Datalink is only able to survive — at the expense of Equustek’s survival — on Google’s search engine which directs potential customers to Datalink’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to Equustek pending the trial, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.

As I have stressed, key to the Court’s reasoning was its close consideration of possible countervailing concerns and its entirely fact-specific analysis. By the very terms of the decision, the Court made clear that its balancing would not necessarily lead to the same result where sensibilities or core values of other nations would be offended. In this particular case, they were not.

How critics of the decision (and there are many) completely miss the true import of the Court’s reasoning

In other words, the holding in this case was a function of how, given the facts of the case, the ruling would affect the particular core concerns at issue: protection and harmonization of global intellectual property rights on the one hand, and concern for the “sensibilities of other nations,” including their concern for free expression, on the other.

This should be deeply reassuring to those now criticizing the decision. And yet… it’s not.

Whether because they haven’t actually read or properly understood the decision, or because they are merely grandstanding, some commenters are proclaiming that the decision marks the End Of The Internet As We Know It — you know, it’s going to break the Internet. Or something.

Human Rights Watch, an organization I generally admire, issued a statement including the following:

The court presumed no one could object to delisting someone it considered an intellectual property violator. But other countries may soon follow this example, in ways that more obviously force Google to become the world’s censor. If every country tries to enforce its own idea of what is proper to put on the Internet globally, we will soon have a race to the bottom where human rights will be the loser.

The British Columbia Civil Liberties Association added:

Here it was technical details of a product, but you could easily imagine future cases where we might be talking about copyright infringement, or other things where people in private lawsuits are wanting things to be taken down off the internet that are more closely connected to freedom of expression.

From the other side of the traditional (if insufficiently nuanced) “political spectrum,” AEI’s Ariel Rabkin asserted that

[O]nce we concede that Canadian courts can regulate search engine results in Turkey, it is hard to explain why a Turkish court shouldn’t have the reciprocal right. And this is no hypothetical — a Turkish court has indeed ordered Twitter to remove a user (AEI scholar Michael Rubin) within the United States for his criticism of Erdogan. Once the jurisdictional question is decided, it is no use raising free speech as an issue. Other countries do not have our free speech norms, nor Canada’s. Once Canada concedes that foreign courts have the right to regulate Canadian search results, they are on the internet censorship train, and there is no egress before the end of the line.

In this instance, in particular, it is worth noting not only the complete lack of acknowledgment of the Court’s articulated constraints on taking action with extraterritorial effect, but also the fact that Turkey (among others) has hardly been waiting for approval from Canada before taking action.   

And then there’s EFF (of course). EFF, fairly predictably, suggests first — with unrestrained hyperbole — that the Supreme Court held that:

A country has the right to prevent the world’s Internet users from accessing information.

Dramatic hyperbole aside, that’s also a stilted way to characterize the content at issue in the case. But it is important to EFF’s misleading narrative to begin with the assertion that offering infringing products for sale is “information” to which access by the public is crucial. But, of course, the distribution of infringing products is hardly “expression,” as most of us would understand that term. To claim otherwise is to denigrate the truly important forms of expression that EFF claims to want to protect.

And, it must be noted, even if there were expressive elements at issue, infringing “expression” is always subject to restriction under the copyright laws of virtually every country in the world (and free speech laws, where they exist).

Nevertheless, EFF writes that a decision that:

[W]ould cut off access to information for U.S. users would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose the one country’s restrictive laws regarding free expression on the rest of the world.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

As always with EFF missives, caveat lector applies: None of this is fair or accurate. EFF (like the other critics quoted above) is looking only at the result — the specific contours of the global order related to the Internet — and not to the reasoning of the decision itself.

Quite tellingly, EFF urges its readers to ignore the case in front of them in favor of a theoretical one. That is unfortunate. Were EFF, et al. to pay closer attention, they would be celebrating this decision as a thoughtful, restrained, respectful, and useful standard to be employed as a foundational decision in the development of global Internet governance.

The Canadian decision is (as I have noted, but perhaps still not with enough repetition…) predicated on achieving equity upon close examination of the facts, and giving due deference to the sensibilities and core values of other nations in making decisions with extraterritorial effect.

Properly understood, the ruling is a shield against intrusions that undermine freedom of expression, and not an attack on expression.

EFF subverts the reasoning of the decision and thus camouflages its true import, all for the sake of furthering its apparently limitless crusade against all forms of intellectual property. The ruling can be read as an attack on expression only if one ascribes to the distribution of infringing products the status of protected expression — so that’s what EFF does. But distribution of infringing products is not protected expression.

Extraterritoriality on the Internet is complicated — but that undermines, rather than justifies, critics’ opposition to the Court’s analysis

There will undoubtedly be other cases that present more difficult challenges than this one in defining the jurisdictional boundaries of courts’ abilities to address Internet-based conduct with multi-territorial effects. But the guideposts employed by the Supreme Court of Canada will be useful in informing such decisions.

Of course, some states don’t (or won’t, when it suits them) adhere to principles of comity. But that was true long before the Equustek decision. And, frankly, the notion that this decision gives nations like China or Iran political cover for global censorship is ridiculous. Nations that wish to censor the Internet will do so regardless. If anything, reference to this decision (which, let me spell it out again, highlights the importance of avoiding relief that would interfere with core values or sensibilities of other nations) would undermine their efforts.

Rather, the decision will be far more helpful in combating censorship and advancing global freedom of expression. Indeed, as noted by Hugh Stephens in a recent blog post:

While the EFF, echoed by its Canadian proxy OpenMedia, went into hyperventilation mode with the headline, “Top Canadian Court permits Worldwide Internet Censorship”, respected organizations like the Canadian Civil Liberties Association (CCLA) welcomed the decision as having achieved the dual objectives of recognizing the importance of freedom of expression and limiting any order that might violate that fundamental right. As the CCLA put it,

While today’s decision upholds the worldwide order against Google, it nevertheless reflects many of the freedom of expression concerns CCLA had voiced in our interventions in this case.

As I noted in my piece in the Hill, this decision doesn’t answer all of the difficult questions related to identifying proper jurisdiction and remedies with respect to conduct that has global reach; indeed, that process will surely be perpetually unfolding. But, as reflected in the comments of the Canadian Civil Liberties Association, it is a deliberate and well-considered step toward a fair and balanced way of addressing Internet harms.

With apologies for quoting myself, I noted the following in an earlier piece:

I’m not unsympathetic to Google’s concerns. As a player with a global footprint, Google is legitimately concerned that it could be forced to comply with the sometimes-oppressive and often contradictory laws of countries around the world. But that doesn’t make it — or any other Internet company — unique. Global businesses have always had to comply with the rules of the territories in which they do business… There will be (and have been) cases in which taking action to comply with the laws of one country would place a company in violation of the laws of another. But principles of comity exist to address the problem of competing demands from sovereign governments.

And as Andrew Keane Woods noted:

Global takedown orders with no limiting principle are indeed scary. But Canada’s order has a limiting principle. As long as there is room for Google to say to Canada (or France), “Your order will put us in direct and significant violation of U.S. law,” the order is not a limitless assertion of extraterritorial jurisdiction. In the instance that a service provider identifies a conflict of laws, the state should listen.

That is precisely what the Canadian Supreme Court’s decision contemplates.

No one wants an Internet based on the lowest common denominator of acceptable speech. Yet some appear to want an Internet based on the lowest common denominator for the protection of original expression. These advocates thus endorse theories of jurisdiction that would deny societies the ability to enforce their own laws, just because sometimes those laws protect intellectual property.

And yet that reflects little more than an arbitrary prioritization of those critics’ personal preferences. In the real world (including the real online world), protection of property is an important value, deserving reciprocity and courtesy (comity) as much as does speech. Indeed, the G20 Digital Economy Ministerial Declaration adopted in April of this year recognizes the importance to the digital economy of promoting security and trust, including through the provision of adequate and effective intellectual property protection. Thus the Declaration expresses the recognition of the G20 that:

[A]pplicable frameworks for privacy and personal data protection, as well as intellectual property rights, have to be respected as they are essential to strengthening confidence and trust in the digital economy.

Moving forward in an interconnected digital universe will require societies to make a series of difficult choices, balancing both competing values and competing claims from different jurisdictions. Just as in the offline world, navigating this path will require flexibility and skepticism (if not outright rejection) of absolutism, including with respect to the application of fundamental values. Even values like freedom of expression, which naturally require a balancing of competing interests, will need to be reexamined. We should endeavor to find the fine line between allowing individual countries to enforce their own national judgments and tolerating the different choices that other countries have made. This will not be easy, as is well illustrated by something Alice Marwick wrote earlier this year:

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

* * *

We need to move beyond this simplistic binary of free speech/censorship online. That is just as true for libertarian-leaning technologists as it is neo-Nazi provocateurs…. Aggressive online speech, whether practiced in the profanity and pornography-laced environment of 4Chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

While I (and the Canadian Supreme Court, for that matter) share EFF’s unease over the scope of extraterritorial judgments, I fundamentally disagree with EFF that the Equustek decision “largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights.”

In fact, it is EFF’s position that comes much closer to indifference toward the laws and values of other countries; EFF’s position would essentially always prioritize the particular speech values adopted in the United States, regardless of whether the countries affected by a dispute had adopted them. It is therefore inconsistent with the true nature of comity.

Absolutism and exceptionalism will not be a sound foundation for achieving global consensus and the effective operation of law. As stated by the Canadian Supreme Court in Equustek, courts should enforce the law — whatever the law is — to the extent that such enforcement does not substantially undermine the core sensitivities or values of nations where the order will have effect.

EFF ignores the process in which the Court engaged precisely because EFF (not another country, but EFF) doesn’t find the enforcement of intellectual property rights compelling. But that unprincipled approach would naturally cut the other way in a case where a court sought to protect a value that EFF does care about. Such a position arbitrarily elevates EFF’s idiosyncratic preferences. That is simply not a viable basis for constructing good global Internet governance.

If the Internet is both everywhere and nowhere, our responses must reflect that reality, and be based on the technology-neutral application of laws, not the abdication of responsibility premised upon an outdated theory of tech exceptionalism under which cyberspace is free from the application of the laws of sovereign nations. That is not the path to either freedom or prosperity.

To realize the economic and social potential of the Internet, we must be guided both by a determination to meaningfully address harms and by a sober reluctance to interfere in the affairs of other states. The Supreme Court of Canada’s decision in Google v. Equustek has planted a flag in this space. It serves no one to pretend that the Court decided that a country has an unfettered right to censor the Internet. That’s not what it held, and we should be grateful for that. To suggest otherwise may indeed be self-fulfilling.

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission decision in the Google Shopping case (which cannot be assessed until the text of the decision is available), two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a remarkable statement. In 2016, another official EU service published statistics describing Alphabet as having increased its R&D spending by 22% and ranking it the world’s fourth-largest R&D investor. Sure, things can always be better. And sure, this does not excuse everything. But still: the press-conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or a “framework” that will inform how dominant Internet platforms should display, intermediate, and market their services and those of their competitors. This may fuel additional complaints by vertical search rivals, not only against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn a lesson from the Microsoft remedy quagmire: it refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for monitoring compliance and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows (“Windows Naked” sold only 11,787 copies, likely bought by tech bootleggers keen to acquire the first piece of software ever designed by antitrust officials), or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of this can be found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases, where pricing remedies are costly, impractical, and ultimately inefficient.

On the other hand, readers will not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, leading to a lawful degradation of consumer welfare if Google were ever to abandon rich-format displays for its own shopping services and those of rivals alike.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). Rather, the Commission objects to the selective application of Google’s generic search algorithms: demotion mechanisms were applied to rival comparison-shopping services but not to Google’s own. This is an interesting, and subtle, clarification, given all the coverage this topic has attracted in the recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and, more generally, its rights of defense) when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts further than the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements, so there is little competitive relationship between the two products. Can the Commission leverage the ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations were completely distinct, or must it play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or the legal theory of liability will ever succeed before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly, almost irrationally, severe. As I have noted elsewhere, the bottom line in the EU case law on unilateral conduct is to treat the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, the exclusion of any and every firm is a per se concern, regardless of evidence of efficiency, entry, or rivalry.

In turn, I tend to think that Google has a stronger case from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) the corollary expectation of no fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) a full seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This could thus be a test case for setting boundaries on how freely the Commission may make a U-turn in a case (in the Commissioner’s words, to “take the case forward in a different way”).