The American concept of “the rule of law” (see here) is embodied in the Due Process Clause of the Fifth Amendment to the U.S. Constitution, and in the constitutional principles of separation of powers, an independent judiciary, a government under law, and equality of all before the law (see here).  It holds that the executive must comply with the law because ours is “a government of laws, and not of men,” or, as Justice Anthony Kennedy put it in a 2006 address to the American Bar Association, “that the Law is superior to, and thus binds, the government and all its officials.”  (See here.)  More specifically, and consistent with these broader formulations, the late and great legal philosopher Friedrich Hayek wrote that the rule of law “means the government in all its actions is bound by rules fixed and announced beforehand – rules which make it possible to see with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.”  (See here.)  In other words, as former Boston University Law School Dean Ron Cass put it, the rule of law involves “a system of binding rules” adopted and applied by a valid government authority that embody “clarity, predictability, and equal applicability.”  (See here.)

Regrettably, by engaging in regulatory overreach and ignoring statutory limitations on the scope of their authority, federal administrative agencies have shown scant appreciation for rule of law restraints under the current administration (see here and here for commentaries on this problem by Heritage Foundation scholars).  Although many agencies could be singled out, the Federal Communications Commission’s (FCC) actions in recent years have been especially egregious (see here).

A prime example of regulatory overreach by the FCC that flouted the rule of law was its promulgation in 2015 of an order preempting state laws in Tennessee and North Carolina that prevented municipally-owned broadband providers from offering broadband service beyond their geographic boundaries (Municipal Broadband Order, see here).  As a matter of substance, this decision ignored powerful economic evidence that municipally-provided broadband services often involve wasteful subsidies for financially troubled government-owned providers that interfere with effective private sector competition and are economically harmful (my analysis is here).  As a legal matter, the Municipal Broadband Order went beyond the FCC’s statutory authority and raised grave constitutional problems, thereby ignoring the constitutional limitations placed on the exercise of governmental powers that lie at the heart of the rule of law (see here).  The Order lacked a sound legal footing in basing its authority on Section 706 of the Telecommunications Act of 1996, which merely authorizes the FCC to promote local broadband competition and investment (a goal the Order did not advance) and says nothing about preemption.  In addition, the FCC’s invocation of preemption authority trenched upon the power of the states to control their subordinate governmental entities, guaranteed to them by the Constitution as an essential element of their sovereignty in our federal system (see here).  What’s more, the Chattanooga, Tennessee and Wilson, North Carolina municipal broadband systems that had requested FCC preemption imposed content-based restrictions on users of their networks that raised serious First Amendment issues (see here).  Specifically, those systems’ bans on the transmittal of various sorts of “abusive” language appeared to be too broad to withstand First Amendment “strict scrutiny.”  Moreover, by requiring prospective broadband enrollees to agree not to sue their provider as an initial condition of service, two of the municipal systems arguably unconstitutionally coerced users to forgo exercise of their First Amendment rights.

Fortunately, on August 10, 2016, in Tennessee v. FCC, the U.S. Court of Appeals for the Sixth Circuit struck down the Municipal Broadband Order, pithily stating:

The FCC order essentially serves to re-allocate decision-making power between the states and their municipalities. This is shown by the fact that no federal statute or FCC regulation requires the municipalities to expand or otherwise to act in contravention of the preempted state statutory provisions. This preemption by the FCC of the allocation of power between a state and its subdivisions requires at least a clear statement in the authorizing federal legislation. The FCC relies upon § 706 of the Telecommunications Act of 1996 for the authority to preempt in this case, but that statute falls far short of such a clear statement. The preemption order must accordingly be reversed.

The Sixth Circuit’s decision has important policy ramifications that extend beyond the immediate controversy, as Free State Foundation Scholars Randolph May and Seth Cooper explain:

The FCC’s Municipal Broadband Preemption Order would have turned constitutional federalism inside out by severing local political subdivisions’ accountability from the state governments that created them. Had the agency’s order been upheld, the FCC surely would have preempted several other state laws restricting municipalities’ ownership and operation of broadband networks. Several state governments would have been locked into an unwise policy of favoring municipal broadband business ventures with a track record of legal and proprietary conflicts of interest, expensive financial failures, and burdensome debts for local taxpayers.

The avoidance of a series of bad side effects in a corner of the regulatory world is not, however, sufficient grounds for breaking out the champagne.  From a global perspective, the Sixth Circuit’s Tennessee v. FCC decision, while helpful, does not address the broader problem of agency disregard for the limitations of constitutional federalism and the rule of law.  Administrative overreach, like a chronic debilitating virus, saps the initiative of the private sector (and, more generally, the body politic) and undermines its vitality.  In addition, not all federal judges can be counted on to rein in legally unjustified rules (which in any event impose costly delay and uncertainty, even if they are eventually overturned).  What is needed is an administration that emphasizes by word and deed that it is committed to constitutionalist rule of law principles – and insists that its appointees (including commissioners of independent agencies) share that philosophy.  Let us hope that we do not have to wait too long for such an administration.

Discussion

In recent years, U.S. government policymakers have recounted various alleged market deficiencies associated with patent licensing practices, as part of a call for patent policy “reforms” – with the “reforms” likely to have the effect of weakening patent rights.  In particular, antitrust enforcers have expressed concerns that:  (1) the holder of a patent covering the technology needed to implement some aspect of a technical standard (a “standard-essential patent,” or SEP) could “hold up” producers that utilize the standard by demanding anticompetitively high royalty payments; (2) the accumulation of royalties for multiple complementary patent licenses needed to make a product exceeds the bundled monopoly rate that would be charged if all patents were under common control (“royalty stacking”); (3) an overlapping set of patent rights requiring that producers seeking to commercialize a new technology obtain licenses from multiple patentees deters innovation (“patent thickets”); and (4) the dispersed ownership of complementary patented inventions results in “excess” property rights, the underuse of resources, and economic inefficiency (“the tragedy of the anticommons”).  (See, for example, Federal Trade Commission and U.S. Justice Department reports on antitrust and intellectual property policy, here, here, and here).

Although some commentators have expressed skepticism about the real-world incidence of these scenarios, relatively little attention has been paid to the underlying economic assumptions that give rise to the “excessive royalty” problem they portray.  Very recently, however, Professor Daniel F. Spulber of Northwestern University circulated a paper that questions those assumptions.  The paper points out that claims of economic harm due to excessive royalty charges rest critically on the assumption that individual patent owners choose royalties using posted prices, thereby generating total royalties above the monopoly level that would be charged if all complementary patents were owned in common.  In other words, it is assumed that interdependencies among complements are ignored, with each patentee separately charging its individual monopoly price – the “Cournot complements” problem.

Professor Spulber explains, however, that real-world patent licensing usually involves bargaining rather than posted prices, because licensing entails long-term contractual relationships between patentees and producers rather than immediate exchange.  Significantly, the paper shows that bargaining procedures reflecting long-term relationships maximize the joint profits of inventors (patentees) and producers, with total licensing royalties falling below bundled monopoly royalties (rather than above them, as under posted prices).  In short, bargaining over long-term patent licensing contracts yields an efficient market outcome, in marked contrast to the inefficient outcome posited by those who (wrongly) assume patent licensing under posted prices.  In other words, real world patent holders (as opposed to the inward-looking, non-cooperative, posted-price patentees of government legend) tend to engage in highly fruitful licensing negotiations that yield socially efficient outcomes.  This finding neatly explains why examples of economically-debilitating patent thickets, royalty stacks, hold-ups, and patent anti-commons, like unicorns (or perhaps, to be fair, black swans), are amazingly hard to spot in the real world.  It also explains why the business sector that should in theory be most prone to such “excessive patent” problems, the telecommunications industry (which involves many different patentees and producers, and tens of thousands of patents), has been (and remains) a leader in economic growth and innovation.  (See also here, for an article explaining that smartphone innovation has soared because of the large number of patents.)
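To see the arithmetic behind the posted-price (“Cournot complements”) story that Professor Spulber challenges, consider the minimal sketch below.  The linear demand curve, the symmetric patentees, and the function names are illustrative assumptions of mine, not elements of Professor Spulber’s model; the sketch simply shows why independently posted royalties “stack” above the bundled monopoly level, while efficient bargaining would not.

```python
# Toy "Cournot complements" illustration (a stylized example of mine,
# not Professor Spulber's model).  Competitive producers pass total
# per-unit royalties R through to price, and demand is linear: Q = 1 - R.

def posted_price_total_royalty(n):
    # Each of n complementary patentees independently posts r_i to
    # maximize r_i * (1 - r_i - rivals' royalties).  The symmetric Nash
    # equilibrium is r_i = 1 / (n + 1), so total royalties stack to
    # n / (n + 1), rising toward 1 (and output toward 0) as n grows.
    return n / (n + 1)

def bundled_monopoly_royalty():
    # A single owner of all the complementary patents maximizes
    # R * (1 - R), yielding R = 1/2: lower than the stacked total.
    return 0.5

for n in (2, 5, 10):
    stacked = posted_price_total_royalty(n)
    bundled = bundled_monopoly_royalty()
    print(f"{n:2d} patentees: posted-price total = {stacked:.3f} "
          f"(output {1 - stacked:.3f}) vs. bundled monopoly = {bundled:.3f} "
          f"(output {1 - bundled:.3f})")
```

With just two patentees, posted prices yield total royalties of 0.667 against 0.5 under a bundled monopoly, and the gap widens as patents multiply.  Professor Spulber’s point, as I read it, is that bargaining over long-term licenses instead maximizes the joint profits of inventors and producers, so negotiated totals fall at or below the bundled level and the “stack” never materializes.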

Professor Spulber’s concluding section highlights the policy implications of his research:

The efficiency of the bargaining outcome differs from the outcome of the Cournot posted prices model. Understanding the role of bargaining helps address a host of public policy concerns, including SEP holdup, royalty stacking, patent thickets, the tragedy of the anticommons, and justification for patent pools. The efficiency of the bargaining outcome suggests the need for antitrust forbearance toward industries that combine multiple inventions, including SEPs.

Professor Spulber’s reference to “antitrust forbearance” is noteworthy.  As I have previously pointed out (see, for example, here, here, and here), in recent years U.S. antitrust enforcers have taken positions that tend to favor the weakening of patent rights.  Those positions are justified by the “patent policy problems” that Professor Spulber’s paper debunks, as well as by an emphasis on low-quality “probabilistic patents” (see, for example, here) that ignores a growing body of literature (both theoretical and empirical) on the economic benefits of a strong patent system (see, for example, here and here).

In sum, Professor Spulber’s impressive study is one more piece of compelling evidence that the federal government’s implicitly “anti-patent” positions are misguided.  The government should reject those positions and restore its previous policy of respect for robust patent rights – a policy that promotes American innovation and economic growth.

Appendix

While Professor Spulber’s long paper is well worth a careful read, key excerpts from his debunking of prominent “excessive patent” stories are set forth below.

SEP Holdups

Standard Setting Organizations (SSOs) are voluntary organizations that establish and disseminate technology standards for industries. Patent owners may declare that their patents are essential to manufacturing products that conform to the standard. Many critics of SSOs suggest that inclusion of SEPs in technology standards allows patent owners to charge much higher royalties than if the SEPs were not included in the standard. SEPs are said to cause a form of “holdup” if producers using the patented technology would incur high costs of switching to alternative technologies. . . . [Academic] discussions of the effects of SEPs [summarized by the author] depend on patent owners choosing royalties using posted prices, generating total royalties above the bundled monopoly level. When IP owners and producers engage in bargaining, the present analysis suggests that total royalties will be less than the bundled monopoly level. Efficiencies in choosing licensing royalties should mitigate concerns about the effects of SEPs on total royalties when patent licensing involves bargaining. The present analysis further suggests bargaining should reduce or eliminate concerns about SEP “holdup”. Efficiencies in choosing patent licensing royalties also should help mitigate concerns about whether or not SSOs choose efficient technology standards.

Royalty Stacking

“Royalty stacking” refers to the situation in which total royalties are excessive in comparison to some benchmark, typically the bundled monopoly rate. . . . The present analysis shows that the perceived royalty stacking problem is due to the posted prices assumption in Cournot’s model. . . . The present analysis shows that royalty stacking need not occur with different market institutions, notably bargaining between IP owners and producers. In particular, with non-cooperative licensing offers and negotiation of royalty rates between IP owners and producers, total royalties will be less than the royalties chosen by a bundled monopoly IP owner. The result that total royalties are less than the bundled monopoly benchmark holds even if there are many patented inventions. Total royalties are less than the benchmark with innovative complements and substitutes.

Patent Thickets

The patent thickets view considers patents as deterrents to innovation. This view differs substantially from the view that patents function as property rights that stimulate innovation. . . . The bargaining analysis presented here suggests that multiple patents should not be viewed as deterring innovation. Multiple inventors can coordinate with producers through market transactions. This means that by making licensing offers to producers and negotiating patent royalties, inventors and producers can achieve efficient outcomes. There is no need for government regulation to restrict the total number of patents. Arbitrarily limiting the total number of patents by various regulatory mechanisms would likely discourage invention and innovation.

Tragedy of the Anticommons

The “Tragedy of the Anticommons” describes the situation in which dispersed ownership of complementary inventions results in underuse of resources[.] . . . . The present analysis shows that patents need not create excess property rights when there is bargaining between IP owners and producers. Bargaining results in a total output that maximizes the joint returns of inventors and producers. Social welfare and final output are greater with bargaining than in Cournot’s posted prices model. This contradicts the “Tragedy of the Anticommons” result and shows that there need not be underutilization of resources due to high royalties.

Copyright law, ever a sore point in some quarters, has found a new field of battle in the FCC’s recent set-top box proposal. At the request of members of Congress, the Copyright Office recently wrote a rather thorough letter outlining its view of the likely effects of the FCC’s proposal on rightsholders.

In sum, the Copyright Office’s letter was an even-handed look at the proposal, which concluded:

As a threshold matter, it seems critical that any revised proposal respect the authority of creators to manage the exploitation of their copyrighted works through private licensing arrangements, because regulatory actions that undermine such arrangements would be inconsistent with the rights granted under the Copyright Act.

This fairly uncontroversial statement of basic legal principle was met with cries of alarm. And Stanford’s CIS had a post from Affiliated Scholar Annemarie Bridy that managed to trot out breathless comparisons to inapposite legal theories while simultaneously misconstruing the “fair use” doctrine (as well as how Copyright law works in the video market, for that matter).

Look out! Lochner is coming!

In its letter the Copyright Office warned the FCC that its proposed rules have the potential to disrupt the web of contracts that underlie cable programming, and by extension, risk infringing the rights of copyright holders to commercially exploit their property. This analysis actually tracks what Geoff Manne and I wrote in both our initial comment and our reply comment to the set-top box proposal.

Yet Professor Bridy seems to believe that, notwithstanding the guarantees of both the Constitution and Section 106 of the Copyright Act, the FCC should have the power to abrogate licensing contracts between rightsholders and third parties.  She believes that

[t]he Office’s view is essentially that the Copyright Act gives right holders not only the limited range of rights enumerated in Section 106 (i.e., reproduction, preparation of derivative works, distribution, public display, and public performance), but also a much broader and more amorphous right to “manage the commercial exploitation” of copyrighted works in whatever ways they see fit and can accomplish in the marketplace, without any regulatory interference from the government.

What in the world does this even mean? A necessary logical corollary of the Section 106 rights includes the right to exploit works commercially as rightsholders see fit. Otherwise, what could it possibly mean to have the right to control the reproduction or distribution of a work? The truth is that Section 106 sets out a general set of rights that inhere in rightsholders with respect to their protected works, and that commercial exploitation is merely a subset of this total bundle of rights.

The ability to contract with other parties over these rights is also a necessary corollary of the property rights recognized in Section 106. After all, the right to exclude implies by necessity the right to include. Which is exactly what a licensing arrangement is.

But wait, there’s more — she actually managed to pull out the Lochner bogeyman to validate her argument!

The Office’s absolutist logic concerning freedom of contract in the copyright licensing domain is reminiscent of the Supreme Court’s now-infamous reasoning in Lochner v. New York, a 1905 case that invalidated a state law limiting maximum working hours for bakers on the ground that it violated employer-employee freedom of contract. The Court in Lochner deprived the government of the ability to provide basic protections for workers in a labor environment that subjected them to unhealthful and unsafe conditions. As Julie Cohen describes it, “‘Lochner’ has become an epithet used to characterize an outmoded, over-narrow way of thinking about state and federal economic regulation; it goes without saying that hardly anybody takes the doctrine it represents seriously.”

This is quite a leap of logic, as there is precious little in common between the letter from the Copyright Office and the Lochner opinion aside from the fact that both contain the word “contracts” in their pages.  Perhaps the most critical problem with Professor Bridy’s analogy is the fact that Lochner was about a legislature interacting with the common law system of contract, whereas the FCC is a body subordinate to Congress, and IP is both constitutionally and statutorily guaranteed. A sovereign may be entitled to interfere with the operation of common law, but an administrative agency does not have the same sort of legal status as a legislature when redefining general legal rights.

The key argument that Professor Bridy offered in support of her belief that the FCC should be free to abrogate contracts at will is that “[r]egulatory limits on private bargains may come in the form of antitrust laws or telecommunications laws or, as here, telecommunications regulations that further antitrust ends.”  However, this completely misunderstands U.S. constitutional doctrine.

In particular, as Geoff Manne and I discussed in our set-top box comments to the FCC, using one constitutional clause to end-run another constitutional clause is generally a no-no:

Regardless of whether or how well the rules effect the purpose of Sec. 629, copyright violations cannot be justified by recourse to the Communications Act. Provisions of the Communications Act — enacted under Congress’s Commerce Clause power — cannot be used to create an end run around limitations imposed by the Copyright Act under the Constitution’s Copyright Clause. “Congress cannot evade the limits of one clause of the Constitution by resort to another,” and thus neither can an agency acting within the scope of power delegated to it by Congress. Establishing a regulatory scheme under the Communications Act whereby compliance by regulated parties forces them to violate content creators’ copyrights is plainly unconstitutional.

Congress is of course free to establish the implementation of the Copyright Act as it sees fit. However, unless Congress itself acts to change that implementation, the FCC — or any other party — is not at liberty to interfere with rightsholders’ constitutionally guaranteed rights.

You Have to Break the Law Before You Raise a Defense

Another bone of contention upon which Professor Bridy gnaws is a concern that licensing contracts will abrogate an alleged right to “fair use” by making the defense harder to muster:  

One of the more troubling aspects of the Copyright Office’s letter is the length to which it goes to assert that right holders must be free in their licensing agreements with MVPDs to bargain away the public’s fair use rights… Of course, the right of consumers to time-shift video programming for personal use has been enshrined in law since Sony v. Universal in 1984. There’s no uncertainty about that particular fair use question—none at all.

The major problem with this reasoning (notwithstanding the somewhat misleading drafting of Section 107) is that “fair use” is not an affirmative right, it is an affirmative defense. Despite claims that “fair use” is a right, the Supreme Court has noted on at least two separate occasions (1, 2) that Section 107 was “structured… [as]… an affirmative defense requiring a case-by-case analysis.”

Moreover, important as the Sony case is, it does not establish that “[t]here’s no uncertainty about [time-shifting as a] fair use question—none at all.” What it actually establishes is that, given the facts of that case, time-shifting was a fair use. Not for nothing does the Sony Court note at the outset of its opinion that

An explanation of our rejection of respondents’ unprecedented attempt to impose copyright liability upon the distributors of copying equipment requires a quite detailed recitation of the findings of the District Court.

But more generally, the Sony doctrine stands for the proposition that:

“The limited scope of the copyright holder’s statutory monopoly, like the limited copyright duration required by the Constitution, reflects a balance of competing claims upon the public interest: creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts. The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’ Fox Film Corp. v. Doyal, 286 U. S. 123, 286 U. S. 127. See Kendall v. Winsor, 21 How. 322, 62 U. S. 327-328; Grant v. Raymond, 6 Pet. 218, 31 U. S. 241-242. When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 422 U. S. 156 (1975) (footnotes omitted).

In other words, courts must balance competing interests to maximize “the general benefits derived by the public,” subject to technological change and other criteria that might shift that balance in any particular case.  

Thus, even as an affirmative defense, nothing is guaranteed. The court will have to walk through a balancing test, and only after that point, and if the accused party’s behavior has not tipped the scales against herself, will the court find the use a “fair use.”  

As I noted before,

Not surprisingly, other courts are inclined to follow the Supreme Court. Thus the Eleventh Circuit, the Southern District of New York, and the Central District of California (here and here), to name but a few, all explicitly refer to fair use as an affirmative defense. Oh, and the Ninth Circuit did too, at least until Lenz.

The Lenz case was an interesting one because, despite the above-noted Supreme Court precedent treating “fair use” as a defense, it is one of the very few cases that have held “fair use” to be an affirmative right (in that case, the court decided that Section 512 of the DMCA required consideration of “fair use” as a part of filling out a take-down notice). And in doing so, it too tried to rely on Sony to restructure the nature of “fair use.” But as I have previously written, “[i]t bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas Acuff Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.”

Further, even the Eleventh Circuit, which the Ninth relied upon in Lenz, later clarified its position that the above-noted Supreme Court precedent definitely binds lower courts, and that “fair use” is in fact an affirmative defense.

Thus, to say that rightsholders’ licensing contracts somehow impinge a “right” of fair use completely puts the cart before the horse. Remember, as an affirmative defense, “fair use” is an excuse for otherwise infringing behavior, and rightsholders are well within their constitutional and statutory rights to avoid potential infringing uses.

Think about it this way. When you commit a crime you can raise a defense: for instance, an insanity defense. But just because you might be excused for committing a crime if a court finds you were not operating with full faculties, this does not entitle every insane person to go out and commit that crime. The insanity defense can be raised only after a crime is committed, and at that point it will be examined by a judge and jury to determine if applying the defense furthers the overall criminal law scheme.

“Fair use” works in exactly the same manner. And even though Sony described how time- and space-shifting were potentially permissible, it did so only by determining on those facts that the balancing test came out to allow it. So, maybe a particular time-shifting use would be “fair use.” But maybe not. More likely, in this case, even the allegedly well-established “fair use” of time-shifting in the context of today’s digital media, on-demand programming, Netflix, and the like may not meet that burden.

And what this means is that a rightsholder does not have an ex ante obligation to consider whether a particular contractual clause might in some fashion or other give rise to a “fair use” defense.

The contrary point of view makes no sense. Because “fair use” is a defense, forcing parties to build “fair use” considerations into their contractual negotiations essentially requires them to build in an allowance for infringement — and one that a court might or might not ever find appropriate in light of the requisite balancing of interests. That just can’t be right.

Instead, I think this article is just a piece of the larger IP-skeptic movement. I suspect that when “fair use” was in its initial stages of development, it was intended as a fairly gentle softening of the limits of intellectual property – something like the “public necessity” doctrine in common law with respect to real property and trespass. However, that is just not how “fair use” advocates see it today. As Geoff Manne has noted, the idea of “permissionless innovation” has wrongly come to mean “no contracts required (or permitted)”:

[Permissionless innovation] is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright…, which otherwise requires those pernicious licenses (i.e., permission) from others.

But this position is nonsense — intangible property is still property. And at root, property is just a set of legal relations between persons that defines their rights and obligations with respect to some “thing.” It doesn’t matter if you can hold that thing in your hand or not. As property, IP can be subject to transfer and control through voluntarily created contracts.

Even if “fair use” were some sort of as-yet unknown fundamental right, it would still be subject to limitations upon it by other rights and obligations. To claim that “fair use” should somehow trump the right of a property holder to dispose of the property as she wishes is completely at odds with our legal system.

Last week the International Center for Law & Economics and I filed an amicus brief in the DC Circuit in support of en banc review of the court’s decision to uphold the FCC’s 2015 Open Internet Order.

In our previous amicus brief before the panel that initially reviewed the OIO, we argued, among other things, that

In order to justify its Order, the Commission makes questionable use of important facts. For instance, the Order’s ban on paid prioritization ignores and mischaracterizes relevant record evidence and relies on irrelevant evidence. The Order also omits any substantial consideration of costs. The apparent necessity of the Commission’s aggressive treatment of the Order’s factual basis demonstrates the lengths to which the Commission must go in its attempt to fit the Order within its statutory authority.

Our brief supporting en banc review builds on these points to argue that

By reflexively affording substantial deference to the FCC in affirming the Open Internet Order (“OIO”), the panel majority’s opinion is in tension with recent Supreme Court precedent….

The panel majority need not have, and arguably should not have, afforded the FCC the level of deference that it did. The Supreme Court’s decisions in State Farm, Fox, and Encino all require a more thorough vetting of the reasons underlying an agency change in policy than is otherwise required under the familiar Chevron framework. Similarly, Brown and Williamson, Utility Air Regulatory Group, and King all indicate circumstances in which an agency construction of an otherwise ambiguous statute is not due deference, including when the agency interpretation is a departure from longstanding agency understandings of a statute or when the agency is not acting in an expert capacity (e.g., its decision is based on changing policy preferences, not changing factual or technical considerations).

In effect, the panel majority based its decision whether to afford the FCC deference upon deference to the agency’s poorly supported assertions that it was due deference. We argue that this is wholly inappropriate in light of recent Supreme Court cases.

Moreover,

The panel majority failed to appreciate the importance of granting Chevron deference to the FCC. That importance is most clearly seen at an aggregate level. In a large-scale study of every Court of Appeals decision between 2003 and 2013, Professors Kent Barnett and Christopher Walker found that a court’s decision to defer to agency action is uniquely determinative in cases where, as here, an agency is changing established policy.

Kent Barnett & Christopher J. Walker, Chevron In the Circuit Courts 61, Figure 14 (2016), available at ssrn.com/abstract=2808848.

Figure 14 from Barnett & Walker, as reproduced in our brief.

As that study demonstrates,

agency decisions to change established policy tend to present serious, systematic defects — and [thus that] it is incumbent upon this court to review the panel majority’s decision to reflexively grant Chevron deference. Further, the data underscore the importance of the Supreme Court’s command in Fox and Encino that agencies show good reason for a change in policy; its recognition in Brown & Williamson and UARG that departures from existing policy may fall outside of the Chevron regime; and its command in King that policies not made by agencies acting in their capacity as technical experts may fall outside of the Chevron regime. In such cases, the Court essentially holds that reflexive application of Chevron deference may not be appropriate because these circumstances may tend toward agency action that is arbitrary, capricious, in excess of statutory authority, or otherwise not in accordance with law.

As we conclude:

The present case is a clear example where greater scrutiny of an agency’s decision-making process is both warranted and necessary. The panel majority all too readily afforded the FCC great deference, despite the clear and unaddressed evidence of serious flaws in the agency’s decision-making process. As we argued in our brief before the panel, and as Judge Williams recognized in his partial dissent, the OIO was based on factually inaccurate, contradicted, and irrelevant record evidence.

Read our full — and very short — amicus brief here.

On August 6, the Global Antitrust Institute (the GAI, a division of the Antonin Scalia Law School at George Mason University) submitted a filing (GAI filing or filing) in response to the Japan Fair Trade Commission’s (JFTC’s) consultation on reforms to the Japanese system of administrative surcharges assessed for competition law violations (see here for a link to the GAI’s filing).  The GAI’s outstanding filing was authored by GAI Director Koren Wong-Ervin and Professors Douglas Ginsburg, Joshua Wright, and Bruce Kobayashi of the Scalia Law School.

The GAI filing’s three sets of major recommendations, set forth in italics, are as follows:

(1)   Due Process

 While the filing recognizes that the process may vary depending on the jurisdiction, the filing strongly urges the JFTC to adopt the core features of a fair and transparent process, including:   

(a)        Legal representation for parties under investigation, allowing the participation of local and foreign counsel of the parties’ choosing;

(b)        Notifying the parties of the legal and factual bases of an investigation and sharing the evidence on which the agency relies, including any exculpatory evidence and excluding only confidential business information;

(c)        Direct and meaningful engagement between the parties and the agency’s investigative staff and decision-makers;

(d)        Allowing the parties to present their defense to the ultimate decision-makers; and

(e)        Ensuring checks and balances on agency decision-making, including meaningful access to independent courts.

(2)   Calculation of Surcharges

The filing agrees with the JFTC that Japan’s current inflexible system of surcharges is unlikely to accurately reflect the degree of economic harm caused by anticompetitive practices.  As a general matter, the filing recommends that under Japan’s new surcharge system, surcharges imposed should rely upon economic analysis, rather than using sales volume as a proxy, to determine the harm caused by violations of Japan’s Antimonopoly Act.   

In that light, and more specifically, the filing recommends that the JFTC limit punitive surcharges to matters in which:

(a)          the antitrust violation is clear (i.e., if considered at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect the conduct at issue would likely be illegal) and is without any plausible efficiency justification;

(b)          it is feasible to articulate and calculate the harm caused by the violation;

(c)           the measure of harm calculated is the basis for any fines or penalties imposed; and

(d)          there are no alternative remedies that would adequately deter future violations of the law. 

In the alternative, and at the very least, the filing urges the JFTC to expand the circumstances under which it will not seek punitive surcharges to include two types of conduct that are widely recognized as having efficiency justifications:

  • unilateral conduct, such as refusals to deal and discriminatory dealing; and
  • vertical restraints, such as exclusive dealing, tying and bundling, and resale price maintenance.

(3)   Settlement Process

The filing recommends that the JFTC consider incorporating safeguards that prevent settlement provisions unrelated to the violation and limit the use of extended monitoring programs.  The filing notes that consent decrees and commitments extracted to settle a case too often end up imposing abusive remedies that undermine the welfare-enhancing goals of competition policy.  An agency’s ability to obtain in terrorem concessions reflects a party’s weighing of the costs and benefits of litigating versus the costs and benefits of acquiescing in the terms sought by the agency.  When firms settle merely to avoid the high relative costs of litigation and regulatory procedures, an agency may be able to extract more restrictive terms on firm behavior by entering into an agreement than by litigating its accusations in a court.  In addition, while settlements may be a more efficient use of scarce agency resources, the savings may come at the cost of potentially stunting the development of the common law arising through adjudication.

In sum, the latest filing maintains the GAI’s practice of employing law and economics analysis to recommend reforms in the imposition of competition law remedies (see here, here, and here for summaries of prior GAI filings in the same vein).  The GAI’s dispassionate analysis highlights principles of universal application – principles that may someday point the way toward greater, economically sensible convergence among national antitrust remedial systems.

Background

In addition to reforming substantive antitrust doctrine, the Supreme Court in recent decades succeeded in curbing the unwarranted costs of antitrust litigation by erecting new procedural barriers to highly questionable antitrust suits.  It did this principally through three key “gatekeeper” decisions, Monsanto (1984), Matsushita (1986), and Twombly (2007).

Prior to those holdings, bare allegations in a complaint typically were sufficient to avoid dismissal.  Furthermore, summary judgment was very hard to obtain, given the Supreme Court’s pronouncement in Poller v. CBS (1962) that “summary procedures should be used sparingly in complex antitrust litigation.”  Thus, plaintiffs had a strong incentive to file dubious (if not meritless) antitrust suits, in the hope of coercing unwarranted settlements from defendants faced with the prospect of burdensome, extended antitrust litigation – litigation that could impose serious business reputational costs over time, in addition to direct and indirect litigation costs.

This all changed starting in 1984.  Monsanto required that a plaintiff show a “conscious commitment to a common scheme designed to achieve an unlawful objective” to support a Sherman Act Section 1 (Section 1) antitrust conspiracy allegation.  Building on Monsanto, Matsushita held that “conduct as consistent with permissible competition as with illegal conspiracy does not, standing alone, support an inference of antitrust conspiracy.”  In Twombly, the Supreme Court made it easier to succeed on a motion to dismiss a Section 1 complaint, holding that mere evidence of parallel conduct does not establish a conspiracy.  Rather, under Twombly, a plaintiff seeking relief under Section 1 must allege, at a minimum, the general contours of when an agreement was made and must support those allegations with a context that tends to make such an agreement plausible.  (The Twombly Court’s approval of motions to dismiss as a tool to rein in excessive antitrust litigation costs was implicit in its admonition not to “forget that proceeding to antitrust discovery can be expensive.”)

In sum, as Professor Herbert Hovenkamp has put it, “[t]he effects of Twombly and Matsushita has [sic] been a far-reaching shift in the way antitrust cases proceed, and today a likely majority are dismissed on the pleadings or summary judgment before going to trial.”

Visa v. Osborn

So far, so good.  Trial lawyers never rest, however, and old lessons sometimes need to be relearned, as demonstrated by the D.C. Circuit’s strange opinion in Visa v. Osborn (2015).

Visa v. Osborn is a putative class action filed against Visa, MasterCard, and three banks, resting on a bare-bones complaint that alleges that similar automatic teller machine pricing rules imposed by Visa and MasterCard were part of a price-fixing conspiracy among the banks and the credit card companies.  As I explained in my recent Competition Policy International article discussing this case, plaintiffs neither alleged any facts indicating any communications among defendants, nor did they suggest anything to undermine the very real possibility that the credit card firms separately adopted the rules as being in their independent self-interest.  In short, there is nothing in the complaint indicating that allegations of an anticompetitive agreement are plausible, and, as such, Twombly dictates that the complaint must be dismissed.  Amazingly, however, a D.C. Circuit panel held that the mere allegation “that the member banks used the bankcard associations to adopt and enforce” the purportedly anticompetitive access fee rule was “enough to satisfy the plausibility standard” required to survive a motion to dismiss.

Fortunately, the D.C. Circuit’s Osborn holding (which, in addition to being ill-reasoned, is inconsistent with Third, Fourth, and Ninth Circuit precedents) attracted the eye of the Supreme Court, which granted certiorari on June 28.  Specifically, the Supreme Court agreed to resolve the question “[w]hether allegations that members of a business association agreed to adhere to the association’s rules and possess governance rights in the association, without more, are sufficient to plead the element of conspiracy in violation of Section 1 of the Sherman Act, . . . or are insufficient, as the Third, Fourth, and Ninth Circuits have held.”

Conclusion

As I concluded in my Competition Policy International article:

Business associations bestow economic benefits on society through association rules that enable efficient cooperative activities.  Subjecting association members to potential antitrust liability merely for signing on to such rules and participating in association governance would substantially chill participation in associations and undermine the development of new and efficient forms of collaboration among businesses.  Such a development would reduce economic dynamism and harm both producers and consumers.  By decisively overruling the D.C. Circuit’s flawed decision in Osborn, the Supreme Court would preclude a harmful form of antitrust risk and establish an environment in which fruitful business association decision-making is granted greater freedom, to the benefit of the business community, consumers, and the overall economy.  

In addition, and more generally, the Court may wish to remind litigants that the antitrust litigation gatekeeper function laid out in Monsanto, Matsushita, and Twombly remains as strong and as vital as ever.  In so doing, the Court would reaffirm that motions to dismiss and summary judgment motions remain critically important tools needed to curb socially costly abusive antitrust litigation.

Background

Recently, an increasing amount of scholarship has focused on the excessive costs of occupational licensing, which too frequently serves merely as a protectionist state-created barrier to entry that arbitrarily prevents individuals (and, in particular, low-income individuals) from earning a living in their chosen field.  A 2015 White House report explains that occupational licensing restrictions have rapidly proliferated over the last six decades.  It notes that, while carefully crafted licensing schemes “can benefit consumers through higher quality services and improved health and safety standards”, too often licensing has been imposed without regard to its economic costs, in a manner that harms both workers and consumers:

Over the past several decades, the share of U.S. workers holding an occupational license has grown sharply.  When designed and implemented carefully, licensing can offer important health and safety protections to consumers, as well as benefits to workers. However, the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing. In some cases, alternative forms of occupational regulation, such as State certification, may offer a better balance between consumer protections and flexibility for workers. . . .

[What’s worse,] [m]ore than one-quarter of U.S. workers now require a license to do their jobs, with most of these workers licensed by the States. The share of workers licensed at the State level has risen five-fold since the 1950s.  About two-thirds of this change stems from an increase in the number of professions that require a license, with the remaining growth coming from changing composition of the workforce. . . .

Research shows that by imposing additional requirements on people seeking to enter licensed professions, licensing can reduce total employment in the licensed professions.  Estimates find that unlicensed workers earn 10 to 15 percent lower wages than licensed workers with similar levels of education, training, and experience.  Licensing laws also lead to higher prices for goods and services, with research showing effects on prices of between 3 and 16 percent. Moreover, in a number of other studies, licensing did not increase the quality of goods and services, suggesting that consumers are sometimes paying higher prices without getting improved goods or services.

Articles by Heritage Foundation scholars have explored the public choice explanations for licensing schemes (see here for a thorough treatment of this topic by Paul Larkin) and discussed possible constitutional (equal protection) and antitrust theories that might be deployed to challenge blatantly protectionist licensing schemes (for example, see here for a Legal Memorandum by Paul Larkin and me, and see here for a law review commentary by me).

Lawsuits challenging purely protectionist occupational licensing restraints certainly merit being pursued.   Realistically, however, such suits can at best only slightly constrain harmful occupational licensing, given the costly, case-by-case nature of litigation and doctrinal limitations on the application of antitrust and constitutional theories.  The widespread repeal (or substantial reform) of harmful state occupational licensing laws is the ideal long-term solution to the problem, but political constraints (the self-interested coalitions representing occupational cartels may be expected to oppose such change) suggest that state legislative reform will move slowly in the near term.

There are, however, two very recent legislative developments that give cause for hope – the introduction of the ALLOW Act and the release of the Model Occupational Board Reform Act.  Properly publicized, they may become the focus of reform discussions and prove to be harbingers of future legislative initiatives aimed at reining in excessive licensing restraints.

The ALLOW Act

The ALLOW Act, co-sponsored by Senators Mike Lee (R-UT) and Ben Sasse (R-NE) and introduced in the Senate on July 12 (they also participated in a program at Heritage that day discussing the issue), “would make it easier for many Americans to begin work in their chosen field by reducing unnecessary licensing burdens.”  The ALLOW Act has three principal features, summarized by Senator Lee:

The Alternatives to Licensing that Lower Obstacles to Work (ALLOW) Act reduces the anticompetitive impact of unjustifiable licensing requirements by making targeted changes to licensure policies.  The Act:

Serves as a model for reform in the states by limiting the creation of occupational license requirements in the District [of Columbia] only to those circumstances in which it is the least restrictive means of protecting the public health, safety or welfare, and makes it District policy to limit the enforcement of a license requirement only to the sale of those goods and services expressly listed in the statute or regulations defining an occupation’s “scope of practice.”

Promotes less restrictive requirements, such as public and private certification.

Provides for the creation of a dedicated office in the District Attorney General’s Office, or within each relevant District agency, responsible for the active supervision of occupational boards.

Provides for legislative oversight, with a “sunrise review” when considering new proposed licensing requirements to evaluate the possible negative impacts on workers and economic growth, along with possible less restrictive regulations.

And provides legislative “sunset review,” which applies the same analysis of net benefits and possible alternatives to existing occupational licensing laws in the District, with the goal of reviewing all such laws and proposing appropriate modifications over a five year period.

Harmonizes occupational entry requirements by providing endorsement on military bases of occupational licenses and public certifications issued in any state in order to promote workforce attachment for military spouses who are disproportionately affected by the patchwork of state licensing laws as they move with their enlisted spouse from post-to-post.  This approach will increase workforce mobility and labor market efficiency.

Emphasizes certification as an alternative approach to licensure by eliminating the need to obtain prior government approvals to speak about our Nation’s military, political and cultural history while offering tour guide services for a fee within National Military Parks and Battlefields, or the National Mall and Memorial Parks.

The Model Occupational Board Reform Act

The Model Occupational Board Reform Act (Model Act), developed by the American Legislative Exchange Council, has four key features:

The State will use the least restrictive regulation necessary to protect consumers from present, significant and substantiated harms that threaten public health and safety.

An occupational regulation may be enforced against an individual only to the extent the individual sells goods and services that are included explicitly in the statute that defines the occupation’s scope of practice.

The attorney general will establish an office of supervision of occupational boards. The office is responsible for actively supervising state occupational boards.

The legislature will establish a position in its nonpartisan research staff to analyze occupational regulations. The position is responsible for reviewing legislation and laws related to occupational regulations.

In short, the Model Act enlists the offices of state attorneys general (which typically also enforce state antitrust laws and have some familiarity with justifications for promoting competition) in actively supervising state occupational licensing boards.  This is in harmony with the Supreme Court’s 2015 North Carolina Dental Board decision, which upheld antitrust scrutiny of boards that are not actively supervised.  The Model Act also places an onus on state regulators to curb the breadth of the application of licensing rules, and to specially scrutinize new laws that affect occupational licensing.  Overall, then, the Model Act takes constructive steps to limit the scope of harmful occupational licensing restrictions, in a manner that may prove politically more feasible than wholesale statutory repeal.  (Another hopeful sign is the recent introduction of occupational licensing reform proposals in a number of states.)

Conclusion

Although the plague of proliferating harmful protectionist occupational licensing restrictions is still with us, we may be approaching a turning point.  Recent scholarship on the substantial harm of such restraints, and, in particular, their burden on lower income Americans seeking employment, has come to the attention of policymakers.  As a result, legislative efforts to curb the overregulation of occupational licensing are being developed and are receiving public attention.  Although we are far from winning the war on welfare-inimical occupational licensing, perhaps this is “the end of the beginning” of the public policy struggle.

In recent years much ink has been spilled on the problem of online privacy breaches, involving the unauthorized use of personal information transmitted over the Internet.  Internet privacy concerns are warranted.  According to a 2016 National Telecommunications and Information Administration survey of Internet-using households, 19 percent of such households (representing nearly 19 million households) reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior to the July 2015 survey.  Security breaches appear to be more common among the most intensive Internet-using households – 31 percent of those using at least five different types of online devices suffered such breaches.  Security breach statistics, of course, do not directly measure the consumer welfare losses attributable to the unauthorized use of personal data that consumers supply to Internet service providers and to the websites which they visit.

What is the correct overall approach government should take in dealing with Internet privacy problems?  In addressing this question, it is important to focus substantial attention on the effects of online privacy regulation on economic welfare.  In particular, policies should aim at addressing Internet privacy problems in a manner that does not unduly harm the private sector or deny opportunities to consumers who are not being harmed.  The U.S. Federal Trade Commission (FTC), the federal government’s primary consumer protection agency, has been the principal federal regulator of online privacy practices.  Very recently, however, the U.S. Federal Communications Commission (FCC) has asserted the authority to regulate the privacy practices of broadband Internet service providers, and is proposing an extremely burdensome approach to such regulation that would, if implemented, have harmful economic consequences.

In March 2016, FTC Commissioner Maureen Ohlhausen succinctly summarized the FTC’s general approach to online privacy-related enforcement under Section 5 of the FTC Act, which proscribes unfair or deceptive acts or practices:

[U]nfairness establishes a baseline prohibition on practices that the overwhelming majority of consumers would never knowingly approve. Above that baseline, consumers remain free to find providers that match their preferences, and our deception authority governs those arrangements. . . .  The FTC’s case-by-case enforcement of our unfairness authority shapes our baseline privacy practices.  Like the common law, this incremental approach has proven both relatively predictable and adaptable as new technologies and business models emerge.

In November 2015, Professor (and former FTC Commissioner) Joshua Wright argued that the FTC’s approach is insufficiently attuned to economic analysis, in particular to the “tradeoffs between the value to consumers and society of the free flow and exchange of data and the creation of new products and services on the one hand, against the value lost by consumers from any associated reduction in privacy.”  Nevertheless, on balance, FTC enforcement in this area generally is restrained and somewhat attentive to cost-benefit considerations.  (This undoubtedly reflects the fact (see my Heritage Legal Memorandum, here) that the statutory definition of “unfairness” in Section 5(n) of the FTC Act embodies cost-benefit analysis, and that the FTC’s Policy Statement on Deception requires detriment to consumers acting reasonably in the circumstances.)  In other words, federal enforcement policy with respect to online privacy, although it could be improved, is in generally good shape.

Or it was in good shape.  Unfortunately, on April 1, 2016, the FCC decided to inject itself into the “privacy space” by issuing a Notice of Proposed Rulemaking entitled “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services.”  This “Privacy NPRM” sets forth detailed rules that, if adopted, would impose onerous privacy obligations on “Broadband Internet Access Service” (BIAS) Providers, the firms that provide the cables, wires, and telecommunications equipment through which Internet traffic flows – primarily cable (Comcast, for example) and telephone (Verizon, for example) companies.  The Privacy NPRM builds on the FCC’s 2015 reclassification of BIAS provision as a “common carrier” service, a reclassification that totally precludes the FTC from regulating BIAS Providers’ privacy practices (since the FTC is barred by law from regulating common carriers, under 15 U.S. Code § 45(a)(2)).  Put simply, the NPRM would require BIAS Providers “to obtain express consent in advance of practically every use of a customer[’s] data,” without regard to the effects of such a requirement on economic welfare.  All other purveyors of Internet services, however – in particular, the large numbers of “edge providers” that generate Internet content and services (Google, Amazon, and Facebook, for example) – would be exempt from the new FCC regulatory requirements.  In short, the Privacy NPRM would establish a two-tier privacy regulatory system, with BIAS Providers subject to tight FCC privacy rules, while all other Internet service firms would remain subject to more nuanced, case-by-case, effects-based evaluation of their privacy practices by the FTC.  This disparate regulatory approach is peculiar (if not wholly illogical), since edge providers in general have greater access than BIAS Providers to consumers’ non-public information, and thus may appear to pose a greater threat to consumers’ privacy interests.

The FCC’s proposal to regulate BIAS Providers’ privacy practices represents bad law and horrible economic policy.  First, it undermines the rule of law by extending the FCC’s authority beyond its congressional mandate.  It does this by basing its regulation of a huge universe of information exchanges on Section 222 of the Communications Act (a provision added by the Telecommunications Act of 1996), a narrow provision aimed at a very limited type of customer-related data obtained in connection with old-style voice telephony transmissions.  This is egregious regulatory overreach.  Second, if implemented, it will harm consumers, producers, and the overall economy by imposing a set of sweeping opt-in consent requirements on BIAS Providers, without regard to private sector burdens or actual consumer welfare (see here); by reducing BIAS Provider revenues and thereby dampening investment that is vital to the continued growth of and innovation in Internet-related industries (see here); by reducing the ability of BIAS Providers to exert welfare-enhancing competitive pressure on Internet edge providers (see here); and by raising consumer prices for Internet services and denying discount programs desired by consumers (see here).

What’s worse, the FCC’s proposed involvement in online privacy oversight comes at a time of increased Internet privacy regulation by foreign countries, much of it highly intrusive and lacking in economic sophistication.  A particularly noteworthy effort to clarify cross-national legal standards is the Privacy Shield, a 2016 United States – European Union agreement that establishes regulatory online privacy protection norms, backed by FTC enforcement, that U.S. companies transmitting data into Europe may choose to accept on a voluntary basis.  (If they do not accede to the Shield, they may be subject to uncertain and heavy-handed European sanctions.)  The Privacy NPRM, if implemented, will create an additional concern for BIAS Providers, since they will have to evaluate the implications of new FCC regulation (rather than simply rely on FTC oversight) in deciding whether to opt in to the Shield’s standards and obligations.

In sum, the FCC’s Privacy NPRM would, if implemented, harm consumers and producers, slow innovation, and offend the rule of law.  This prompts four recommendations.

  • The FCC should withdraw the NPRM and leave it to the FTC to oversee all online privacy practices, under its Section 5 unfairness and deception authority. The adoption of the Privacy Shield, which designates the FTC as the responsible American privacy oversight agency, further strengthens the case against FCC regulation in this area. 
  • In overseeing online privacy practices, the FTC should employ a very light touch that stresses economic analysis and cost-benefit considerations. Moreover, it should avoid locking rigid privacy policy conditions in place for long periods through consent decrees, in order to allow changing market conditions to shape and improve business privacy policies. 
  • The FTC should also borrow a page from former Commissioner Joshua Wright by implementing an “economic approach” to privacy. Under such an approach:  

      o FTC economists would help make the Commission a privacy “thought leader” by developing a rigorous academic research agenda on the economics of privacy, featuring the economic evaluation of industry sectors and practices; 
      o the FTC would bear the burden of proof of showing that violations of a company’s privacy policy are material to consumer decision-making;
      o FTC economists would report independently to the FTC about proposed privacy-related enforcement initiatives; and
      o the FTC would publish the views of its Bureau of Economics in all privacy-related consent decrees that are placed on the public record.

  • The FTC should encourage the European Commission and other foreign regulators to take into account the economics of privacy in developing their privacy regulatory policies. In so doing, it should emphasize that innovation is harmed, the beneficial development of the Internet is slowed, and consumer welfare and rights are undermined through highly prescriptive regulation in this area (well-intentioned though it may be).  Relatedly, the FTC and other U.S. Government negotiators should argue against adoption of a “one-size-fits-all” global privacy regulation framework.   Such a global framework could harmfully freeze into place over-regulatory policies and preclude beneficial experimentation in alternative forms of “lighter-touch” regulation and enforcement. 

While no panacea, these recommendations would help deter (or, at least, constrain) the economically harmful government micromanagement of businesses’ privacy practices, in the United States and abroad.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened, and ultimately rejected, similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market in terms of the particular mechanism that Google happens to use to match consumers and advertisers ignores the substitutability of other mechanisms that accomplish the same thing, merely because those mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs, drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google derives its primary revenue from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.
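A stylized calculation (my own illustration, using hypothetical numbers not drawn from any filing in the case) shows why match quality drives ad revenue. An advertiser who earns $v$ per sale, and who believes that a given viewer will buy with probability $p$, should rationally bid up to

\[
b^{*} = v \times p
\]

per impression. If a widget sale is worth $v = \$50$, then improving audience targeting so that $p$ rises from $0.01$ to $0.05$ raises the advertiser’s maximum bid from $\$0.50$ to $\$2.50$ per impression, a fivefold increase attributable entirely to the quality of the match.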

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon, while only 13 percent started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And Google’s competitors (and complainants), instead of trying to hamstring Google, must innovate as well if they are to survive.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with photography itself, let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

The Global Antitrust Institute (GAI) at George Mason University Law School (officially the “Antonin Scalia Law School at George Mason University” as of July 1st) is doing an outstanding job of providing sound, law and economics-centered advice to foreign governments regarding their proposed antitrust laws and guidelines.

The GAI’s latest inspired filing, released on July 9 (July 9 Comment), concerns guidelines on the disgorgement of illegal gains and punitive fines for antitrust violations proposed by China’s National Development and Reform Commission (NDRC) – a powerful agency that has broad planning and administrative authority over the Chinese economy.  With respect to antitrust, the NDRC is charged with investigating price-related anticompetitive behavior and abuses of dominance.  (China has two other antitrust agencies: the State Administration for Industry and Commerce (SAIC), which investigates non-price-related monopolistic behavior, and the Ministry of Commerce (MOFCOM), which reviews mergers.)  The July 9 Comment stresses that the NDRC’s proposed Guidelines call for Chinese antitrust enforcers to impose punitive financial sanctions on conduct that is not necessarily anticompetitive and may be efficiency-enhancing – an approach that is contrary to sound economics.  In so doing, the July 9 Comment summarizes the economics of penalties, recommends that the NDRC employ economic analysis in considering sanctions, and offers specific suggested changes to the NDRC’s draft.  The Comment helpfully summarizes its analysis:

We respectfully recommend that the Draft Guidelines be revised to limit the application of disgorgement (or the confiscating of illegal gain) and punitive fines to matters in which: (1) the antitrust violation is clear (i.e., if measured at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect that the conduct at issue would likely be found to be illegal) and without any plausible efficiency justifications; (2) it is feasible to articulate and calculate the harm caused by the violation; (3) the measure of harm calculated is the basis for any fines or penalties imposed; and (4) there are no alternative remedies that would adequately deter future violations of the law.  In the alternative, and at the very least, we strongly urge the NDRC to expand the circumstances under which the Anti-Monopoly Enforcement Agencies (AMEAs) will not seek punitive sanctions such as disgorgement or fines to include two conduct categories that are widely recognized as having efficiency justifications: unilateral conduct such as refusals to deal and discriminatory dealing and vertical restraints such as exclusive dealing, tying and bundling, and resale price maintenance.

We also urge the NDRC to clarify how the total penalty, including disgorgement and fines, relate to the specific harm at issue and the theoretical optimal penalty.  As explained below, the economic analysis determines the total optimal penalties, which includes any disgorgement and fines.  When fines are calculated consistent with the optimal penalty framework, disgorgement should be a component of the total fine as opposed to an additional penalty on top of an optimal fine.  If disgorgement is an additional penalty, then any fines should be reduced relative to the optimal penalty.

Lastly, we respectfully recommend that the AMEAs rely on economic analysis to determine the harm caused by any violation.  When using proxies for the harm caused by the violation, such as using the illegal gains from the violations as the basis for fines or disgorgement, such calculations should be limited to those costs and revenues that are directly attributable to a clear violation.  This should be done in order to ensure that the resulting fines or disgorgement track the harms caused by the violation.  To that end, we recommend that the Draft Guidelines explicitly state that the AMEAs will use economic analysis to determine the but-for world, and will rely wherever possible on relevant market data.  When the calculation of illegal gain is unclear due to a lack of relevant information, we strongly recommend that the AMEAs refrain from seeking disgorgement.
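The optimal-penalty framework invoked in the quoted passage can be made concrete with the standard deterrence formula from the economics of penalties (a textbook Becker-style illustration of my own; the numbers are hypothetical and do not come from the July 9 Comment).  If a violation causes harm $H$ and is detected and punished with probability $q$, the total sanction that just deters the violation is

\[
F^{*} = \frac{H}{q}.
\]

Suppose $H = 100$ and $q = 0.25$, so that $F^{*} = 400$.  If disgorgement of illegal gains accounts for $D = 150$ of that total, the accompanying fine should be $F^{*} - D = 250$.  Imposing a fine of $400$ on top of disgorgement would raise the total sanction to $550$, over-deterring the conduct and thereby risking the chilling of efficient business practices – precisely the danger against which the Comment warns.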

The lack of careful economic analysis of the implications of disgorgement (which is really a financial penalty, viewed through an economic lens) is not confined to Chinese antitrust enforcers.  In recent years, the U.S. Federal Trade Commission (FTC) has shown an interest in more broadly employing disgorgement as an antitrust remedy, without fully weighing considerations of error costs and the deterrence of efficient business practices (see, for example, here and here).  Relatedly, the U.S. Department of Justice’s Antitrust Division has determined that disgorgement may be invoked as a remedy for a Sherman Antitrust Act violation, a position confirmed by a lower court (see, for example, here).  The general principles informing the thoughtful analysis delineated in the July 9 Comment could profitably be consulted by FTC and DOJ policy officials should they choose to reexamine their approach to disgorgement and other financial penalties.

More broadly, by emphasizing the importance of optimal sanctions and the economic analysis of business conduct, the July 9 Comment is in line with a cost-benefit framework for antitrust enforcement policy, rooted in decision theory – an approach that all antitrust agencies (including United States enforcers) should seek to adopt (see also here for an evaluation of the implicit decision-theoretic approach to antitrust employed by the U.S. Supreme Court under Chief Justice John Roberts).  Let us hope that DOJ, the FTC, and other government antitrust authorities around the world take to heart the benefits of decision-theoretic antitrust policy in evaluating (and, as appropriate, reforming) their enforcement norms.  Doing so would promote beneficial international convergence toward better enforcement policy and redound to the economic benefit of both producers and consumers.