On November 1st and 2nd, Cofece, the Mexican Competition Agency, hosted an International Competition Network (ICN) workshop on competition advocacy, featuring presentations from government agency officials, think tanks, and international organizations.  The workshop highlighted the excellent work that the ICN has done in supporting efforts to curb the most serious source of harm to the competitive process worldwide:  government enactment of anticompetitive regulatory schemes and guidance, often at the behest of well-connected, cronyist rent-seeking businesses that seek to protect their privileges by imposing costs on rivals.

The ICN describes the goal of its Advocacy Working Group in the following terms:

The mission of the Advocacy Working Group (AWG) is to undertake projects, to develop practical tools and guidance, and to facilitate experience-sharing among ICN member agencies, in order to improve the effectiveness of ICN members in advocating the dissemination of competition principles and to promote the development of a competition culture within society. Advocacy reinforces the value of competition by educating citizens, businesses and policy-makers. In addition to supporting the efforts of competition agencies in tackling private anti-competitive behaviour, advocacy is an important tool in addressing public restrictions to competition. Competition advocacy in this context refers to those activities conducted by the competition agency, that are related to the promotion of a competitive environment by means of non-enforcement mechanisms, mainly through its relationships with other governmental entities and by increasing public awareness in regard to the benefits of competition.  

At the Cofece workshop, I moderated a panel on “stakeholder engagement in the advocacy process,” featuring presentations by representatives of Cofece, the Japan Fair Trade Commission, and the Organization for Economic Cooperation and Development.  As I emphasized in my panel presentation:

Developing an appropriate competition advocacy strategy is key to successful interventions.  Public officials should be mindful of the relative importance of particular advocacy targets, as well as matter-specific political constraints and competing stakeholder interests.  In particular, a competition authority may greatly benefit by identifying and motivating stakeholders who are directly affected by the competitive restraints that are targeted by advocacy interventions.  The active support of such stakeholders may be key to the success of an advocacy initiative.  More generally, by reaching out to business and consumer stakeholders, a competition authority may build alliances that will strengthen its long-term ability to be effective in promoting a pro-competition agenda. 

The U.S. Federal Trade Commission (FTC) has developed a well-thought-out approach to building strong relationships with stakeholders.  The FTC holds well-publicized public workshops highlighting emerging policy issues, to which NGOs and civil society representatives with relevant expertise are invited.  Its personnel (and, in particular, its head) speak before a variety of audiences to inform them of what the FTC is doing and of the opportunities for advocacy filings.  It reaches out to civil society groups and the general public through the media, using the Internet and other channels of public information dissemination.  It is willing to hold informal, non-public meetings with NGOs and civil society representatives to hear their candid views and concerns off the record.  It carries out major studies (often following up on information gathered at workshops and from non-government sources) in addition to making advocacy filings.  Its advocacy staff interact closely with FTC enforcers and economists, both to obtain “leads” that may inform future advocacy projects and to suggest possible lines of substantive investigation based on the input received.  It communicates with other competition authorities on advocacy strategies.  Other competition authorities may wish to note the FTC’s approach in organizing their own advocacy programs.  

Competition authorities would also benefit from consulting the ICN Market Studies Good Practice Handbook, last released in updated form at the April 2016 ICN 15th Annual Conference.  Though presented in the context of market studies, the Handbook’s discussion of the role of stakeholders offers insights applicable to the competition advocacy process generally.  As the Handbook explains, stakeholders are any individuals, groups of individuals, or organizations that have an interest in a particular market or that can be affected by market conditions.  The Handbook describes the crucial inputs that stakeholders can provide a competition authority and how engaging with stakeholders can affect the authority’s reputation.  It emphasizes that a stakeholder engagement strategy can be used to determine whether particular stakeholders are likely to be influential, supportive, or unsupportive of a particular endeavor; to consider the input expected from the various stakeholders and plan for soliciting and using it; and to describe how and when the authority will seek to engage stakeholders.  The Handbook provides a long list of categories of stakeholders and suggests ways of reaching out to them, including public consultations, open seminars, workshops, and roundtables.  It then presents tactics for engaging with stakeholders, and closes by summarizing key good practices, including publicly soliciting broad voluntary stakeholder engagement, developing a stakeholder engagement strategy early in a particular process, and reviewing and updating that strategy as necessary throughout a particular competition authority undertaking.

In sum, properly conducted advocacy initiatives, along with investigations of hard-core cartels, are among the highest-valued uses of limited competition agency resources.  To the extent advocacy succeeds in unraveling government-imposed impediments to effective competition, it pays long-run dividends in terms of enhanced consumer welfare, greater economic efficiency, and more robust economic growth.  Let us hope that governments around the world (including, of course, the United States Government) keep this in mind in making resource commitments and setting priorities for their competition agencies.

Over the weekend, Senator Al Franken and FCC Commissioner Mignon Clyburn issued an impassioned statement calling for the FCC to thwart the use of mandatory arbitration clauses in ISPs’ consumer service agreements — starting with a ban on mandatory arbitration of privacy claims in the Chairman’s proposed privacy rules. Unfortunately, their call to arms rests upon a number of inaccurate or weak claims. Before the Commissioners vote on the proposed privacy rules later this week, they should carefully consider whether consumers would actually be served by such a ban.

FCC regulations can’t override congressional policy favoring arbitration

To begin with, it is firmly cemented in Supreme Court precedent that the Federal Arbitration Act (FAA) “establishes ‘a liberal federal policy favoring arbitration agreements.’” As the Court recently held:

[The FAA] reflects the overarching principle that arbitration is a matter of contract…. [C]ourts must “rigorously enforce” arbitration agreements according to their terms…. That holds true for claims that allege a violation of a federal statute, unless the FAA’s mandate has been “overridden by a contrary congressional command.”

For better or for worse, that’s where the law stands, and it is the exclusive province of Congress — not the FCC — to change it. Yet nothing in the Communications Act (to say nothing of the privacy provisions in Section 222 of the Act) constitutes a “contrary congressional command.”

And perhaps that’s for good reason. In enacting the statute, Congress didn’t demonstrate the same pervasive hostility toward companies and their relationships with consumers that has characterized the way this FCC has chosen to enforce the Act. As Commissioner O’Rielly noted in dissenting from the privacy NPRM:

I was also alarmed to see the Commission acting on issues that should be completely outside the scope of this proceeding and its jurisdiction. For example, the Commission seeks comment on prohibiting carriers from including mandatory arbitration clauses in contracts with their customers. Here again, the Commission assumes that consumers don’t understand the choices they are making and is willing to impose needless costs on companies by mandating how they do business.

If the FCC were to adopt a provision prohibiting arbitration clauses in its privacy rules, it would conflict with the FAA — and the FAA would win. Along the way, however, it would create a thorny uncertainty for both companies and consumers seeking to enforce their contracts.  

The evidence suggests that arbitration is pro-consumer

But the lack of legal authority isn’t the only problem with the effort to shoehorn an anti-arbitration bias into the Commission’s privacy rules: It’s also bad policy.

In its initial broadband privacy NPRM, the Commission said this about mandatory arbitration:

In the 2015 Open Internet Order, we agreed with the observation that “mandatory arbitration, in particular, may more frequently benefit the party with more resources and more understanding of the dispute procedure, and therefore should not be adopted.” We further discussed how arbitration can create an asymmetrical relationship between large corporations that are repeat players in the arbitration system and individual customers who have fewer resources and less experience. Just as customers should not be forced to agree to binding arbitration and surrender their right to their day in court in order to obtain broadband Internet access service, they should not have to do so in order to protect their private information conveyed through that service.

The Commission may have “agreed” with the cited observations about arbitration, but that doesn’t make those views accurate. As one legal scholar has noted, summarizing the empirical data on the effects of arbitration:

[M]ost of the methodologically sound empirical research does not validate the criticisms of arbitration. To give just one example, [employment] arbitration generally produces higher win rates and higher awards for employees than litigation.

* * *

In sum, by most measures — raw win rates, comparative win rates, some comparative recoveries and some comparative recoveries relative to amounts claimed — arbitration generally produces better results for claimants [than does litigation].

A comprehensive, empirical study by Northwestern Law’s Searle Center on AAA (American Arbitration Association) cases found much the same thing, noting in particular that

  • Consumer claimants in arbitration incur average arbitration fees of only about $100 to arbitrate small (under $10,000) claims, and $200 for larger claims (up to $75,000).
  • Consumer claimants also win attorneys’ fees in over 60% of the cases in which they seek them.
  • On average, consumer arbitrations are resolved in under 7 months.
  • Consumers win some relief in more than 50% of cases they arbitrate…
  • And they do almost exactly as well in cases brought against “repeat-player” businesses.

In short, it’s extremely difficult to sustain arguments suggesting that arbitration is tilted against consumers relative to litigation.

(Upper) class actions: Benefitting attorneys — and very few others

But it isn’t just any litigation that Clyburn and Franken seek to preserve; rather, they are focused on class actions:

If you believe that you’ve been wronged, you could take your service provider to court. But you’d have to find a lawyer willing to take on a multi-national telecom provider over a few hundred bucks. And even if you won the case, you’d likely pay more in legal fees than you’d recover in the verdict.

The only feasible way for you as a customer to hold that corporation accountable would be to band together with other customers who had been similarly wronged, building a case substantial enough to be worth the cost—and to dissuade that big corporation from continuing to rip its customers off.

While litigation of course plays an important role in redressing consumer wrongs, class actions frequently don’t confer on class members anything close to the imagined benefits that plaintiffs’ lawyers and their congressional enablers claim. According to a 2013 report on recent class actions by the law firm Mayer Brown LLP, for example:

  • “In [the] entire data set, not one of the class actions ended in a final judgment on the merits for the plaintiffs. And none of the class actions went to trial, either before a judge or a jury.” (Emphasis in original).
  • “The vast majority of cases produced no benefits to most members of the putative class.”
  • “For those cases that do settle, there is often little or no benefit for class members. What is more, few class members ever even see those paltry benefits — particularly in consumer class actions.”
  • “The bottom line: The hard evidence shows that class actions do not provide class members with anything close to the benefits claimed by their proponents, although they can (and do) enrich attorneys.”

Similarly, a CFPB study of consumer finance arbitration and litigation between 2008 and 2012 seems to indicate that the class action settlements and judgments it studied resulted in anemic relief to class members, at best. The CFPB tries to disguise the results with large, aggregated, and heavily caveated numbers that seem impressive (while never once indicating what the average payout per person was). But in the only hard numbers it provides (concerning four classes that ended up settling in 2013), promised relief amounted to under $23 each (comprising both cash and in-kind payment) if every class member claimed against the award. Back-of-the-envelope calculations based on the rest of the data in the report suggest that result was typical.
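The arithmetic behind such a back-of-the-envelope estimate is straightforward: divide total promised relief by the number of eligible class members. The figures below are purely hypothetical, chosen only to illustrate how a headline settlement shrinks to a modest per-member payout; they are not the CFPB’s actual numbers.

```python
def per_member_relief(total_cash: float, total_in_kind: float, class_size: int) -> float:
    """Average relief per class member if every member claimed against the award.

    All inputs are hypothetical illustrative values, not CFPB data.
    """
    return (total_cash + total_in_kind) / class_size

# E.g., a $10M settlement ($8M cash + $2M in-kind) spread over 450,000 members:
print(round(per_member_relief(8_000_000, 2_000_000, 450_000), 2))  # ~22.22
```

Because actual claim rates in consumer class actions are typically far below 100%, the realized average payout per class member can be even lower than this ceiling suggests.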

Furthermore, the average time to settlement of the cases the CFPB looked at was almost 2 years. And somewhere between 24% and 37% involved a non-class settlement — meaning class members received absolutely nothing at all because the named plaintiff personally took a settlement.

By contrast, according to the Searle Center study, the average award in the consumer-initiated arbitrations it studied (admittedly, involving cases with a broader range of claims) was almost $20,000, and the average time to resolution was less than 7 months.

To be sure, class action litigation has been an important part of our system of justice. But, as Arthur Miller — a legal pioneer who helped author the rules that make class actions viable — himself acknowledged, they are hardly a panacea:

I believe that in the 50 years we have had this rule, that there are certain class actions that never should have been brought, admitted; that we have burdened our judiciary, yes. But we’ve had a lot of good stuff done. We really have.

The good that has been done, according to Professor Miller, relates in large part to the civil rights violations of the 1950s and 1960s, which the class action rules were designed to mitigate:

Dozens and dozens and dozens of communities were desegregated because of the class action. You even see desegregation decisions in my old town of Boston where they desegregated the school system. That was because of a class action.

It’s hard to see how Franken and Clyburn’s concern for redress of “a mysterious 99-cent fee… appearing on your broadband bill” really comes anywhere close to the civil rights violations that spawned the class action rules. Particularly given the increasingly pervasive role of the FCC, FTC, and other consumer protection agencies in addressing and deterring consumer harms (to say nothing of arbitration itself), it is manifestly unclear why costly, protracted litigation that infrequently benefits anyone other than trial attorneys should be deemed so essential.

“Empowering the 21st century [trial attorney]”

Nevertheless, Commissioner Clyburn and Senator Franken echo the privacy NPRM’s faulty concerns about arbitration clauses that restrict consumers’ ability to litigate in court:

If you’re prohibited from using our legal system to get justice when you’re wronged, what’s to protect you from being wronged in the first place?

Well, what do they think the FCC is — chopped liver?

Hardly. In fact, it’s a little surprising to see Commissioner Clyburn (who sits on a Commission that proudly proclaims that “[p]rotecting consumers is part of [its] DNA”) and Senator Franken (among Congress’ most vocal proponents of the FCC’s claimed consumer protection mission) asserting that the only protection for consumers from ISPs’ supposed depredations is the cumbersome litigation process.

In fact, of course, the FCC has claimed for itself the mantle of consumer protector, aimed at “Empowering the 21st Century Consumer.” But nowhere does the agency identify “promoting and preserving the rights of consumers to litigate” among its tools of consumer empowerment (nor should it). There is more than a bit of irony in a federal regulator — a commissioner of an agency charged with making sure, among other things, that corporations comply with the law — claiming that, without class actions, consumers are powerless in the face of bad corporate conduct.

Moreover, even if it were true (it’s not) that arbitration clauses tend to restrict redress of consumer complaints, effective consumer protection would still not necessarily be furthered by banning such clauses in the Commission’s new privacy rules.

The FCC’s contemplated privacy regulations are poised to introduce a wholly new and untested regulatory regime with (at best) uncertain consequences for consumers. Given the risk of consumer harm resulting from the imposition of this new regime, as well as the corollary risk of its excessive enforcement by complainants seeking to test or push the boundaries of new rules, an agency truly concerned with consumer protection would tread carefully. Perhaps, if the rules were enacted without an arbitration ban, it would turn out that companies would mandate arbitration (though this result is by no means certain, of course). And perhaps arbitration and agency enforcement alone would turn out to be insufficient to effectively enforce the rules. But given the very real costs to consumers of excessive, frivolous or potentially abusive litigation, cabining the litigation risk somewhat — even if at first it meant the regime were tilted slightly too much against enforcement — would be the sensible, cautious and pro-consumer place to start.

____

Whether rooted in a desire to “protect” consumers or not, the FCC’s adoption of a rule prohibiting mandatory arbitration clauses to address privacy complaints in ISP consumer service agreements would impermissibly contravene the FAA. As the Court has made clear, such a provision would “‘stand[] as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress’ embodied in the Federal Arbitration Act.” And not only would such a rule tend to clog the courts in contravention of the FAA’s objectives, it would do so without apparent benefit to consumers. Even if such a rule wouldn’t effectively be invalidated by the FAA, the Commission should firmly reject it anyway: A rule that operates primarily to enrich class action attorneys at the expense of their clients has no place in an agency charged with protecting the public interest.

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to stanch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

On October 6, 2016, the U.S. Federal Trade Commission (FTC) issued Patent Assertion Entity Activity: An FTC Study (PAE Study), its much-anticipated report on patent assertion entity (PAE) activity.  The PAE Study defined PAEs as follows:

Patent assertion entities (PAEs) are businesses that acquire patents from third parties and seek to generate revenue by asserting them against alleged infringers.  PAEs monetize their patents primarily through licensing negotiations with alleged infringers, infringement litigation, or both. In other words, PAEs do not rely on producing, manufacturing, or selling goods.  When negotiating, a PAE’s objective is to enter into a royalty-bearing or lump-sum license.  When litigating, to generate any revenue, a PAE must either settle with the defendant or ultimately prevail in litigation and obtain relief from the court.

The FTC was mindful of the costs that its compulsory-process requests for information would impose on responding PAEs.  Accordingly, the FTC obtained information from only 22 PAEs: 18 of which it called “Litigation PAEs” (which “typically sued potential licensees and settled shortly afterward by entering into license agreements with defendants covering small portfolios,” usually yielding total royalties of under $300,000) and 4 of which it dubbed “Portfolio PAEs” (which typically negotiated multimillion-dollar licenses covering large portfolios of patents and raised their capital through institutional investors or manufacturing firms).

Furthermore, the FTC’s research was narrowly targeted, not broad-based.  The agency explained that “[o]f all the patents held by PAEs in the FTC’s study, 88% fell under the Computers & Communications or Other Electrical & Electronic technology categories, and more than 75% of the Study PAEs’ overall holdings were software-related patents.”  Consistent with the nature of this sample, the FTC concentrated primarily on a case study of PAE activity in the wireless chipset sector.  The case study revealed that PAEs were more likely to assert their patents through litigation than were wireless manufacturers, and that “30% of Portfolio PAE wireless patent licenses and nearly 90% of Litigation PAE wireless patent licenses resulted from litigation, while only 1% of Wireless Manufacturer wireless patent licenses resulted from litigation.”  But perhaps more striking than what the FTC found was what it did not uncover.  Due to data limitations, “[t]he FTC . . . [did not] attempt[] to determine if the royalties received by Study PAEs were higher or lower than those that the original assignees of the licensed patents could have earned.”  In addition, the case study did “not report how much revenue PAEs shared with others, including independent inventors, or the costs of assertion activity.”

Curiously, the PAE Study also leaped to certain conclusions regarding PAE settlements based on questionable assumptions and without considering legitimate potential incentives for such settlements.  Thus, for example, the FTC found it particularly significant that 77% of Litigation PAE settlements were for less than $300,000.  Why?  Because $300,000 was a “de facto benchmark” for nuisance litigation settlements, based merely on one American Intellectual Property Law Association study claiming that defending a non-practicing-entity patent lawsuit through the end of discovery costs between $300,000 and $2.5 million, depending on the amount in controversy.  In light of that single study, the FTC surmised “that discovery costs, and not the technological value of the patent, may set the benchmark for settlement value in Litigation PAE cases.”  Thus, according to the FTC, “the behavior of Litigation PAEs is consistent with nuisance litigation.”  As noted patent lawyer Gene Quinn has pointed out, however, the FTC ignored the eminently logical alternative possibility that many settlements for less than $300,000 simply represented reasonable valuations of the patent rights at issue.  Quinn pithily stated:

[T]he reality is the FTC doesn’t know enough about the industry to understand that $300,000 is an arbitrary line in the sand that holds no relevance in the real world. For the very same reason that they said the term “patent troll” is unhelpful (i.e., because it inappropriately discriminates against rights owners without understanding the business model and practices), so too is $300,000 equally unhelpful. Without any understanding or appreciation of the value of the core innovation subject to the license there is no way to know whether a license is being offered for nuisance value or whether it is being offered at full, fair and appropriate value to compensate the patent owner for the infringement they had to chase down in litigation.

I thought the FTC was charged with ensuring fair business practices? It seems what they are doing is radically discriminating against incremental innovations valued at less than $300,000 and actually encouraging patent owners to charge more for their licenses than they are worth so they don’t get labeled a nuisance. Talk about perverse incentives! The FTC should stick to areas where they have subject matter competence and leave these patent issues to the experts.     

In sum, the FTC found that in one particular specialized industry sector featuring a certain category of patents (software patents), PAEs tended to sue more often than manufacturers before agreeing to licensing terms – hardly a surprising finding or a sign of a problem.  (To the contrary, the existence of “substantial” PAE litigation that led to licenses might be a sign that PAEs were acting as efficient intermediaries representing the interests and effectively vindicating the rights of small patentees.)  The FTC was not, however, able to comment on the relative levels of royalties, the extent to which PAE revenues were distributed to inventors, or the costs of PAE litigation (as opposed to any other sort of litigation).  Additionally, the FTC made certain assumptions about certain PAE litigation settlements that ignored reasonable alternative explanations for the behavior that was observed.  Accordingly, a reasonable observer would conclude that the agency was (to say the least) in no position to make any sort of policy recommendations, given the absence of any hard evidence of PAE abuses or excessive waste from litigation.

Unfortunately, the reasonable observer would be mistaken.  The FTC recommended reforms to: (1) address discovery burden and “cost asymmetries” (the notion that PAEs are less subject to costly counterclaims because they are not producers) in PAE litigation; (2) provide the courts and defendants with more information about the plaintiffs that have filed infringement lawsuits; (3) streamline multiple cases brought against defendants on the same theories of infringement; and (4) provide sufficient notice of these infringement theories as courts continue to develop heightened pleading requirements for patent cases.

Without getting into the merits of these individual suggestions (and without in any way denigrating the hard work and dedication of the highly talented FTC staffers who drafted the PAE Study), it is sufficient to note that they bear no logical relationship to the factual findings of the report.  The recommendations, which closely echo certain elements of various “patent reform” legislative proposals that have been floated in recent years, could have been advanced before any data had been gathered – sparing the responding companies the cost of compliance.  In short, the recommendations are classic pre-baked “solutions” to problems that have long been hypothesized.  Advancing such recommendations based on discrete information regarding a small, skewed sample of PAEs – without obtaining crucial information on the direct costs and benefits of the PAE transactions being observed, or the incentive effects of PAE activity – is at odds with the FTC’s proud tradition of empirical research.  Unfortunately, Devin Hartline of the Antonin Scalia Law School proved prescient when commenting last April on the possible problems with the PAE Study, based on what was known about it prior to its release (and based on the preliminary thoughts of noted economists and law professors):

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general.  The study is simply not designed to do this.  It instead is a fact-finding mission, the results of which could guide future missions.  Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected.  And it’s crucial not to draw policy conclusions from it.  Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

To the extent patent reform is warranted, it should be considered carefully in a measured fashion, with full consideration given to the costs, benefits, and potential unintended consequences of suggested changes to the patent system and to litigation procedures.  As John Malcolm and I explained in a 2015 Heritage Foundation Legal Backgrounder which explored the relative merits of individual proposed reforms:

Before deciding to take action, Congress should weigh the particular merits of individual reform proposals carefully and meticulously, taking into account their possible harmful effects as well as their intended benefits. Precipitous, unreflective action on legislation is unwarranted, and caution should be the byword, especially since the effects of 2011 legislative changes and recent Supreme Court decisions have not yet been fully absorbed. Taking time is key to avoiding the serious and costly errors that too often are the fruit of omnibus legislative efforts.

Notably, this Legal Backgrounder also noted potential beneficial aspects of PAE activity that were not reflected in the PAE Study:

[E]ven entities whose business model relies on purchasing patents and licensing them or suing those who refuse to enter into licensing agreements and infringe those patents can serve a useful—even a vital—purpose. Some infringers may be large companies that infringe the patents of smaller companies or individual inventors, banking on the fact that such a small-time inventor will be less likely to file a lawsuit against a well-financed entity. Patent aggregators, often backed by well-heeled investors, help to level the playing field and can prevent such abuses.

More important, patent aggregators facilitate an efficient division of labor between inventors and those who wish to use those inventions for the betterment of their fellow man, allowing inventors to spend their time doing what they do best: inventing. Patent aggregators can expand access to patent pools that allow third parties to deal with one vendor instead of many, provide much-needed capital to inventors, and lead to a variety of licensing and sublicensing agreements that create and reflect a valuable and vibrant marketplace for patent holders and provide the kinds of incentives that spur innovation. They can also aggregate patents for litigation purposes, purchasing patents and licensing them in bundles.

This has at least two advantages: It can reduce the transaction costs for licensing multiple patents, and it can help to outsource and centralize patent litigation for multiple patent holders, thereby decreasing the costs associated with such litigation. In the copyright space, the American Society of Composers, Authors, and Publishers (ASCAP) plays a similar role.

All of this is to say that there can be good patent assertion entities that seek licensing agreements and file claims to enforce legitimate patents and bad patent assertion entities that purchase broad and vague patents and make absurd demands to extort license payments or settlements. The proper way to address patent trolls, therefore, is by using the same means and methods that would likely work against ambulance chasers or other bad actors who exist in other areas of the law, such as medical malpractice, securities fraud, and product liability—individuals who gin up or grossly exaggerate alleged injuries and then make unreasonable demands to extort settlements up to and including filing frivolous lawsuits.

In conclusion, the FTC would be well advised to avoid putting forth patent reform recommendations based on the findings of the PAE Study.  At the very least, it should explicitly weigh the implications of other research, which explores PAE-related efficiencies and considers all the ramifications of procedural and patent law changes, before seeking to advance any “PAE reform” recommendations.

On October 6, the Heritage Foundation released a legal memorandum (authored by me) that recounts the Federal Communications Commission’s (FCC) recent sad history of ignoring the rule of law in its enforcement and regulatory actions.  The memorandum calls for a legislative reform agenda to rectify this problem by reining in the agency.  Key points culled from the memorandum are highlighted below (footnotes omitted).

1.  Background: The Rule of Law

The American concept of the rule of law is embodied in the Due Process Clause of the Fifth Amendment to the U.S. Constitution and in the constitutional principles of separation of powers, an independent judiciary, a government under law, and equality of all before the law.  As the late Friedrich Hayek explained:

[The rule of law] means the government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to see with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.

In other words, the rule of law involves a system of binding rules that have been adopted and applied by a valid government authority and that embody clarity, predictability, and equal applicability.   Practices employed by government agencies that undermine the rule of law ignore a fundamental duty that the government owes its citizens and thereby weaken America’s constitutional system.  It follows, therefore, that close scrutiny of federal administrative agencies’ activities is particularly important in helping to achieve public accountability for an agency’s failure to honor the rule of law standard.

2.  How the FCC Flouts the Rule of Law

Applying such scrutiny to the FCC reveals that it does a poor job in adhering to rule of law principles, both in its procedural practices and in various substantive actions that it has taken.

Opaque procedures that generate uncertainties regarding agency plans undermine the clarity and predictability of agency actions and thereby undermine the effectiveness of rule of law safeguards.  Process-based reforms designed to deal with these problems, to the extent that they succeed, strengthen the rule of law.  Procedural inadequacies at the FCC include inordinate delays and a lack of transparency, including the failure to promptly release the text of proposed and final rules.  The FCC itself has admitted that procedural improvements are needed, and legislative proposals have been advanced to make the Commission more transparent, efficient, and accountable.

Nevertheless, mere procedural reforms would not address the far more serious problem of FCC substantive actions that flout the rule of law.  Examples abound:

  • The FCC imposes a variety of “public interest” conditions on proposed mergers subject to its jurisdiction. Those conditions often are announced after inordinate delays, and typically have no bearing on the mergers’ actual effects.  The unpredictable nature and timing of such impositions generate a lack of certainty for businesses and thereby undermine the rule of law.
  • The FCC’s 2015 Municipal Broadband Order preempted state laws in Tennessee and North Carolina that prevented municipally owned broadband providers from providing broadband service beyond their geographic boundaries. Apart from its substantive inadequacies, this Order went beyond the FCC’s statutory authority and raised grave federalism problems (by interfering with a state’s sovereign right to oversee its municipalities), thereby ignoring the constitutional limitations placed on the exercise of governmental powers that lie at the heart of the rule of law.  The Order was struck down by the U.S. Court of Appeals for the Sixth Circuit in August 2016.
  • The FCC’s 2015 “net neutrality” rule (the Open Internet Order) subjects internet service providers (ISPs) to sweeping “reasonableness-based” FCC regulatory oversight. This “reasonableness” standard gives the FCC virtually unbounded discretion to impose sanctions on ISPs.  It does not provide, in advance, a knowable, predictable rule consistent with due process and rule of law norms.  In the dynamic and fast-changing “Internet ecosystem,” this lack of predictable guidance is a major drag on innovation.  Regrettably, in June 2016, a panel of the U.S. Court of Appeals for the District of Columbia Circuit, by a two-to-one vote, rejected a challenge to the order brought by ISPs and their trade association.
  • The FCC’s abrupt 2014 extension of its long-standing rules restricting common ownership of local television broadcast stations to encompass Joint Sales Agreements (JSAs) likewise undermined the rule of law. JSAs, which allow one television station to sell advertising (but not programming) on another station, have long been used by stations that had no reason to believe that their actions in any way constituted illegal “ownership interests,” especially since many of them were originally approved by the FCC.  The U.S. Court of Appeals for the Third Circuit wisely vacated the television JSA rule in May 2016, stressing that the FCC had violated a statutory command by failing to carry out in a timely fashion the quadrennial review of the television ownership rules on which the JSA rule was based.
  • The FCC’s February 2016 proposed rules, which are designed to “open” the market for video set-top boxes, appear to fly in the face of federal laws and treaty language protecting intellectual property rights by arbitrarily denying protection to intellectual property based solely on a particular mode of information transmission. Such a denial is repugnant to rule of law principles.
  • FCC enforcement practices also show a lack of respect for rule of law principles, by seeking to obtain sanctions against behavior that has never been deemed contrary to law or regulatory edicts. Two examples illustrate this point.
    • In 2014, the FCC’s Enforcement Bureau proposed imposing a $10 million fine on TerraCom, Inc., and YourTelAmerica, Inc., two small telephone companies, for a data breach that exposed certain personally identifiable information to unauthorized access. In so doing, the FCC cited provisions of the Telecommunications Act of 1996 and accompanying regulations that had never been construed to authorize sanctions for failure to adopt “reasonable data security practices” to protect sensitive consumer information.
    • In November 2015, the FCC similarly imposed a $595,000 fine on Cox Communications for failure to prevent a data breach committed by a third-party hacker, although no statutory or regulatory language supported imposing any penalty on a firm that was itself victimized by a hack attack.

3.  Legislative Reforms to Rein in the FCC

What is to be done?  One sure way to limit an agency’s ability to flout the rule of law is to restrict the scope of its legal authority.  As a matter of first principles, Congress should therefore examine the FCC’s activities with an eye to eliminating its jurisdiction over areas in which regulation is no longer needed.  For example, residual price regulation may be unnecessary in all markets where competition is effective.  Regulation is called for only in the presence of serious market failure, coupled with strong evidence that government intervention will yield a better economic outcome than will a decision not to regulate.

Congress should craft legislation designed to sharply restrict the FCC’s ability to flout the rule of law.  At a minimum, no matter how it decides to pursue broad FCC reform, the following five proposals merit special congressional attention as a means of advancing rule of law principles:

  • Eliminate the FCC’s jurisdiction over all mergers. The federal antitrust agencies are best equipped to handle merger analysis, and this source of costly delay and uncertainty regarding ad hoc restrictive conditions should be eliminated.
  • Eliminate the FCC’s jurisdiction over broadband Internet service. Given the benefits associated with an open and unregulated Internet, Congress should provide clearly and unequivocally that the FCC has no jurisdiction, direct or indirect, in this area.
  • Shift FCC regulatory authority over broadband-related consumer protection (including, for example, deceptive advertising, privacy, and data protection) and competition to the Federal Trade Commission, which has longstanding experience and expertise in the area. This jurisdictional transfer would promote clarity and reduce uncertainty, thereby strengthening the rule of law.
  • Require that before taking regulatory action, the FCC carefully scrutinize regulatory language to seek to avoid the sorts of rule of law problems that have plagued prior commission rulemakings.
  • Require that the FCC not seek fines in an enforcement action unless the alleged infraction involves a violation of the precise language of a regulation or statutory provision.

4.  Conclusion

In recent years, the FCC too often has acted in a manner that undermines the rule of law. Internal agency reforms might be somewhat helpful in rectifying this situation, but they inevitably would be limited in scope and inherently malleable as FCC personnel changes. Accordingly, Congress should weigh major statutory reforms to rein in the FCC—reforms that will advance the rule of law and promote American economic well-being.

On September 28, the American Antitrust Institute released a report (“AAI Report”) on the state of U.S. antitrust policy, provocatively entitled “A National Competition Policy:  Unpacking the Problem of Declining Competition and Setting Priorities for Moving Forward.”  Although the AAI Report contains some valuable suggestions, in important ways it reminds one of the drunkard who seeks his (or her) lost key under the nearest lamppost.  What is needed instead is greater sobriety and a broader vision of the problems that beset the American economy.

The AAI Report begins by asserting that “[n]ot since the first federal antitrust law was enacted over 120 years ago has there been the level of public concern over the concentration of economic and political power that we see today.”  Well, maybe, although I for one am not convinced.  The paper then states that “competition is now on the front pages, as concerns over rising concentration, extraordinary profits accruing to the top slice of corporations, slowing innovation, and widening income and wealth inequality have galvanized attention.”  It then goes on to call for a more aggressive federal antitrust enforcement policy, with particular attention paid to concentrated markets.  The implicit message is that dedicated antitrust enforcers during the Obama Administration, led by Federal Trade Commission Chairs Jonathan Leibowitz and Edith Ramirez, and Antitrust Division chiefs Christine Varney, Bill Baer, and Renata Hesse (Acting), have been laggard or asleep at the switch.  But where is the evidence for this?  I am unaware of any, and the AAI Report doesn’t say.  Indeed, federal antitrust officials in the Obama Administration consistently have called for tough enforcement, and they have actively pursued vertical as well as horizontal conduct cases and novel theories of IP-antitrust liability.  Thus, the AAI Report’s contention that antitrust needs to be “reinvigorated” is unconvincing.

The AAI Report highlights three “symptoms” of declining competition:  (1) rising concentration, (2) higher profits to the few and slowing rates of start-up activity, and (3) widening income and wealth inequality.  But these concerns are not something that antitrust policy is designed to address.  Mergers that threaten to harm competition are within the purview of antitrust, but modern antitrust rightly focuses on the likely effects of such mergers, not on the mere fact that they may increase concentration.  Furthermore, antitrust assesses the effects of business agreements on the competitive process.  Antitrust does not ask whether business arrangements yield “unacceptably” high profits, or “overly low” rates of business formation, or “unacceptable” wealth and income inequality.  Indeed, antitrust is not well equipped to address such questions, nor does it possess the tools to “solve” them (even assuming they need to be solved).

In short, if American competition is indeed declining based on the symptoms flagged by the AAI Report, the key to the solution will not be found by searching under the antitrust policy lamppost for illumination.  Rather, a more thorough search, with the help of “common sense” flashlights, is warranted.

The search outside the antitrust spotlight is not, however, a difficult one.  Finding the explanation for lagging competitive conditions in the United States requires no great policy legerdemain, because sound published research already provides the answer.  And that answer centers on government failures, not private sector abuses.

Consider overregulation.  In its annual Red Tape Rising reports (see here for the latest one), the Heritage Foundation has documented the growing burden of federal regulation on the American economy.  Overregulation acts like an implicit tax on businesses and disincentivizes business start-ups.  Moreover, as regulatory requirements grow in complexity and burdensomeness, they increasingly place a premium on large size – relatively larger businesses can more easily absorb the fixed costs of establishing regulatory compliance departments than can their smaller rivals.  Heritage Foundation scholar Norbert Michel summarizes this phenomenon in his article Dodd-Frank and Glass-Steagall – ‘Consumer Protection for Billionaires’:

Even when it’s not by nefarious design, we end up with rules that favor the largest/best-funded firms over their smaller/less-well-funded competitors. Put differently, our massive regulatory state ends up keeping large firms’ competitors at bay.  The more detailed regulators try to be, the more complex the rules become. And the more complex the rules become, the smaller the number of people who really care. Hence, more complicated rules and regulations serve to protect existing firms from competition more than simple ones. All of this means consumers lose. They pay higher prices, they have fewer choices of financial products and services, and they pretty much end up with the same level of protection they’d have with a smaller regulatory state.

What’s worse, some of the most onerous regulatory schemes are explicitly designed to favor large competitors over small ones.  A prime example is financial services regulation, and, in particular, the rules adopted pursuant to the 2010 Dodd-Frank Act (other examples could readily be provided).  As a Heritage Foundation report explains (footnote citations omitted):

The [Dodd-Frank] act was largely intended to reduce the risk of a major bank failure, but the regulatory burden is crippling community banks (which played little role in the financial crisis). According to Harvard University researchers Marshall Lux and Robert Greene, small banks’ share of U.S. commercial banking assets declined nearly twice as much since the second quarter of 2010—around the time of Dodd–Frank’s passage—as occurred between 2006 and 2010. Their share currently stands at just 22 percent, down from 41 percent in 1994.

The increased consolidation rate is driven by regulatory economies of scale—larger banks are better suited to handle increased regulatory burdens than are smaller banks, causing the average costs of community banks to rise. The decline in small bank assets spells trouble for their primary customer base—small business loans and those seeking residential mortgages.

Ironically, Dodd–Frank proponents pushed for the law as necessary to rein in the big banks and Wall Street. In fact, the regulations are giving the largest companies a competitive advantage over smaller enterprises—the opposite outcome sought by Senator Christopher Dodd (D–CT), Representative Barney Frank (D–MA), and their allies. As Goldman Sachs CEO Lloyd Blankfein recently explained: “More intense regulatory and technology requirements have raised the barriers to entry higher than at any other time in modern history. This is an expensive business to be in, if you don’t have the market share in scale.”

In sum, as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large, wealthy, and well-connected rent-seekers at the expense of smaller and more dynamic competitors.

More generally, as Heritage Foundation President Jim DeMint and Heritage Action for America CEO Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts, and complex regulations, including special tax preferences. Those special preferences undermine competition on the merits by firms that lack insider status, to the public detriment.  Relatedly, the hideously complex system of American business taxation, which features the highest corporate tax rates in the developed world and is more readily manipulated by very large corporate players, depresses wages and is a serious drag on the American economy, as shown by Heritage Foundation scholars Curtis Dubay and David Burton.  In a similar vein, David Burton testified before Congress in 2015 on how the various excesses of the American regulatory state (including bad tax, health care, immigration, and other regulatory policies, combined with an overly costly legal system) undermine U.S. entrepreneurship (see here).

In other words, special subsidies, regulations, and tax and regulatory programs for the well-connected are part and parcel of crony capitalism, which (1) favors large businesses, tending to raise concentration; (2) confers higher profits on the well-connected while discouraging small business entrepreneurship; and (3) promotes income and wealth inequality, with the greatest returns going to the wealthiest government cronies who know best how to play the Washington “rent seeking game.”  Unfortunately, crony capitalism has grown like topsy during the Obama Administration.

Accordingly, I would counsel AAI to turn its scholarly gaze away from antitrust and toward the true source of the American competitive ailments it spotlights:  crony capitalism enabled by the growth of big government special interest programs and increasingly costly regulatory schemes.  Let’s see if AAI takes my advice.

There must have been a great gnashing of teeth in Chairman Wheeler’s office this morning as the FCC announced that it was pulling the Chairman’s latest modifications to the set-top box proposal from its voting agenda. This is surely but a bump in the road for the Chairman; he will undoubtedly press ever onward in his quest to “fix” a market that is flooded with competition and consumer choice. But, as we stop to take a breath for a moment while this latest FCC adventure is temporarily paused, there is a larger issue worth considering: the lack of transparency at the FCC.

Although the Commission has an unfortunate tradition of non-disclosure surrounding many of its regulatory proposals, the problem has seemingly been exacerbated by Chairman Wheeler’s aggressive agenda and his intransigence in the face of overwhelming and rigorous criticism.

Perhaps nowhere was this attitude more apparent than with his handling of the Open Internet Order, which was plagued with enough process problems to elicit a call for a delay of the Commission’s vote on the initial rules from Democratic Commissioner Rosenworcel, and a strong rebuke from the Chairman of the House Oversight Committee prior to the Commission’s vote on the final rules (which were not disclosed to the public until after the vote).

But the same cavalier dismissal of public and stakeholder input has plagued the Chairman’s beleaguered set-top box proposal, as well.

As Commissioner Pai noted before Congress in March:

The FCC continues to choose opacity over transparency. The decisions we make impact hundreds of millions of Americans and thousands of small businesses. And yet to the public, to Congress, and even to the Commissioners at the FCC, the agency’s work remains a black box.

Take this simple proposition: The public should be able to see what we’re voting on before we vote on it. That’s how Congress works, as you know. Anyone can look up any pending bill right now by going to congress.gov. And that’s how many state commissions work too. But not the FCC.

Exhibit A in Commissioner Pai’s lament was the set-top box proceeding:

Instead, the public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Now, although the Chairman’s initial proposal was eventually released, we have only a fact sheet and an op-ed by Chairman Wheeler on which to judge the purportedly substantial changes embodied in his latest version.

Even Democrats in Congress have recognized the process problems that have plagued this proceeding. As Senator Feinstein (D-CA) urged in a recent letter to Chairman Wheeler:

Given the significance of this proceeding, I ask that you make public the new proposal under consideration by the Commission, so that all interested stakeholders, members of Congress, copyright experts, and others can comment on the potential copyright implications of the new proposal before the Commission votes on it.

And as Senator Heller (R-NV) wrote in a letter to Chairman Wheeler this week:

I believe it is unacceptable that the FCC has not released the text of this proposal before Thursday’s vote. A three-page fact sheet does not provide enough details for Congress to conduct proper oversight of this rulemaking that will significantly impact both consumers and industry…. I encourage you to release the text immediately so that the American public has a full understanding of what is being considered by the Commission….

Of course, this isn’t a new problem at the FCC. In fact, before he supported Chairman Wheeler’s efforts to impose Open Internet rules without sufficient public disclosure, then-Senator Obama decried then-Chairman Martin’s efforts to enact new media ownership rules with insufficient process in 2007:

Repealing the cross ownership rules and retaining the rest of our existing regulations is not a proposal that has been put out for public comment; the proper process for vetting it is not in closed door meetings with lobbyists or in selective leaks to the New York Times.

Although such a proposal may pass the muster of a federal court, Congress and the public have the right to review any specific proposal and decide whether or not it constitutes sound policy. And the Commission has the responsibility to defend any new proposal in public discourse and debate.

And although you won’t find them complaining this time (because this time they want the excessive intervention that the NPRM seems to contemplate), regulatory advocates lamented exactly this sort of secrecy at the Commission when Chairman Genachowski proposed his media ownership rules in 2012. At that time Free Press angrily wrote:

[T]he Commission still has not made public its actual media ownership order…. Furthermore, it’s disingenuous for the FCC to suggest that its process now is more transparent than the one former Chairman Martin used to adopt similar rules. Genachowski’s FCC has yet to publish any details of its final proposal, offering only vague snippets in press releases… despite the president’s instruction to rulemaking agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.

As Free Press noted, President Obama did indeed instruct “agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.” In his Memorandum on Transparency and Open Government, his first executive action, the president urged that:

Public engagement enhances the Government’s effectiveness and improves the quality of its decisions. Knowledge is widely dispersed in society, and public officials benefit from having access to that dispersed knowledge. Executive departments and agencies should offer Americans increased opportunities to participate in policymaking and to provide their Government with the benefits of their collective expertise and information.

The resulting Open Government Directive calls on executive agencies to

take prompt steps to expand access to information by making it available online in open formats. With respect to information, the presumption shall be in favor of openness….

The FCC is not an “executive agency,” and so is not directly subject to the Directive. But the Chairman’s willingness to stray so far from basic principles of transparency is woefully inconsistent with good government and with the ideals of openness claimed by this administration.

This week, the International Center for Law & Economics filed comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-Antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive, we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license, we write:

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before observing that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

Public comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines have, not surprisingly, focused primarily on fine points of antitrust analysis carried out by those two federal agencies (see, for example, the thoughtful recommendations by the Global Antitrust Institute, here).  In a September 23 submission to the FTC and the DOJ, however, U.S. International Trade Commissioner F. Scott Kieff focused on a broader theme – that patent-antitrust assessments should keep in mind the indirect effects on commercialization that stem from IP (and, in particular, patents).  Kieff argues that antitrust enforcers have employed a public law “rules-based” approach that balances the “incentive to innovate” created when patents prevent copying against the goals of competition.  In contrast, Kieff characterizes the commercialization approach as rooted in the property rights nature of patents and the use of private contracting to bring together complementary assets and facilitate coordination.  As Kieff explains (in italics, footnote citations deleted):

A commercialization approach to IP views IP more in the tradition of private law, rather than public law. It does so by placing greater emphasis on viewing IP as property rights, which in turn is accomplished by greater reliance on interactions among private parties over or around those property rights, including via contracts. Centered on the relationships among private parties, this approach to IP emphasizes a different target and a different mechanism by which IP can operate. Rather than target particular individuals who are likely to respond to IP as incentives to create or invent in particular, this approach targets a broad, diverse set of market actors in general; and it does so indirectly. This broad set of indirectly targeted actors encompasses the creator or inventor of the underlying IP asset as well as all those complementary users of a creation or an invention who can help bring it to market, such as investors (including venture capitalists), entrepreneurs, managers, marketers, developers, laborers, and owners of other key assets, tangible and intangible, including other creations or inventions. Another key difference in this approach to IP lies in the mechanism by which these private actors interact over and around IP assets. This approach sees IP rights as tools for facilitating coordination among these diverse private actors, in furtherance of their own private interests in commercializing the creation or invention.

This commercialization approach sees property rights in IP serving a role akin to beacons in the dark, drawing to themselves all of those potential complementary users of the IP-protected-asset to interact with the IP owner and each other. This helps them each explore through the bargaining process the possibility of striking contracts with each other.

Several payoffs can flow from using this commercialization approach. Focusing on such a beacon-and-bargain effect can relieve the governmental side of the IP system of the need to amass the detailed information required to reasonably tailor a direct targeted incentive, such as each actor’s relative interests and contributions, needs, skills, or the like. Not only is amassing all of that information hard for the government to do, but large, established market actors may be better able than smaller market entrants to wield the political influence needed to get the government to act, increasing risk of concerns about political economy, public choice, and fairness. Instead, when governmental bodies closely adhere to a commercialization approach, each private party can bring its own expertise and other assets to the negotiating table while knowing—without necessarily having to reveal to other parties or the government—enough about its own level of interest and capability when it decides whether to strike a deal or not.            

Such successful coordination may help bring new business models, products, and services to market, thereby decreasing anticompetitive concentration of market power. It also can allow IP owners and their contracting parties to appropriate the returns to any of the rival inputs they invested towards developing and commercializing creations or inventions—labor, lab space, capital, and the like. At the same time, the government can avoid having to then go back to evaluate and trace the actual relative contributions that each participant brought to a creation’s or an invention’s successful commercialization—including, again, the cost of obtaining and using that information and the associated risks of political influence—by enforcing the terms of the contracts these parties strike with each other to allocate any value resulting from the creation’s or invention’s commercialization. In addition, significant economic theory and empirical evidence suggests this can all happen while the quality-adjusted prices paid by many end users actually decline and public access is high. In keeping with this commercialization approach, patents can be important antimonopoly devices, helping a smaller “David” come to market and compete against a larger “Goliath.”

A commercialization approach thereby mitigates many of the challenges raised by the tension that is a focus of the other intellectual approaches to IP, as well as by the responses these other approaches have offered to that tension, including some – but not all – types of AT regulation and enforcement. Many of the alternatives to IP that are often suggested by other approaches to IP, such as rewards, tax credits, or detailed rate regulation of royalties by AT enforcers can face significant challenges in facilitating the private sector coordination benefits envisioned by the commercialization approach to IP. While such approaches often are motivated by concerns about rising prices paid by consumers and direct benefits paid to creators and inventors, they may not account for the important cases in which IP rights are associated with declines in quality-adjusted prices paid by consumers and other forms of commercial benefits accrued to the entire IP production team as well as to consumers and third parties, which are emphasized in a commercialization approach. In addition, a commercialization approach can embrace many of the practical checks on the market power of an IP right that are often suggested by other approaches to IP, such as AT review, government takings, and compulsory licensing. At the same time this approach can show the importance of maintaining self-limiting principles within each such check to maintain commercialization benefits and mitigate concerns about dynamic efficiency, public choice, fairness, and the like.

To be sure, a focus on commercialization does not ignore creators or inventors or creations or inventions themselves. For example, a system successful in commercializing inventions can have the collateral benefit of providing positive incentives to those who do invent through the possibility of sharing in the many rewards associated with successful commercialization. Nor does a focus on commercialization guarantee that IP rights cause more help than harm. Significant theoretical and empirical questions remain open about benefits and costs of each approach to IP. And significant room to operate can remain for AT enforcers pursuing their important public mission, including at the IP-AT interface.

Commissioner Kieff’s evaluation is in harmony with other recent scholarly work, including Professor Dan Spulber’s explanation that the actual nature of long-term private contracting arrangements among patent licensors and licensees avoids alleged competitive “imperfections,” such as harmful “patent hold-ups,” “patent thickets,” and “royalty stacking” (see my discussion here).  More generally, Commissioner Kieff’s latest pronouncement is part of a broader and growing theoretical and empirical literature that demonstrates close associations between strong patent systems and economic growth and innovation (see, for example, here).

There is a major lesson here for U.S. (and foreign) antitrust enforcement agencies.  As I have previously pointed out (see, for example, here), in recent years, antitrust enforcers here and abroad have taken positions that tend to weaken patent rights.  Those positions typically are justified by the existence of “patent policy deficiencies” such as those that Professor Spulber’s paper debunks, as well as an alleged epidemic of low quality “probabilistic patents” (see, for example, here) – justifications that ignore the substantial economic benefits patents confer on society through contracting and commercialization.  It is high time for antitrust to accommodate the insights drawn from this new learning.  Specifically, government enforcers should change their approach and begin incorporating private law/contracting/commercialization considerations into patent-antitrust analysis, in order to advance the core goals of antitrust – the promotion of consumer welfare and efficiency.  Better yet, if the FTC and DOJ truly want to maximize the net welfare benefits of antitrust, they should undertake a more general “policy reboot” and adopt a “decision-theoretic” error cost approach to enforcement policy, rooted in cost-benefit analysis (see here) and consistent with the general thrust of Roberts Court antitrust jurisprudence (see here).

In a September 20 speech at the high profile Georgetown Global Antitrust Enforcement Symposium, Acting Assistant Attorney General Renata Hesse sent the wrong signals to the business community and to foreign enforcers (see here) regarding U.S. antitrust policy.  Admittedly, a substantial part of her speech was a summary of existing U.S. antitrust doctrine.  In certain other key respects, however, Ms. Hesse’s remarks could be read as a rejection of the mainstream American understanding (and the accepted approach endorsed by the International Competition Network) that promoting economic efficiency and consumer welfare is the antitrust lodestar, and that non-economic considerations should not be part of antitrust analysis.  Because foreign lawyers, practitioners, and enforcement officials were present, Ms. Hesse’s statement could not only be cited against U.S. interests in foreign venues, but could also undermine longstanding efforts to advance international convergence toward economically sound antitrust rules.

Let’s examine some specifics.

Ms. Hesse’s speech begins with a paean to “economic fairness” – a theme that runs counter to the principle that leading federal antitrust enforcers have consistently stressed for decades, namely, that antitrust seeks to advance the economic goal of consumer welfare (and efficiency).  Consider this passage (emphasis added):

[E]nforcers [should be] focused on the ultimate goal of antitrust, economic fairness. . . .    The conservative leaning “Chicago School” made economic efficiency synonymous with the goals of antitrust in the 1970s, which incorporated theoretical economics into mainstream antitrust scholarship and practice.  Later, more centrist or left-leaning post-Chicago and Harvard School scholars showed that sophisticated empirical and theoretical economics tools can be used to support more aggressive enforcement agendas.  Together, these developments resulted in many technical discussions about what impact a business practice will have on consumer welfare mathematically measured – involving supply and demand curves, triangles representing “dead weight loss,” and so on.   But that sort of conversation is one that resonates very little – if at all – with those engaged in the straightforward, popular dialogue about the dangers of increasing corporate concentration.  The language of economic theory does not sound like the language of economic fairness that is the raw material for most popular discussions about competition and antitrust.      

Unfortunately, Ms. Hesse’s references to the importance of “fairness” recur throughout her remarks, driving home again and again that fairness is a principle that should play a key role in antitrust enforcement.  Yet fairness is an inherently subjective concept (fairness for whom, and measured by what standard?) that was often invoked in notorious and illogical U.S. Supreme Court decisions of days of yore – decisions that were rightly critiqued by leading scholars and largely confined to the dustbin of bad precedents, starting in the mid-1970s.

Equally bad are the speech’s multiple references to “high concentration” and “bigness,” unfortunate terms that also cropped up in economically irrational pre-1970s Supreme Court antitrust opinions.  Scholarship demonstrating that neither high market concentration nor large corporate size is necessarily associated with poor economic performance is generally accepted, and the core teaching that “bigness” is not “badness” is a staple of undergraduate industrial organization classes and introductory antitrust law courses in the United States.  Admittedly, the speech also recognizes that bigness and high concentration are not necessarily harmful, but merely by invoking these concepts it encourages interventionists and foreign enforcers who are seeking additional justifications for antitrust crusades against “big” and “powerful” companies (more on this point later).

Perhaps the most unfortunate passage in the speech is Ms. Hesse’s defense of the Supreme Court’s “Philadelphia National Bank” (1963) (“PNB”) presumption that “a merger which produces a firm controlling an undue percentage share of the relevant market, and results in a significant increase in the concentration of firms in that market is so inherently likely to lessen competition substantially” that the law will presume it unlawful.  The PNB presumption is a discredited historical relic, an antitrust “oldie but baddy” that sound scholarship has shown should be relegated to the antitrust scrap heap.  Professor Joshua Wright and Judge Douglas Ginsburg explained why the presumption should be scrapped in a 2015 Antitrust Law Journal article:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. The problem for today’s courts in applying this semicentenary standard is that the field of industrial organization economics has long since moved beyond the structural presumption upon which the standard is based. That presumption is almost the last vestige of pre-modern economics still embedded in the antitrust law of the United States. Even the 2010 Horizontal Merger Guidelines issued jointly by the Federal Trade Commission and the Antitrust Division of the Department of Justice have abandoned the . . . presumption, though the agencies certainly do not resist the temptation to rely upon the presumption when litigating a case. There is no doubt the . . . presumption of PNB is a convenient litigation tool for the enforcement agencies, but the mission of the enforcement agencies is consumer welfare, not cheap victories in litigation. The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.  

Ms. Hesse ignored this reasoned analysis in commenting on the PNB presumption:

[I]n the wake of the Chicago School’s influence, antitrust commentators started to call into question the validity of this common-sense presumption, believing that economic theory showed that mergers tended to be beneficial or, if they resulted in harm, that harm was fleeting.  Those skeptics demanded more detailed proof of consumer harm in place of the presumption.  More recent economics studies, however, have given new life to the old presumption—in several ways.  First, we are learning more and more that mergers among substantial competitors tend to lead to higher prices. [citation omitted]  Second, economists have been finding that mergers often fail to deliver on the gains their proponents sought to achieve. [citation omitted] Taking these insights together, we should be skeptical of the claim that mergers among substantial competitors are beneficial.  The law – which builds this skepticism into it – provides an excellent tool for protecting competition from large, horizontal mergers.

Ms. Hesse’s discussion of the PNB presumption is problematic on several counts.  First, it cites one 2014 study that purports to find price increases following certain mergers in some oligopolistic industries as supporting the presumption, without acknowledging a key critique of that study – that it ignores efficiencies and potential gains in producer welfare (see here).  Second, it cites one 2001 study suggesting that financial performance may not be enhanced by some mergers while ignoring other studies to the contrary (see, for example, here and here).  Third, and most fundamentally, Ms. Hesse’s statement that “we should be skeptical of the claim that mergers among substantial competitors are beneficial” misses the point of antitrust enforcement entirely, and, in so doing, could be read as discouraging efficiency-seeking acquisitions.  It is not the role of antitrust enforcement to make merging parties prove that their proposed transaction will be beneficial – rather, enforcers must prove that a proposed transaction’s effect “may be substantially to lessen competition,” as stated in Section 7 of the Clayton Act.  Requiring “proof” that a merger between competitors “will be beneficial” after the fact, in response to a negative presumption, strongly discourages potential efficiency-seeking consolidations, to the detriment of economic growth and welfare.  That was the case in the 1960s, and it could become so again today, if U.S. antitrust enforcers embark on a concerted campaign of touting the PNB presumption.  Relatedly, an efficient market for corporate control (involving the strong potential of acquisitions to achieve synergies or to correct management problems in badly-run targets) is chilled when a presumption blocks acquisitions absent a “proof” of future benefit, to the detriment of the economy.

Apart from these technical points, the PNB presumption in effect grants a government bureaucracy (exercising “the pretense of knowledge”) the right to condemn voluntary commercial transactions of a particular sort (horizontal mergers) that have not been shown to be harmful.  Such a grant of authority ignores the superior ability of information-seeking market participants to uncover and apply knowledge (as the late Friedrich Hayek might have pointed out) and is fundamentally at odds with the system of voluntary exchange that lies at the heart of a successful market economy.

Another highly problematic statement is Ms. Hesse’s discussion of the Federal Trade Commission’s (FTC) final 2010 Intel settlement:

The Federal Trade Commission’s case against Intel a decade later . . . shows how dominant firms can cut off the normal mechanisms of competition to maintain dominance.  In that case, the FTC alleged that Intel violated Section 5 of the FTC Act by maintaining its monopoly in central processing units (or CPUs) through a variety of payments and penalties (including loyalty or market-share discounts) to computer manufacturers to induce them not to purchase products from Intel’s rivals such as AMD and Via Technologies. [citation omitted]  When a monopolist pays customers to disfavor its rivals and punishes those customers who nevertheless do business with a rival, that does not look like the monopolist is competing with its rivals on the merits of their products.  Because these actions served only to foreclose competition from rival producers of CPUs, these actions distorted the competitive process.

Ms. Hesse ignores the fact that Intel involved a settlement, not a final litigated decision, and thus is lacking in precedential weight.  Firms that believe their conduct was perfectly legal may nevertheless settle an FTC investigation if they deem the costs (including harm to reputation) of continuing to litigate to outweigh the costs of the settlement’s terms.  Furthermore, various learned commentators (such as Professor and then-FTC Commissioner Joshua Wright, see here) have pointed out that Intel’s discounts had tangible procompetitive effects and that there was a lack of evidence that Intel’s conduct harmed consumers or competitors (indeed, AMD, Intel’s principal competitor, continued to thrive during the period of Intel’s alleged “bad” behavior).  In short, Ms. Hesse’s conclusion that Intel’s actions “served only to foreclose competition from rival producers of CPUs” lacks credibility.  Moreover, Ms. Hesse’s reference to illegal “monopoly maintenance,” a Sherman Antitrust Act monopolization term of art, fails to note that the FTC stressed that the Intel case was brought purely under Section 5 of the FTC Act, “which is broader than the antitrust laws.”

Finally, the speech’s concluding section ends on a discordant note.  In summing up what she deemed to be an appropriate, up-to-date approach to antitrust litigation, Ms. Hesse reemphasizes the “fairness” theme, making such statements as “ultimately the plaintiff’s story should highlight the moral underpinnings of the antitrust laws—fighting against the unfairness of concentrated economic power” and “attempts to obtain or keep economic power unfairly.”  While such statements might be rationalized as having been made in the context of promoting a “non-technical” appreciation for antitrust by the general public, the emphasis on fairness as a rhetorical device, in lieu of analysis grounded in palpable economic harm and consumer welfare, is quite troublesome.

On the domestic front, that emphasis may not have a direct impact on the exercise of prosecutorial discretion and on American judicial precedents in the short run (at least one hopes so).  In the longer run, however, it cuts against efforts to constrain populist impulses that would transform antitrust once again into an unguided missile aimed at the heart of the American market system.

On the international front, things are even worse.  A variety of major jurisdictions make explicit reference to “fairness” in their competition law statutes and decisions.  Foreign officials with a strongly interventionist bent might well cite Ms. Hesse’s speech in justifying expansive and economically untethered “fairness-based” competition law prosecutions.  Niceties as to whether their initiatives fall within the strict contours of Ms. Hesse’s analysis of the competitive process might readily be ignored, given the inherent elasticity (to say the least) of the “fairness” concept.  What’s more, Ms. Hesse’s remarks seriously undermine arguments advanced by the United States and leading commentators in multilateral fora (such as the ICN and the OECD) that competition law enforcement should focus solely on consumer welfare, with other policies handled under different statutory schemes.

In sum, Ms. Hesse’s speech summons up not the comforting ghost of Christmas past, but rather the malevolent goblin of antitrust past (whether she meant to do so or not).  Although her remarks concededly contain many well-reasoned and uncontroversial comments about antitrust analysis, her totally unnecessary application of a gaudy, un-economic populist gloss to the antitrust enterprise is what stares the reader in the face.  One can hope that, as an experienced and accomplished antitrust practitioner and public servant, Ms. Hesse will come to realize this and respond by unequivocally disavowing and stripping away the rhetorical gloss in a future major address.  Whether she chooses to do so or not, however, antitrust agency leadership in the next Administration should loudly and repeatedly make it clear that populist notions and “fairness” have no role in modern competition law analysis, whose lodestar should be consumer welfare and efficiency.