Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to stanch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

On October 6, 2016, the U.S. Federal Trade Commission (FTC) issued Patent Assertion Entity Activity: An FTC Study (PAE Study), its much-anticipated report on patent assertion entity (PAE) activity.  The PAE Study defined PAEs as follows:

Patent assertion entities (PAEs) are businesses that acquire patents from third parties and seek to generate revenue by asserting them against alleged infringers.  PAEs monetize their patents primarily through licensing negotiations with alleged infringers, infringement litigation, or both. In other words, PAEs do not rely on producing, manufacturing, or selling goods.  When negotiating, a PAE’s objective is to enter into a royalty-bearing or lump-sum license.  When litigating, to generate any revenue, a PAE must either settle with the defendant or ultimately prevail in litigation and obtain relief from the court.

The FTC was mindful of the costs that would be imposed on the PAEs required by compulsory process to respond to the agency’s requests for information.  Accordingly, the FTC obtained information from only 22 PAEs, 18 of which it called “Litigation PAEs” (which “typically sued potential licensees and settled shortly afterward by entering into license agreements with defendants covering small portfolios,” usually yielding total royalties of under $300,000) and 4 of which it dubbed “Portfolio PAEs” (which typically negotiated multimillion-dollar licenses covering large portfolios of patents and raised their capital through institutional investors or manufacturing firms).

Furthermore, the FTC’s research was narrowly targeted, not broad-based.  The agency explained that “[o]f all the patents held by PAEs in the FTC’s study, 88% fell under the Computers & Communications or Other Electrical & Electronic technology categories, and more than 75% of the Study PAEs’ overall holdings were software-related patents.”  Consistent with the nature of this sample, the FTC concentrated primarily on a case study of PAE activity in the wireless chipset sector.  The case study revealed that PAEs were more likely to assert their patents through litigation than were wireless manufacturers, and that “30% of Portfolio PAE wireless patent licenses and nearly 90% of Litigation PAE wireless patent licenses resulted from litigation, while only 1% of Wireless Manufacturer wireless patent licenses resulted from litigation.”  But perhaps more striking than what the FTC found was what it did not uncover.  Due to data limitations, “[t]he FTC . . . [did not] attempt[] to determine if the royalties received by Study PAEs were higher or lower than those that the original assignees of the licensed patents could have earned.”  In addition, the case study did “not report how much revenue PAEs shared with others, including independent inventors, or the costs of assertion activity.”

Curiously, the PAE Study also leaped to certain conclusions regarding PAE settlements based on questionable assumptions, without considering legitimate potential incentives for such settlements.  Thus, for example, the FTC found it particularly significant that 77% of Litigation PAE settlements were for less than $300,000.  Why?  Because $300,000 was a “de facto benchmark” for nuisance litigation settlements, a benchmark based merely on one American Intellectual Property Law Association study claiming that defending a non-practicing entity patent lawsuit through the end of discovery costs between $300,000 and $2.5 million, depending on the amount in controversy.  In light of that one study, the FTC surmised “that discovery costs, and not the technological value of the patent, may set the benchmark for settlement value in Litigation PAE cases.”  Thus, according to the FTC, “the behavior of Litigation PAEs is consistent with nuisance litigation.”  As noted patent lawyer Gene Quinn has pointed out, however, the FTC ignored the eminently logical alternative possibility that many settlements for less than $300,000 merely represented reasonable valuations of the patent rights at issue.  Quinn pithily stated:

[T]he reality is the FTC doesn’t know enough about the industry to understand that $300,000 is an arbitrary line in the sand that holds no relevance in the real world. For the very same reason that they said the term “patent troll” is unhelpful (i.e., because it inappropriately discriminates against rights owners without understanding the business model and practices), so too is $300,000 equally unhelpful. Without any understanding or appreciation of the value of the core innovation subject to the license there is no way to know whether a license is being offered for nuisance value or whether it is being offered at full, fair and appropriate value to compensate the patent owner for the infringement they had to chase down in litigation.

I thought the FTC was charged with ensuring fair business practices? It seems what they are doing is radically discriminating against incremental innovations valued at less than $300,000 and actually encouraging patent owners to charge more for their licenses than they are worth so they don’t get labeled a nuisance. Talk about perverse incentives! The FTC should stick to areas where they have subject matter competence and leave these patent issues to the experts.     

In sum, the FTC found that in one particular specialized industry sector featuring a certain  category of patents (software patents), PAEs tended to sue more than manufacturers before agreeing to licensing terms – hardly a surprising finding or a sign of a problem.  (To the contrary, the existence of “substantial” PAE litigation that led to licenses might be a sign that PAEs were acting as efficient intermediaries representing the interests and effectively vindicating the rights of small patentees.)  The FTC was not, however, able to comment on the relative levels of royalties, the extent to which PAE revenues were distributed to inventors, or the costs of PAE litigation (as opposed to any other sort of litigation).  Additionally, the FTC made certain assumptions about certain PAE litigation settlements that ignored reasonable alternative explanations for the behavior that was observed.  Accordingly, the reasonable observer would conclude from this that the agency was (to say the least) in no position to make any sort of policy recommendations, given the absence of any hard evidence of PAE abuses or excessive waste from litigation.

Unfortunately, the reasonable observer would be mistaken.  The FTC recommended reforms to: (1) address discovery burden and “cost asymmetries” (the notion that PAEs are less subject to costly counterclaims because they are not producers) in PAE litigation; (2) provide the courts and defendants with more information about the plaintiffs that have filed infringement lawsuits; (3) streamline multiple cases brought against defendants on the same theories of infringement; and (4) provide sufficient notice of these infringement theories as courts continue to develop heightened pleading requirements for patent cases.

Without getting into the merits of these individual suggestions (and without in any way denigrating the hard work and dedication of the highly talented FTC staffers who drafted the PAE Study), it is sufficient to note that they bear no logical relationship to the factual findings of the report.  The recommendations, which closely echo certain elements of various “patent reform” legislative proposals that have been floated in recent years, could have been advanced before any data had been gathered – with savings to the companies that had to respond.  In short, the recommendations are classic pre-baked “solutions” to problems that have long been hypothesized.  Advancing such recommendations based on discrete information regarding a small, skewed sample of PAEs – without obtaining crucial information on the direct costs and benefits of the PAE transactions being observed, or the incentive effects of PAE activity – is at odds with the FTC’s proud tradition of empirical research.  Unfortunately, Devin Hartline of the Antonin Scalia Law School proved prescient when commenting last April on the possible problems with the PAE Report, based on what was known about it prior to its release (and based on the preliminary thoughts of noted economists and law professors):

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general.  The study is simply not designed to do this.  It instead is a fact-finding mission, the results of which could guide future missions.  Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected.  And it’s crucial not to draw policy conclusions from it.  Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

To the extent patent reform is warranted, it should be considered carefully in a measured fashion, with full consideration given to the costs, benefits, and potential unintended consequences of suggested changes to the patent system and to litigation procedures.  As John Malcolm and I explained in a 2015 Heritage Foundation Legal Backgrounder which explored the relative merits of individual proposed reforms:

Before deciding to take action, Congress should weigh the particular merits of individual reform proposals carefully and meticulously, taking into account their possible harmful effects as well as their intended benefits. Precipitous, unreflective action on legislation is unwarranted, and caution should be the byword, especially since the effects of 2011 legislative changes and recent Supreme Court decisions have not yet been fully absorbed. Taking time is key to avoiding the serious and costly errors that too often are the fruit of omnibus legislative efforts.

Notably, this Legal Backgrounder also noted potential beneficial aspects of PAE activity that were not reflected in the PAE Study:

[E]ven entities whose business model relies on purchasing patents and licensing them or suing those who refuse to enter into licensing agreements and infringe those patents can serve a useful—even a vital—purpose. Some infringers may be large companies that infringe the patents of smaller companies or individual inventors, banking on the fact that such a small-time inventor will be less likely to file a lawsuit against a well-financed entity. Patent aggregators, often backed by well-heeled investors, help to level the playing field and can prevent such abuses.

More important, patent aggregators facilitate an efficient division of labor between inventors and those who wish to use those inventions for the betterment of their fellow man, allowing inventors to spend their time doing what they do best: inventing. Patent aggregators can expand access to patent pools that allow third parties to deal with one vendor instead of many, provide much-needed capital to inventors, and lead to a variety of licensing and sublicensing agreements that create and reflect a valuable and vibrant marketplace for patent holders and provide the kinds of incentives that spur innovation. They can also aggregate patents for litigation purposes, purchasing patents and licensing them in bundles.

This has at least two advantages: It can reduce the transaction costs for licensing multiple patents, and it can help to outsource and centralize patent litigation for multiple patent holders, thereby decreasing the costs associated with such litigation. In the copyright space, the American Society of Composers, Authors, and Publishers (ASCAP) plays a similar role.

All of this is to say that there can be good patent assertion entities that seek licensing agreements and file claims to enforce legitimate patents and bad patent assertion entities that purchase broad and vague patents and make absurd demands to extort license payments or settlements. The proper way to address patent trolls, therefore, is by using the same means and methods that would likely work against ambulance chasers or other bad actors who exist in other areas of the law, such as medical malpractice, securities fraud, and product liability—individuals who gin up or grossly exaggerate alleged injuries and then make unreasonable demands to extort settlements up to and including filing frivolous lawsuits.

In conclusion, the FTC would be well advised to avoid putting forth patent reform recommendations based on the findings of the PAE Study.  At the very least, it should explicitly weigh the implications of other research, which explores PAE-related efficiencies and considers all the ramifications of procedural and patent law changes, before seeking to advance any “PAE reform” recommendations.

On October 6, the Heritage Foundation released a legal memorandum (authored by me) that recounts the Federal Communications Commission’s (FCC) recent sad history of ignoring the rule of law in its enforcement and regulatory actions.  The memorandum calls for a legislative reform agenda to rectify this problem by reining in the agency.  Key points culled from the memorandum are highlighted below (footnotes omitted).

1.  Background: The Rule of Law

The American concept of the rule of law is embodied in the Due Process Clause of the Fifth Amendment to the U.S. Constitution and in the constitutional principles of separation of powers, an independent judiciary, a government under law, and equality of all before the law.  As the late Friedrich Hayek explained:

[The rule of law] means the government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to see with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.

In other words, the rule of law involves a system of binding rules that have been adopted and applied by a valid government authority and that embody clarity, predictability, and equal applicability.  Practices employed by government agencies that undermine the rule of law ignore a fundamental duty that the government owes its citizens and thereby weaken America’s constitutional system.  It follows that close scrutiny of federal administrative agencies’ activities is particularly important in helping to achieve public accountability for an agency’s failure to honor the rule of law standard.

2.  How the FCC Flouts the Rule of Law

Applying such scrutiny to the FCC reveals that it does a poor job in adhering to rule of law principles, both in its procedural practices and in various substantive actions that it has taken.

Opaque procedures that generate uncertainties regarding agency plans undermine the clarity and predictability of agency actions and thereby weaken the effectiveness of rule of law safeguards.  Process-based reforms designed to deal with these problems, to the extent that they succeed, strengthen the rule of law.  Procedural inadequacies at the FCC include inordinate delays and a lack of transparency, including the failure to promptly release the text of proposed and final rules.  The FCC itself has admitted that procedural improvements are needed, and legislative proposals have been advanced to make the Commission more transparent, efficient, and accountable.

Nevertheless, mere procedural reforms would not address the far more serious problem of FCC substantive actions that flout the rule of law.  Examples abound:

  • The FCC imposes a variety of “public interest” conditions on proposed mergers subject to its jurisdiction. Those conditions often are announced after inordinate delays, and typically have no bearing on the mergers’ actual effects.  The unpredictable nature and timing of such impositions generate a lack of certainty for businesses and thereby undermine the rule of law.
  • The FCC’s 2015 Municipal Broadband Order preempted state laws in Tennessee and North Carolina that prevented municipally owned broadband providers from providing broadband service beyond their geographic boundaries. Apart from its substantive inadequacies, this Order went beyond the FCC’s statutory authority and raised grave federalism problems (by interfering with a state’s sovereign right to oversee its municipalities), thereby ignoring the constitutional limitations placed on the exercise of governmental powers that lie at the heart of the rule of law.  The Order was struck down by the U.S. Court of Appeals for the Sixth Circuit in August 2016.
  • The FCC’s 2015 “net neutrality” rule (the Open Internet Order) subjects internet service providers (ISPs) to sweeping “reasonableness-based” FCC regulatory oversight. This “reasonableness” standard gives the FCC virtually unbounded discretion to impose sanctions on ISPs.  It does not provide, in advance, a knowable, predictable rule consistent with due process and rule of law norms.  In the dynamic and fast-changing “Internet ecosystem,” this lack of predictable guidance is a major drag on innovation.  Regrettably, in June 2016, a panel of the U.S. Court of Appeals for the District of Columbia, by a two-to-one vote, rejected a challenge to the order brought by ISPs and their trade association.
  • The FCC’s abrupt 2014 extension of its long-standing rules restricting common ownership of local television broadcast stations to encompass Joint Sales Agreements (JSAs) likewise undermined the rule of law. JSAs, which allow one television station to sell advertising (but not programming) on another station, have long been used by stations that had no reason to believe that their actions in any way constituted illegal “ownership interests,” especially since many of them were originally approved by the FCC.  The U.S. Court of Appeals for the Third Circuit wisely vacated the television JSA rule in May 2016, stressing that the FCC had violated a statutory command by failing to carry out in a timely fashion the quadrennial review of the television ownership rules on which the JSA rule was based.
  • The FCC’s February 2016 proposed rules, designed to “open” the market for video set-top boxes, appear to fly in the face of federal laws and treaty language protecting intellectual property rights, by arbitrarily denying protection to intellectual property based solely on a particular mode of information transmission. Such a denial is repugnant to rule of law principles.
  • FCC enforcement practices also show a lack of respect for rule of law principles, by seeking to obtain sanctions against behavior that has never been deemed contrary to law or regulatory edicts. Two examples illustrate this point.
    • In 2014, the FCC’s Enforcement Bureau proposed imposing a $10 million fine on TerraCom, Inc., and YourTelAmerica, Inc., two small telephone companies, for a data breach that exposed certain personally identifiable information to unauthorized access. In so doing, the FCC cited provisions of the Telecommunications Act of 1996 and accompanying regulations that had never been construed to authorize sanctions for failure to adopt “reasonable data security practices” to protect sensitive consumer information.
    • In November 2015, the FCC similarly imposed a $595,000 fine on Cox Communications for failure to prevent a data breach committed by a third-party hacker, although no statutory or regulatory language supported imposing any penalty on a firm that was itself victimized by a hack attack.

3.  Legislative Reforms to Rein in the FCC

What is to be done?  One sure way to limit an agency’s ability to flout the rule of law is to restrict the scope of its legal authority.  As a matter of first principles, Congress should therefore examine the FCC’s activities with an eye to eliminating its jurisdiction over areas in which regulation is no longer needed.  For example, residual price regulation may be unnecessary in all markets where competition is effective. Regulation is called for only in the presence of serious market failure, coupled with strong evidence that government intervention will yield a better economic outcome than will a decision not to regulate.

Congress should craft legislation designed to sharply restrict the FCC’s ability to flout the rule of law.  At a minimum, no matter how it decides to pursue broad FCC reform, the following five proposals merit special congressional attention as a means of advancing rule of law principles:

  • Eliminate the FCC’s jurisdiction over all mergers. The federal antitrust agencies are best equipped to handle merger analysis, and this source of costly delay and uncertainty regarding ad hoc restrictive conditions should be eliminated.
  • Eliminate the FCC’s jurisdiction over broadband Internet service. Given the benefits associated with an open and unregulated Internet, Congress should provide clearly and unequivocally that the FCC has no jurisdiction, direct or indirect, in this area.
  • Shift FCC regulatory authority over broadband-related consumer protection (including, for example, deceptive advertising, privacy, and data protection) and competition to the Federal Trade Commission, which has longstanding experience and expertise in the area. This jurisdictional transfer would promote clarity and reduce uncertainty, thereby strengthening the rule of law.
  • Require that before taking regulatory action, the FCC carefully scrutinize regulatory language to seek to avoid the sorts of rule of law problems that have plagued prior commission rulemakings.
  • Require that the FCC not seek fines in an enforcement action unless the alleged infraction involves a violation of the precise language of a regulation or statutory provision.

4.  Conclusion

In recent years, the FCC too often has acted in a manner that undermines the rule of law. Internal agency reforms might be somewhat helpful in rectifying this situation, but they inevitably would be limited in scope and inherently malleable as FCC personnel changes. Accordingly, Congress should weigh major statutory reforms to rein in the FCC—reforms that will advance the rule of law and promote American economic well-being.

On September 28, the American Antitrust Institute released a report (“AAI Report”) on the state of U.S. antitrust policy, provocatively entitled “A National Competition Policy:  Unpacking the Problem of Declining Competition and Setting Priorities for Moving Forward.”  Although the AAI Report contains some valuable suggestions, in important ways it reminds one of the drunkard who seeks his (or her) lost key under the nearest lamppost.  What it requires is greater sobriety and a broader vision of the problems that beset the American economy.

The AAI Report begins by asserting that “[n]ot since the first federal antitrust law was enacted over 120 years ago has there been the level of public concern over the concentration of economic and political power that we see today.”  Well, maybe, although I for one am not convinced.  The paper then states that “competition is now on the front pages, as concerns over rising concentration, extraordinary profits accruing to the top slice of corporations, slowing innovation, and widening income and wealth inequality have galvanized attention.”  It then goes on to call for a more aggressive federal antitrust enforcement policy, with particular attention paid to concentrated markets.  The implicit message is that dedicated antitrust enforcers during the Obama Administration, led by Federal Trade Commission Chairs Jonathan Leibowitz and Edith Ramirez, and Antitrust Division chiefs Christine Varney, Bill Baer, and Renata Hesse (Acting), have been laggard or asleep at the switch.  But where is the evidence for this?  I am unaware of any, and the AAI doesn’t say.  Indeed, federal antitrust officials in the Obama Administration consistently have called for tough enforcement, and they have actively pursued vertical as well as horizontal conduct cases and novel theories of IP-antitrust liability.  Thus, the AAI Report’s contention that antitrust needs to be “reinvigorated” is unconvincing.

The AAI Report highlights three “symptoms” of declining competition:  (1) rising concentration, (2) higher profits to the few and slowing rates of start-up activity, and (3) widening income and wealth inequality.  But these concerns are not something that antitrust policy is designed to address.  Mergers that threaten to harm competition are within the purview of antitrust, but modern antitrust rightly focuses on the likely effects of such mergers, not on the mere fact that they may increase concentration.  Furthermore, antitrust assesses the effects of business agreements on the competitive process.  Antitrust does not ask whether business arrangements yield “unacceptably” high profits, or “overly low” rates of business formation, or “unacceptable” wealth and income inequality.  Indeed, antitrust is not well equipped to address such questions, nor does it possess the tools to “solve” them (even assuming they need to be solved).

In short, if American competition is indeed declining based on the symptoms flagged by the AAI Report, the key to the solution will not be found by searching under the antitrust policy lamppost for illumination.  Rather, a more thorough search, with the help of “common sense” flashlights, is warranted.

The search outside the antitrust spotlight is not, however, a difficult one.  Finding the explanation for lagging competitive conditions in the United States requires no great policy legerdemain, because sound published research already provides the answer.  And that answer centers on government failures, not private sector abuses.

Consider overregulation.  In its annual Red Tape Rising reports (see here for the latest one), the Heritage Foundation has documented the growing burden of federal regulation on the American economy.  Overregulation acts like an implicit tax on businesses and disincentivizes business start-ups.  Moreover, as regulatory requirements grow in complexity and burdensomeness, they increasingly place a premium on large size – larger businesses can better afford the fixed costs needed to establish regulatory compliance departments than can their smaller rivals.  Heritage Foundation Scholar Norbert Michel summarizes this phenomenon in his article Dodd-Frank and Glass-Steagall – ‘Consumer Protection for Billionaires’:

Even when it’s not by nefarious design, we end up with rules that favor the largest/best-funded firms over their smaller/less-well-funded competitors. Put differently, our massive regulatory state ends up keeping large firms’ competitors at bay.  The more detailed regulators try to be, the more complex the rules become. And the more complex the rules become, the smaller the number of people who really care. Hence, more complicated rules and regulations serve to protect existing firms from competition more than simple ones. All of this means consumers lose. They pay higher prices, they have fewer choices of financial products and services, and they pretty much end up with the same level of protection they’d have with a smaller regulatory state.

What’s worse, some of the most onerous regulatory schemes are explicitly designed to favor large competitors over small ones.  A prime example is financial services regulation, and, in particular, the rules adopted pursuant to the 2010 Dodd-Frank Act (other examples could readily be provided).  As a Heritage Foundation report explains (footnote citations omitted):

The [Dodd-Frank] act was largely intended to reduce the risk of a major bank failure, but the regulatory burden is crippling community banks (which played little role in the financial crisis). According to Harvard University researchers Marshall Lux and Robert Greene, small banks’ share of U.S. commercial banking assets declined nearly twice as much since the second quarter of 2010—around the time of Dodd–Frank’s passage—as occurred between 2006 and 2010. Their share currently stands at just 22 percent, down from 41 percent in 1994.

The increased consolidation rate is driven by regulatory economies of scale—larger banks are better suited to handle increased regulatory burdens than are smaller banks, causing the average costs of community banks to rise. The decline in small bank assets spells trouble for their primary customer base—small business loans and those seeking residential mortgages.

Ironically, Dodd–Frank proponents pushed for the law as necessary to rein in the big banks and Wall Street. In fact, the regulations are giving the largest companies a competitive advantage over smaller enterprises—the opposite outcome sought by Senator Christopher Dodd (D–CT), Representative Barney Frank (D–MA), and their allies. As Goldman Sachs CEO Lloyd Blankfein recently explained: “More intense regulatory and technology requirements have raised the barriers to entry higher than at any other time in modern history. This is an expensive business to be in, if you don’t have the market share in scale.”

In sum, as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large, wealthy, and well-connected rent-seekers at the expense of smaller and more dynamic competitors.

More generally, as Heritage Foundation President Jim DeMint and Heritage Action for America CEO Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts and complex regulations, including special tax preferences. Those special preferences undermine competition on the merits by firms that lack insider status, to the public detriment.  Relatedly, the hideously complex system of American business taxation, which features the highest corporate tax rates in the developed world and which can more readily be manipulated by very large corporate players, depresses wages and is a serious drag on the American economy, as shown by Heritage Foundation scholars Curtis Dubay and David Burton.  In a similar vein, David Burton testified before Congress in 2015 on how the various excesses of the American regulatory state (including bad tax, health care, immigration, and other regulatory policies, combined with an overly costly legal system) undermine U.S. entrepreneurship (see here).

In other words, special subsidies, regulations, and tax and regulatory programs for the well-connected are part and parcel of crony capitalism, which (1) favors large businesses, tending to raise concentration; (2) confers higher profits on the well-connected while discouraging small business entrepreneurship; and (3) promotes income and wealth inequality, with the greatest returns going to the wealthiest government cronies who know best how to play the Washington “rent seeking game.”  Unfortunately, crony capitalism has grown like Topsy during the Obama Administration.

Accordingly, I would counsel AAI to turn its scholarly gaze away from antitrust and toward the true source of the American competitive ailments it spotlights:  crony capitalism enabled by the growth of big government special interest programs and increasingly costly regulatory schemes.  Let’s see if AAI takes my advice.

There must have been a great gnashing of teeth in Chairman Wheeler’s office this morning as the FCC announced that it was pulling the Chairman’s latest modifications to the set-top box proposal from its voting agenda. This is surely but a bump in the road for the Chairman; he will undoubtedly press ever onward in his quest to “fix” a market that is flooded with competition and consumer choice. But as we stop to take a breath while this latest FCC adventure is paused, there is a larger issue worth considering: the lack of transparency at the FCC.

Although the Commission has an unfortunate tradition of non-disclosure surrounding many of its regulatory proposals, the problem has seemingly been exacerbated by Chairman Wheeler’s aggressive agenda and his intransigence in the face of overwhelming and rigorous criticism.

Perhaps nowhere was this attitude more apparent than with his handling of the Open Internet Order, which was plagued with enough process problems to elicit a call for a delay of the Commission’s vote on the initial rules from Democratic Commissioner Rosenworcel, and a strong rebuke from the Chairman of the House Oversight Committee prior to the Commission’s vote on the final rules (which were not disclosed to the public until after the vote).

But the same cavalier dismissal of public and stakeholder input has plagued the Chairman’s beleaguered set-top box proposal, as well.

As Commissioner Pai noted before Congress in March:

The FCC continues to choose opacity over transparency. The decisions we make impact hundreds of millions of Americans and thousands of small businesses. And yet to the public, to Congress, and even to the Commissioners at the FCC, the agency’s work remains a black box.

Take this simple proposition: The public should be able to see what we’re voting on before we vote on it. That’s how Congress works, as you know. Anyone can look up any pending bill right now by going to And that’s how many state commissions work too. But not the FCC.

Exhibit A in Commissioner Pai’s lament was the set-top box proceeding:

Instead, the public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Now, although the Chairman’s initial proposal was eventually released, we have only a fact sheet and an op-ed by Chairman Wheeler on which to judge the purportedly substantial changes embodied in his latest version.

Even Democrats in Congress have recognized the process problems that have plagued this proceeding. As Senator Feinstein (D-CA) urged in a recent letter to Chairman Wheeler:

Given the significance of this proceeding, I ask that you make public the new proposal under consideration by the Commission, so that all interested stakeholders, members of Congress, copyright experts, and others can comment on the potential copyright implications of the new proposal before the Commission votes on it.

And as Senator Heller (R-NV) wrote in a letter to Chairman Wheeler this week:

I believe it is unacceptable that the FCC has not released the text of this proposal before Thursday’s vote. A three-page fact sheet does not provide enough details for Congress to conduct proper oversight of this rulemaking that will significantly impact both consumers and industry…. I encourage you to release the text immediately so that the American public has a full understanding of what is being considered by the Commission….

Of course, this isn’t a new problem at the FCC. In fact, before he supported Chairman Wheeler’s efforts to impose Open Internet rules without sufficient public disclosure, then-Senator Obama decried then-Chairman Martin’s efforts to enact new media ownership rules with insufficient process in 2007:

Repealing the cross ownership rules and retaining the rest of our existing regulations is not a proposal that has been put out for public comment; the proper process for vetting it is not in closed door meetings with lobbyists or in selective leaks to the New York Times.

Although such a proposal may pass the muster of a federal court, Congress and the public have the right to review any specific proposal and decide whether or not it constitutes sound policy. And the Commission has the responsibility to defend any new proposal in public discourse and debate.

And although you won’t find them complaining this time (because this time they want the excessive intervention that the NPRM seems to contemplate), regulatory advocates lamented just exactly this sort of secrecy at the Commission when Chairman Genachowski proposed his media ownership rules in 2012. At that time Free Press angrily wrote:

[T]he Commission still has not made public its actual media ownership order…. Furthermore, it’s disingenuous for the FCC to suggest that its process now is more transparent than the one former Chairman Martin used to adopt similar rules. Genachowski’s FCC has yet to publish any details of its final proposal, offering only vague snippets in press releases… despite the president’s instruction to rulemaking agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.

As Free Press noted, President Obama did indeed instruct “agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.” In his Memorandum on Transparency and Open Government, his first executive action, the president urged that:

Public engagement enhances the Government’s effectiveness and improves the quality of its decisions. Knowledge is widely dispersed in society, and public officials benefit from having access to that dispersed knowledge. Executive departments and agencies should offer Americans increased opportunities to participate in policymaking and to provide their Government with the benefits of their collective expertise and information.

The resulting Open Government Directive calls on executive agencies to

take prompt steps to expand access to information by making it available online in open formats. With respect to information, the presumption shall be in favor of openness….

The FCC is not an “executive agency,” and so is not directly subject to the Directive. But the Chairman’s willingness to stray so far from basic principles of transparency is woefully inconsistent with the basic principles of good government and the ideals of heightened transparency claimed by this administration.

This week, the International Center for Law & Economics filed comments  on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-Antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive,  we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license, we write:

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before going on to observe that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

Public comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines have, not surprisingly, focused primarily on fine points of antitrust analysis carried out by those two federal agencies (see, for example, the thoughtful recommendations by the Global Antitrust Institute, here).  In a September 23 submission to the FTC and the DOJ, however, U.S. International Trade Commissioner F. Scott Kieff focused on a broader theme – that patent-antitrust assessments should keep in mind the indirect effects on commercialization that stem from IP (and, in particular, patents).  Kieff argues that antitrust enforcers have employed a public law “rules-based” approach that balances the “incentive to innovate” created when patents prevent copying against the goals of competition.  In contrast, Kieff characterizes the commercialization approach as rooted in the property rights nature of patents and the use of private contracting to bring together complementary assets and facilitate coordination.  As Kieff explains (in italics, footnote citations deleted):

A commercialization approach to IP views IP more in the tradition of private law, rather than public law. It does so by placing greater emphasis on viewing IP as property rights, which in turn is accomplished by greater reliance on interactions among private parties over or around those property rights, including via contracts. Centered on the relationships among private parties, this approach to IP emphasizes a different target and a different mechanism by which IP can operate. Rather than target particular individuals who are likely to respond to IP as incentives to create or invent in particular, this approach targets a broad, diverse set of market actors in general; and it does so indirectly. This broad set of indirectly targeted actors encompasses the creator or inventor of the underlying IP asset as well as all those complementary users of a creation or an invention who can help bring it to market, such as investors (including venture capitalists), entrepreneurs, managers, marketers, developers, laborers, and owners of other key assets, tangible and intangible, including other creations or inventions. Another key difference in this approach to IP lies in the mechanism by which these private actors interact over and around IP assets. This approach sees IP rights as tools for facilitating coordination among these diverse private actors, in furtherance of their own private interests in commercializing the creation or invention.

This commercialization approach sees property rights in IP serving a role akin to beacons in the dark, drawing to themselves all of those potential complementary users of the IP-protected-asset to interact with the IP owner and each other. This helps them each explore through the bargaining process the possibility of striking contracts with each other.

Several payoffs can flow from using this commercialization approach. Focusing on such a beacon-and-bargain effect can relieve the governmental side of the IP system of the need to amass the detailed information required to reasonably tailor a direct targeted incentive, such as each actor’s relative interests and contributions, needs, skills, or the like. Not only is amassing all of that information hard for the government to do, but large, established market actors may be better able than smaller market entrants to wield the political influence needed to get the government to act, increasing risk of concerns about political economy, public choice, and fairness. Instead, when governmental bodies closely adhere to a commercialization approach, each private party can bring its own expertise and other assets to the negotiating table while knowing—without necessarily having to reveal to other parties or the government—enough about its own level of interest and capability when it decides whether to strike a deal or not.            

Such successful coordination may help bring new business models, products, and services to market, thereby decreasing anticompetitive concentration of market power. It also can allow IP owners and their contracting parties to appropriate the returns to any of the rival inputs they invested towards developing and commercializing creations or inventions—labor, lab space, capital, and the like. At the same time, the government can avoid having to then go back to evaluate and trace the actual relative contributions that each participant brought to a creation’s or an invention’s successful commercialization—including, again, the cost of obtaining and using that information and the associated risks of political influence—by enforcing the terms of the contracts these parties strike with each other to allocate any value resulting from the creation’s or invention’s commercialization. In addition, significant economic theory and empirical evidence suggests this can all happen while the quality-adjusted prices paid by many end users actually decline and public access is high. In keeping with this commercialization approach, patents can be important antimonopoly devices, helping a smaller “David” come to market and compete against a larger “Goliath.”

A commercialization approach thereby mitigates many of the challenges raised by the tension that is a focus of the other intellectual approaches to IP, as well as by the responses these other approaches have offered to that tension, including some – but not all – types of AT regulation and enforcement. Many of the alternatives to IP that are often suggested by other approaches to IP, such as rewards, tax credits, or detailed rate regulation of royalties by AT enforcers can face significant challenges in facilitating the private sector coordination benefits envisioned by the commercialization approach to IP. While such approaches often are motivated by concerns about rising prices paid by consumers and direct benefits paid to creators and inventors, they may not account for the important cases in which IP rights are associated with declines in quality-adjusted prices paid by consumers and other forms of commercial benefits accrued to the entire IP production team as well as to consumers and third parties, which are emphasized in a commercialization approach. In addition, a commercialization approach can embrace many of the practical checks on the market power of an IP right that are often suggested by other approaches to IP, such as AT review, government takings, and compulsory licensing. At the same time this approach can show the importance of maintaining self-limiting principles within each such check to maintain commercialization benefits and mitigate concerns about dynamic efficiency, public choice, fairness, and the like.

To be sure, a focus on commercialization does not ignore creators or inventors or creations or inventions themselves. For example, a system successful in commercializing inventions can have the collateral benefit of providing positive incentives to those who do invent through the possibility of sharing in the many rewards associated with successful commercialization. Nor does a focus on commercialization guarantee that IP rights cause more help than harm. Significant theoretical and empirical questions remain open about benefits and costs of each approach to IP. And significant room to operate can remain for AT enforcers pursuing their important public mission, including at the IP-AT interface.

Commissioner Kieff’s evaluation is in harmony with other recent scholarly work, including Professor Dan Spulber’s explanation that the actual nature of long-term private contracting arrangements among patent licensors and licensees avoids alleged competitive “imperfections,” such as harmful “patent hold-ups,” “patent thickets,” and “royalty stacking” (see my discussion here).  More generally, Commissioner Kieff’s latest pronouncement is part of a broader and growing theoretical and empirical literature that demonstrates close associations between strong patent systems and economic growth and innovation (see, for example, here).

There is a major lesson here for U.S. (and foreign) antitrust enforcement agencies.  As I have previously pointed out (see, for example, here), in recent years, antitrust enforcers here and abroad have taken positions that tend to weaken patent rights.  Those positions typically are justified by the existence of “patent policy deficiencies” such as those that Professor Spulber’s paper debunks, as well as an alleged epidemic of low quality “probabilistic patents” (see, for example, here) – justifications that ignore the substantial economic benefits patents confer on society through contracting and commercialization.  It is high time for antitrust to accommodate the insights drawn from this new learning.  Specifically, government enforcers should change their approach and begin incorporating private law/contracting/commercialization considerations into patent-antitrust analysis, in order to advance the core goals of antitrust – the promotion of consumer welfare and efficiency.  Better yet, if the FTC and DOJ truly want to maximize the net welfare benefits of antitrust, they should undertake a more general “policy reboot” and adopt a “decision-theoretic” error cost approach to enforcement policy, rooted in cost-benefit analysis (see here) and consistent with the general thrust of Roberts Court antitrust jurisprudence (see here).

In a September 20 speech at the high profile Georgetown Global Antitrust Enforcement Symposium, Acting Assistant Attorney General Renata Hesse sent the wrong signals to the business community and to foreign enforcers (see here) regarding U.S. antitrust policy.  Admittedly, a substantial part of her speech was a summary of existing U.S. antitrust doctrine.  In certain other key respects, however, Ms. Hesse’s remarks could be read as a rejection of the mainstream American understanding (and the accepted approach endorsed by the International Competition Network) that promoting economic efficiency and consumer welfare are the antitrust lodestar, and that non-economic considerations should not be part of antitrust analysis.  Because foreign lawyers, practitioners, and enforcement officials were present, Ms. Hesse’s statement not only could be cited against U.S. interests in foreign venues, it could undermine longstanding efforts to advance international convergence toward economically sound antitrust rules.

Let’s examine some specifics.

Ms. Hesse’s speech begins with a paean to “economic fairness” – a theme that runs counter to the message leading federal antitrust enforcers have consistently stressed for decades, namely, that antitrust seeks to advance the economic goal of consumer welfare (and efficiency).  Consider this passage (emphasis added):

[E]nforcers [should be] focused on the ultimate goal of antitrust, economic fairness. . . .    The conservative leaning “Chicago School” made economic efficiency synonymous with the goals of antitrust in the 1970s, which incorporated theoretical economics into mainstream antitrust scholarship and practice.  Later, more centrist or left-leaning post-Chicago and Harvard School scholars showed that sophisticated empirical and theoretical economics tools can be used to support more aggressive enforcement agendas.  Together, these developments resulted in many technical discussions about what impact a business practice will have on consumer welfare mathematically measured – involving supply and demand curves, triangles representing “dead weight loss,” and so on.   But that sort of conversation is one that resonates very little – if at all – with those engaged in the straightforward, popular dialogue about the dangers of increasing corporate concentration.  The language of economic theory does not sound like the language of economic fairness that is the raw material for most popular discussions about competition and antitrust.      

Unfortunately, Ms. Hesse’s references to the importance of “fairness” recur throughout her remarks, driving home again and again that fairness is a principle that should play a key role in antitrust enforcement.  Yet fairness is an inherently subjective concept (fairness for whom, and measured by what standard?) that was often invoked in notorious and illogical U.S. Supreme Court decisions of days of yore – decisions that were rightly critiqued by leading scholars and largely confined to the dustbin of bad precedents, starting in the mid-1970s.

Equally bad are the speech’s multiple references to “high concentration” and “bigness,” unfortunate terms that also cropped up in economically irrational pre-1970s Supreme Court antitrust opinions.  Scholarship demonstrating that neither high market concentration nor large corporate size is necessarily associated with poor economic performance is generally accepted, and the core teaching that “bigness” is not “badness” is a staple of undergraduate industrial organization classes and introductory antitrust law courses in the United States.  Admittedly, the speech also recognizes that bigness and high concentration are not necessarily harmful, but merely by giving lip service to these concepts, it encourages interventionists and foreign enforcers who are seeking additional justifications for antitrust crusades against “big” and “powerful” companies (more on this point later).

Perhaps the most unfortunate passage in the speech is Ms. Hesse’s defense of the Supreme Court’s “Philadelphia National Bank” (1963) (“PNB”) presumption that “a merger which produces a firm controlling an undue percentage share of the relevant market, and results in a significant increase in the concentration of firms in that market is so inherently likely to lessen competition substantially” that the law will presume it unlawful.  The PNB presumption is a discredited historical relic, an antitrust “oldie but baddy” that sound scholarship has shown should be relegated to the antitrust scrap heap.  Professor Joshua Wright and Judge Douglas Ginsburg explained why the presumption should be scrapped in a 2015 Antitrust Law Journal article:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. The problem for today’s courts in applying this semicentenary standard is that the field of industrial organization economics has long since moved beyond the structural presumption upon which the standard is based. That presumption is almost the last vestige of pre-modern economics still embedded in the antitrust law of the United States. Even the 2010 Horizontal Merger Guidelines issued jointly by the Federal Trade Commission and the Antitrust Division of the Department of Justice have abandoned the . . . presumption, though the agencies certainly do not resist the temptation to rely upon the presumption when litigating a case. There is no doubt the . . . presumption of PNB is a convenient litigation tool for the enforcement agencies, but the mission of the enforcement agencies is consumer welfare, not cheap victories in litigation. The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.  

Ms. Hesse ignored this reasoned analysis in commenting on the PNB presumption:

[I]n the wake of the Chicago School’s influence, antitrust commentators started to call into question the validity of this common-sense presumption, believing that economic theory showed that mergers tended to be beneficial or, if they resulted in harm, that harm was fleeting.  Those skeptics demanded more detailed proof of consumer harm in place of the presumption.  More recent economics studies, however, have given new life to the old presumption—in several ways.  First, we are learning more and more that mergers among substantial competitors tend to lead to higher prices. [citation omitted]  Second, economists have been finding that mergers often fail to deliver on the gains their proponents sought to achieve. [citation omitted] Taking these insights together, we should be skeptical of the claim that mergers among substantial competitors are beneficial.  The law – which builds this skepticism into it – provides an excellent tool for protecting competition from large, horizontal mergers.

Ms. Hesse’s discussion of the PNB presumption is problematic on several counts.  First, it cites one 2014 study that purports to find price increases following certain mergers in some oligopolistic industries as supporting the presumption, without acknowledging a key critique of that study – that it ignores efficiencies and potential gains in producer welfare (see here).  Second, it cites one 2001 study suggesting that financial performance may not be enhanced by some mergers, while ignoring other studies to the contrary (see, for example, here and here).  Third, and most fundamentally, Ms. Hesse’s statement that “we should be skeptical of the claim that mergers among substantial competitors are beneficial” misses the point of antitrust enforcement entirely and, in so doing, could be read as discouraging efficiency-seeking acquisitions.  It is not the role of antitrust enforcement to make merging parties prove that their proposed transaction will be beneficial – rather, enforcers must prove that a proposed transaction’s effect “may be substantially to lessen competition,” as Section 7 of the Clayton Act puts it.  Requiring “proof” that a merger between competitors “will be beneficial,” in response to a negative presumption, strongly discourages potential efficiency-seeking consolidations, to the detriment of economic growth and welfare.  That was the case in the 1960s, and it could become so again today if U.S. antitrust enforcers embark on a concerted campaign of touting the PNB presumption.  Relatedly, an efficient market for corporate control (involving the strong potential of acquisitions to achieve synergies or to correct management problems in badly run targets) is chilled when a presumption blocks acquisitions absent “proof” of future benefit.
Apart from these technical points, the PNB presumption in effect grants a government bureaucracy (exercising “the pretense of knowledge”) the right to condemn voluntary commercial transactions of a particular sort (horizontal mergers) that have not been shown to be harmful.  Such a grant of authority ignores the superior ability of information-seeking market participants to uncover and apply knowledge (as the late Friedrich Hayek might have pointed out) and is fundamentally at odds with the system of voluntary exchange that lies at the heart of a successful market economy.

Another highly problematic statement is Ms. Hesse’s discussion of the Federal Trade Commission’s (FTC) final 2010 Intel settlement:

The Federal Trade Commission’s case against Intel a decade later . . . shows how dominant firms can cut off the normal mechanisms of competition to maintain dominance.  In that case, the FTC alleged that Intel violated Section 5 of the FTC Act by maintaining its monopoly in central processing units (or CPUs) through a variety of payments and penalties (including loyalty or market-share discounts) to computer manufacturers to induce them not to purchase products from Intel’s rivals such as AMD and Via Technologies. [citation omitted]  When a monopolist pays customers to disfavor its rivals and punishes those customers who nevertheless do business with a rival, that does not look like the monopolist is competing with its rivals on the merits of their products.  Because these actions served only to foreclose competition from rival producers of CPUs, these actions distorted the competitive process.

Ms. Hesse ignores the fact that Intel involved a settlement, not a final litigated decision, and thus lacks precedential weight.  Firms that believe their conduct was perfectly legal may nevertheless settle an FTC investigation if they deem the costs (including harm to reputation) of continued litigation to outweigh the costs of the settlement’s terms.  Furthermore, various learned commentators (such as Professor and then-FTC Commissioner Joshua Wright, see here) have pointed out that Intel’s discounts had tangible procompetitive effects and that there was a lack of evidence that Intel’s conduct harmed consumers or competitors (indeed, AMD, Intel’s principal competitor, continued to thrive during the period of Intel’s alleged “bad” behavior).  In short, Ms. Hesse’s conclusion that Intel’s actions “served only to foreclose competition from rival producers of CPUs” lacks credibility.  Moreover, Ms. Hesse’s reference to illegal “monopoly maintenance,” a Sherman Antitrust Act monopolization term of art, fails to note that the FTC stressed that the Intel case was brought purely under Section 5 of the FTC Act, “which is broader than the antitrust laws.”

Finally, the speech’s concluding section ends on a discordant note.  In summing up what she deemed to be an appropriate, up-to-date approach to antitrust litigation, Ms. Hesse reemphasizes the “fairness” theme, making such statements as “ultimately the plaintiff’s story should highlight the moral underpinnings of the antitrust laws—fighting against the unfairness of concentrated economic power” and “attempts to obtain or keep economic power unfairly”.  While such statements might be rationalized as having been made in the context of promoting a “non-technical” appreciation for antitrust by the general public, the emphasis on fairness as a rhetorical device in lieu of palpable economic harm and consumer welfare is quite troublesome.

On the domestic front, that emphasis may not have a direct impact on the exercise of prosecutorial discretion and on American judicial precedents in the short run (at least one hopes so).  In the longer run, however, it cuts against efforts to constrain populist impulses that would transform antitrust once again into an unguided missile aimed at the heart of the American market system.

On the international front, things are even worse.  A variety of major jurisdictions make explicit reference to “fairness” in their competition law statutes and decisions.  Foreign officials with a strongly interventionist bent might well cite Ms. Hesse’s speech to justify expansive and economically untethered “fairness-based” competition law prosecutions.  Niceties as to whether their initiatives actually fall within the strict contours of Ms. Hesse’s analysis of the competitive process might readily be ignored, given the inherent elasticity (to say the least) of the “fairness” concept.  What’s more, Ms. Hesse’s remarks seriously undermine arguments advanced by the United States and leading commentators in multilateral fora (such as the ICN and the OECD) that competition law enforcement should focus solely on consumer welfare, with other policies handled under different statutory schemes.

In sum, Ms. Hesse’s speech summons up not the comforting ghost of Christmas past, but rather the malevolent goblin of antitrust past (whether she meant to do so or not).  Although her remarks concededly contain many well-reasoned and uncontroversial comments about antitrust analysis, her totally unnecessary application of a gaudy, un-economic populist gloss to the antitrust enterprise is what stares the reader in the face.  One can hope that, as an experienced and accomplished antitrust practitioner and public servant, Ms. Hesse will come to realize this and respond by unequivocally disavowing and stripping away the rhetorical gloss in a future major address.  Whether she chooses to do so or not, however, antitrust agency leadership in the next Administration should loudly and repeatedly make it clear that populist notions and “fairness” have no role in modern competition law analysis, whose lodestar should be consumer welfare and efficiency.

The FCC’s blind, headlong drive to “unlock” the set-top box market is disconnected from both legal and market realities. Legally speaking, and as we’ve noted on this blog many times over the past few months (see here, here and here), the set-top box proposal is nothing short of an assault on contracts, property rights, and the basic freedom of consumers to shape their own video experience.

Although much of the impulse driving the Chairman to tilt at set-top box windmills involves a distrust that MVPDs could ever do anything procompetitive, Comcast’s recent decision (actually, long in the making) to include an app from Netflix — its alleged arch-rival — on the X1 platform highlights the FCC’s poor grasp of market realities as well. And it hardly seems that Comcast was dragged kicking and screaming to this point: many of the features have long been under development and include important customer-centered enhancements:

We built this experience on the core foundational elements of the X1 platform, taking advantage of key technical advances like universal search, natural language processing, IP stream processing and a cloud-based infrastructure.  We have expanded X1’s voice control to make watching Netflix content as simple as saying, “Continue watching Daredevil.”

Yet, on the topic of consumer video choice, Chairman Wheeler lives in two separate worlds. On the one hand, he recognizes that:

There’s never been a better time to watch television in America. We have more options than ever, and, with so much competition for eyeballs, studios and artists keep raising the bar for quality content.

But, on the other hand, he asserts that when it comes to set-top boxes, there is no such choice, and consumers have suffered accordingly.

Of course, this ignores the obvious fact that nearly all pay-TV content is already available from a large number of outlets, and that competition between devices and services that deliver this content is plentiful.

In fact, ten years ago — before Apple TV, Roku, Xfinity X1 and Hulu (among too many others to list) — Gigi Sohn, Chairman Wheeler’s chief legal counsel, argued before the House Energy and Commerce Committee that:

We are living in a digital golden age and consumers… are the beneficiaries.  Consumers have numerous choices for buying digital content and for buying devices on which to play that content. (emphasis added)

And, even on the FCC’s own terms, the multichannel video market is presumptively competitive nationwide with

direct broadcast satellite (DBS) providers’ market share of multi-channel video programming distributors (MVPDs) subscribers [rising] to 33.8%. “Telco” MVPDs increased their market share to 13% and their nationwide footprint grew by 5%. Broadband service providers such as Google Fiber also expanded their footprints. Meanwhile, cable operators’ market share fell to 52.8% of MVPD subscribers.

Online video distributor (OVD) services continue to grow in popularity with consumers. Netflix now has 47 million or more subscribers in the U.S., Amazon Prime has close to 60 million, and Hulu has close to 12 million. By contrast, cable MVPD subscriptions dropped to 53.7 million households in 2014.

The extent of competition has expanded dramatically over the years, and Comcast’s inclusion of Netflix in its ecosystem is only the latest indication of this market evolution.

And to further underscore the outdated notion of focusing on “boxes,” AT&T just announced that it would be offering a fully apps-based version of its DirecTV service. And what was one of the main drivers of AT&T’s ability to go in this direction? The company realized the good economic sense of ditching boxes altogether:

The company will be able to give consumers a break [on price] because of the low cost of delivering the service. AT&T won’t have to send trucks to install cables or set-top boxes; customers just need to download an app. 

And lest you think that Comcast’s move was merely a cynical response meant to undermine the Chairman (although it is quite enjoyable on that score), the truth is that Comcast has no choice but to offer services like this on its platform — and it’s been making moves like this for quite some time (see here and here). Everyone knows, MVPDs included, that apps distributed on a range of video platforms are the future. If Comcast didn’t get on board the apps train, it would have been left behind at the station.

And there is other precedent for expecting just this convergence of video offerings on a platform. For instance, Amazon’s Fire TV gives consumers the Amazon video suite — available through the Prime Video subscription — but it also gives them access to apps like Netflix and Hulu. (Of course, Amazon is a so-called edge provider, so when it makes the exact same sort of moves that Comcast is now making, it’s easy for those who insist on old market definitions to miss the parallels.)

The point is, where Amazon and Comcast are going to make their money is in driving overall usage of their platforms because, inevitably, no single service is going to have every piece of content a given user wants. Long-term viability in the video market is necessarily going to be about offering consumers more choice, not less. And, in this world, the box that happens to be delivering the content is basically irrelevant; it’s the competition between platform providers that matters.

The Global Antitrust Institute (GAI) at George Mason University’s Antonin Scalia Law School released today a set of comments on the joint U.S. Department of Justice (DOJ) – Federal Trade Commission (FTC) August 12 Proposed Update to their 1995 Antitrust Guidelines for the Licensing of Intellectual Property (Proposed Update).  As has been the case with previous GAI filings (see here, for example), today’s GAI Comments are thoughtful and on the mark.

For those of you who are pressed for time, the latest GAI comments make these major recommendations:

Standard Essential Patents (SEPs):  The GAI Comments commended the DOJ and the FTC for preserving the principle that the antitrust framework is sufficient to address potential competition issues involving all IPRs—including both SEPs and non-SEPs.  In doing so, the DOJ and the FTC correctly rejected the invitation to adopt a special brand of antitrust analysis for SEPs in which effects-based analysis was replaced with unique presumptions and burdens of proof. 

• The GAI Comments noted that, as FTC Chairwoman Edith Ramirez has explained, “the same key enforcement principles [found in the 1995 IP Guidelines] also guide our analysis when standard essential patents are involved.”

• This is true because SEP holders, like other IP holders, do not necessarily possess market power in the antitrust sense, and conduct by SEP holders, including breach of a voluntary assurance to license their SEPs on fair, reasonable, and nondiscriminatory (FRAND) terms, does not necessarily result in harm to the competitive process or to consumers.

• Again, as Chairwoman Ramirez has stated, “it is important to recognize that a contractual dispute over royalty terms, whether the rate or the base used, does not in itself raise antitrust concerns.”

Refusals to License:  The GAI Comments expressed concern that the statements regarding refusals to license in Sections 2.1 and 3 of the Proposed Update seem to depart from the general enforcement approach set forth in the 2007 DOJ-FTC IP Report in which those two agencies stated that “[a]ntitrust liability for mere unilateral, unconditional refusals to license patents will not play a meaningful part in the interface between patent rights and antitrust protections.”  The GAI recommended that the DOJ and the FTC incorporate this approach into the final version of their updated IP Guidelines.

“Unreasonable Conduct”:  The GAI Comments recommended that Section 2.2 of the Proposed Update be revised to replace the phrase “unreasonable conduct” with a clear statement that the agencies will only condemn licensing restraints when anticompetitive effects outweigh procompetitive benefits.

R&D Markets:  The GAI Comments urged the DOJ and the FTC to reconsider the inclusion (or, at the very least, to substantially limit the use) of research and development (R&D) markets because: (1) the process of innovation is often highly speculative and decentralized, making it impossible to identify all relevant market participants; (2) the optimal relationship between R&D and innovation is unknown; (3) the market structure most conducive to innovation is unknown; (4) the capacity to innovate is hard to monopolize, given that the components of modern R&D—research scientists, engineers, software developers, laboratories, computer centers, etc.—are continuously available on the market; and (5) anticompetitive conduct can be challenged under the actual potential competition theory or at a later time.

While the GAI Comments are entirely on point, even if their recommendations are all adopted, much more needs to be done.  The Proposed Update, while relatively sound, should be viewed in the larger context of the Obama Administration’s unfortunate use of antitrust policy to weaken patent rights (see my article here, for example).  In addition to strengthening the revised Guidelines, as suggested by the GAI, the DOJ and the FTC should work with other component agencies of the next Administration – including the Patent Office and the White House – to signal enhanced respect for IP rights in general.  In short, a general turnaround in IP policy is called for, in order to spur American innovation, which has been all too lacking in recent years.