There must have been a great gnashing of teeth in Chairman Wheeler’s office this morning as the FCC announced that it was pulling the Chairman’s latest modifications to the set-top box proposal from its voting agenda. This is surely but a bump in the road for the Chairman; he will undoubtedly press ever onward in his quest to “fix” a market that is flooded with competition and consumer choice. But, as we stop to take a breath for a moment while this latest FCC adventure is temporarily paused, there is a larger issue worth considering: the lack of transparency at the FCC.

Although the Commission has an unfortunate tradition of non-disclosure surrounding many of its regulatory proposals, the problem has seemingly been exacerbated by Chairman Wheeler’s aggressive agenda and his intransigence in the face of overwhelming and rigorous criticism.

Perhaps nowhere was this attitude more apparent than with his handling of the Open Internet Order, which was plagued with enough process problems to elicit a call for a delay of the Commission’s vote on the initial rules from Democratic Commissioner Rosenworcel, and a strong rebuke from the Chairman of the House Oversight Committee prior to the Commission’s vote on the final rules (which were not disclosed to the public until after the vote).

But the same cavalier dismissal of public and stakeholder input has plagued the Chairman’s beleaguered set-top box proposal, as well.

As Commissioner Pai noted before Congress in March:

The FCC continues to choose opacity over transparency. The decisions we make impact hundreds of millions of Americans and thousands of small businesses. And yet to the public, to Congress, and even to the Commissioners at the FCC, the agency’s work remains a black box.

Take this simple proposition: The public should be able to see what we’re voting on before we vote on it. That’s how Congress works, as you know. Anyone can look up any pending bill right now by going to congress.gov. And that’s how many state commissions work too. But not the FCC.

Exhibit A in Commissioner Pai’s lament was the set-top box proceeding:

Instead, the public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Now, although the Chairman’s initial proposal was eventually released, we have only a fact sheet and an op-ed by Chairman Wheeler on which to judge the purportedly substantial changes embodied in his latest version.

Even Democrats in Congress have recognized the process problems that have plagued this proceeding. As Senator Feinstein (D-CA) urged in a recent letter to Chairman Wheeler:

Given the significance of this proceeding, I ask that you make public the new proposal under consideration by the Commission, so that all interested stakeholders, members of Congress, copyright experts, and others can comment on the potential copyright implications of the new proposal before the Commission votes on it.

And as Senator Heller (R-NV) wrote in a letter to Chairman Wheeler this week:

I believe it is unacceptable that the FCC has not released the text of this proposal before Thursday’s vote. A three-page fact sheet does not provide enough details for Congress to conduct proper oversight of this rulemaking that will significantly impact both consumers and industry…. I encourage you to release the text immediately so that the American public has a full understanding of what is being considered by the Commission….

Of course, this isn’t a new problem at the FCC. In fact, before he supported Chairman Wheeler’s efforts to impose Open Internet rules without sufficient public disclosure, then-Senator Obama decried then-Chairman Martin’s efforts to enact new media ownership rules with insufficient process in 2007:

Repealing the cross ownership rules and retaining the rest of our existing regulations is not a proposal that has been put out for public comment; the proper process for vetting it is not in closed door meetings with lobbyists or in selective leaks to the New York Times.

Although such a proposal may pass the muster of a federal court, Congress and the public have the right to review any specific proposal and decide whether or not it constitutes sound policy. And the Commission has the responsibility to defend any new proposal in public discourse and debate.

And although you won’t find them complaining now (because this time they want the excessive intervention that the NPRM seems to contemplate), regulatory advocates lamented exactly this sort of secrecy at the Commission when Chairman Genachowski proposed his media ownership rules in 2012. At the time, Free Press angrily wrote:

[T]he Commission still has not made public its actual media ownership order…. Furthermore, it’s disingenuous for the FCC to suggest that its process now is more transparent than the one former Chairman Martin used to adopt similar rules. Genachowski’s FCC has yet to publish any details of its final proposal, offering only vague snippets in press releases… despite the president’s instruction to rulemaking agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.

As Free Press noted, President Obama did indeed instruct “agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.” In his Memorandum on Transparency and Open Government, his first executive action, the president urged that:

Public engagement enhances the Government’s effectiveness and improves the quality of its decisions. Knowledge is widely dispersed in society, and public officials benefit from having access to that dispersed knowledge. Executive departments and agencies should offer Americans increased opportunities to participate in policymaking and to provide their Government with the benefits of their collective expertise and information.

The resulting Open Government Directive calls on executive agencies to

take prompt steps to expand access to information by making it available online in open formats. With respect to information, the presumption shall be in favor of openness….

The FCC is not an “executive agency,” and so is not directly subject to the Directive. But the Chairman’s willingness to stray so far from basic principles of transparency is woefully inconsistent with good government and with the ideals of heightened transparency claimed by this administration.

This week, the International Center for Law & Economics filed comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-Antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive, we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license,

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before going on to observe that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

Public comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines have, not surprisingly, focused primarily on fine points of antitrust analysis carried out by those two federal agencies (see, for example, the thoughtful recommendations by the Global Antitrust Institute, here).  In a September 23 submission to the FTC and the DOJ, however, U.S. International Trade Commissioner F. Scott Kieff focused on a broader theme – that patent-antitrust assessments should keep in mind the indirect effects on commercialization that stem from IP (and, in particular, patents).  Kieff argues that antitrust enforcers have employed a public law “rules-based” approach that balances the “incentive to innovate” created when patents prevent copying against the goals of competition.  In contrast, Kieff characterizes the commercialization approach as rooted in the property rights nature of patents and the use of private contracting to bring together complementary assets and facilitate coordination.  As Kieff explains (footnote citations deleted):

A commercialization approach to IP views IP more in the tradition of private law, rather than public law. It does so by placing greater emphasis on viewing IP as property rights, which in turn is accomplished by greater reliance on interactions among private parties over or around those property rights, including via contracts. Centered on the relationships among private parties, this approach to IP emphasizes a different target and a different mechanism by which IP can operate. Rather than target particular individuals who are likely to respond to IP as incentives to create or invent in particular, this approach targets a broad, diverse set of market actors in general; and it does so indirectly. This broad set of indirectly targeted actors encompasses the creator or inventor of the underlying IP asset as well as all those complementary users of a creation or an invention who can help bring it to market, such as investors (including venture capitalists), entrepreneurs, managers, marketers, developers, laborers, and owners of other key assets, tangible and intangible, including other creations or inventions. Another key difference in this approach to IP lies in the mechanism by which these private actors interact over and around IP assets. This approach sees IP rights as tools for facilitating coordination among these diverse private actors, in furtherance of their own private interests in commercializing the creation or invention.

This commercialization approach sees property rights in IP serving a role akin to beacons in the dark, drawing to themselves all of those potential complementary users of the IP-protected-asset to interact with the IP owner and each other. This helps them each explore through the bargaining process the possibility of striking contracts with each other.

Several payoffs can flow from using this commercialization approach. Focusing on such a beacon-and-bargain effect can relieve the governmental side of the IP system of the need to amass the detailed information required to reasonably tailor a direct targeted incentive, such as each actor’s relative interests and contributions, needs, skills, or the like. Not only is amassing all of that information hard for the government to do, but large, established market actors may be better able than smaller market entrants to wield the political influence needed to get the government to act, increasing risk of concerns about political economy, public choice, and fairness. Instead, when governmental bodies closely adhere to a commercialization approach, each private party can bring its own expertise and other assets to the negotiating table while knowing—without necessarily having to reveal to other parties or the government—enough about its own level of interest and capability when it decides whether to strike a deal or not.            

Such successful coordination may help bring new business models, products, and services to market, thereby decreasing anticompetitive concentration of market power. It also can allow IP owners and their contracting parties to appropriate the returns to any of the rival inputs they invested towards developing and commercializing creations or inventions—labor, lab space, capital, and the like. At the same time, the government can avoid having to then go back to evaluate and trace the actual relative contributions that each participant brought to a creation’s or an invention’s successful commercialization—including, again, the cost of obtaining and using that information and the associated risks of political influence—by enforcing the terms of the contracts these parties strike with each other to allocate any value resulting from the creation’s or invention’s commercialization. In addition, significant economic theory and empirical evidence suggests this can all happen while the quality-adjusted prices paid by many end users actually decline and public access is high. In keeping with this commercialization approach, patents can be important antimonopoly devices, helping a smaller “David” come to market and compete against a larger “Goliath.”

A commercialization approach thereby mitigates many of the challenges raised by the tension that is a focus of the other intellectual approaches to IP, as well as by the responses these other approaches have offered to that tension, including some – but not all – types of AT regulation and enforcement. Many of the alternatives to IP that are often suggested by other approaches to IP, such as rewards, tax credits, or detailed rate regulation of royalties by AT enforcers can face significant challenges in facilitating the private sector coordination benefits envisioned by the commercialization approach to IP. While such approaches often are motivated by concerns about rising prices paid by consumers and direct benefits paid to creators and inventors, they may not account for the important cases in which IP rights are associated with declines in quality-adjusted prices paid by consumers and other forms of commercial benefits accrued to the entire IP production team as well as to consumers and third parties, which are emphasized in a commercialization approach. In addition, a commercialization approach can embrace many of the practical checks on the market power of an IP right that are often suggested by other approaches to IP, such as AT review, government takings, and compulsory licensing. At the same time this approach can show the importance of maintaining self-limiting principles within each such check to maintain commercialization benefits and mitigate concerns about dynamic efficiency, public choice, fairness, and the like.

To be sure, a focus on commercialization does not ignore creators or inventors or creations or inventions themselves. For example, a system successful in commercializing inventions can have the collateral benefit of providing positive incentives to those who do invent through the possibility of sharing in the many rewards associated with successful commercialization. Nor does a focus on commercialization guarantee that IP rights cause more help than harm. Significant theoretical and empirical questions remain open about benefits and costs of each approach to IP. And significant room to operate can remain for AT enforcers pursuing their important public mission, including at the IP-AT interface.

Commissioner Kieff’s evaluation is in harmony with other recent scholarly work, including Professor Dan Spulber’s explanation that the actual nature of long-term private contracting arrangements among patent licensors and licensees avoids alleged competitive “imperfections,” such as harmful “patent hold-ups,” “patent thickets,” and “royalty stacking” (see my discussion here).  More generally, Commissioner Kieff’s latest pronouncement is part of a broader and growing theoretical and empirical literature that demonstrates close associations between strong patent systems and economic growth and innovation (see, for example, here).

There is a major lesson here for U.S. (and foreign) antitrust enforcement agencies.  As I have previously pointed out (see, for example, here), in recent years, antitrust enforcers here and abroad have taken positions that tend to weaken patent rights.  Those positions typically are justified by the existence of “patent policy deficiencies” such as those that Professor Spulber’s paper debunks, as well as an alleged epidemic of low quality “probabilistic patents” (see, for example, here) – justifications that ignore the substantial economic benefits patents confer on society through contracting and commercialization.  It is high time for antitrust to accommodate the insights drawn from this new learning.  Specifically, government enforcers should change their approach and begin incorporating private law/contracting/commercialization considerations into patent-antitrust analysis, in order to advance the core goals of antitrust – the promotion of consumer welfare and efficiency.  Better yet, if the FTC and DOJ truly want to maximize the net welfare benefits of antitrust, they should undertake a more general “policy reboot” and adopt a “decision-theoretic” error cost approach to enforcement policy, rooted in cost-benefit analysis (see here) and consistent with the general thrust of Roberts Court antitrust jurisprudence (see here).

In a September 20 speech at the high-profile Georgetown Global Antitrust Enforcement Symposium, Acting Assistant Attorney General Renata Hesse sent the wrong signals to the business community and to foreign enforcers (see here) regarding U.S. antitrust policy.  Admittedly, a substantial part of her speech was a summary of existing U.S. antitrust doctrine.  In certain other key respects, however, Ms. Hesse’s remarks could be read as a rejection of the mainstream American understanding (and the accepted approach endorsed by the International Competition Network) that promoting economic efficiency and consumer welfare are the antitrust lodestar, and that non-economic considerations should not be part of antitrust analysis.  Because foreign lawyers, practitioners, and enforcement officials were present, Ms. Hesse’s statement could not only be cited against U.S. interests in foreign venues, but could also undermine longstanding efforts to advance international convergence toward economically sound antitrust rules.

Let’s examine some specifics.

Ms. Hesse’s speech begins with a paean to “economic fairness” – a theme that runs counter to the theme that leading federal antitrust enforcers have consistently stressed for decades, namely, that antitrust seeks to advance the economic goal of consumer welfare (and efficiency).  Consider this passage (emphasis added):

[E]nforcers [should be] focused on the ultimate goal of antitrust, economic fairness. . . .    The conservative leaning “Chicago School” made economic efficiency synonymous with the goals of antitrust in the 1970s, which incorporated theoretical economics into mainstream antitrust scholarship and practice.  Later, more centrist or left-leaning post-Chicago and Harvard School scholars showed that sophisticated empirical and theoretical economics tools can be used to support more aggressive enforcement agendas.  Together, these developments resulted in many technical discussions about what impact a business practice will have on consumer welfare mathematically measured – involving supply and demand curves, triangles representing “dead weight loss,” and so on.   But that sort of conversation is one that resonates very little – if at all – with those engaged in the straightforward, popular dialogue about the dangers of increasing corporate concentration.  The language of economic theory does not sound like the language of economic fairness that is the raw material for most popular discussions about competition and antitrust.      

Unfortunately, Ms. Hesse’s references to the importance of “fairness” recur throughout her remarks, driving home again and again that fairness is a principle that should play a key role in antitrust enforcement.  Yet fairness is an inherently subjective concept (fairness for whom, and measured by what standard?) that was often invoked in notorious and illogical U.S. Supreme Court decisions of days of yore – decisions that were rightly critiqued by leading scholars and largely confined to the dustbin of bad precedents, starting in the mid-1970s.

Equally bad are the speech’s multiple references to “high concentration” and “bigness,” unfortunate terms that also cropped up in economically irrational pre-1970s Supreme Court antitrust opinions.  Scholarship demonstrating that neither high market concentration nor large corporate size is necessarily associated with poor economic performance is generally accepted, and the core teaching that “bigness” is not “badness” is a staple of undergraduate industrial organization classes and introductory antitrust law courses in the United States.  Admittedly, the speech also recognizes that bigness and high concentration are not necessarily harmful, but that recognition amounts to lip service; merely by invoking these concepts, the speech encourages interventionists and foreign enforcers who are seeking additional justifications for antitrust crusades against “big” and “powerful” companies (more on this point later).

Perhaps the most unfortunate passage in the speech is Ms. Hesse’s defense of the Supreme Court’s “Philadelphia National Bank” (1963) (“PNB”) presumption that “a merger which produces a firm controlling an undue percentage share of the relevant market, and results in a significant increase in the concentration of firms in that market is so inherently likely to lessen competition substantially” that the law will presume it unlawful.  The PNB presumption is a discredited historical relic, an antitrust “oldie but baddy” that sound scholarship has shown should be relegated to the antitrust scrap heap.  Professor Joshua Wright and Judge Douglas Ginsburg explained why the presumption should be scrapped in a 2015 Antitrust Law Journal article:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. The problem for today’s courts in applying this semicentenary standard is that the field of industrial organization economics has long since moved beyond the structural presumption upon which the standard is based. That presumption is almost the last vestige of pre-modern economics still embedded in the antitrust law of the United States. Even the 2010 Horizontal Merger Guidelines issued jointly by the Federal Trade Commission and the Antitrust Division of the Department of Justice have abandoned the . . . presumption, though the agencies certainly do not resist the temptation to rely upon the presumption when litigating a case. There is no doubt the . . . presumption of PNB is a convenient litigation tool for the enforcement agencies, but the mission of the enforcement agencies is consumer welfare, not cheap victories in litigation. The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.  

Ms. Hesse ignored this reasoned analysis in commenting on the PNB presumption:

[I]n the wake of the Chicago School’s influence, antitrust commentators started to call into question the validity of this common-sense presumption, believing that economic theory showed that mergers tended to be beneficial or, if they resulted in harm, that harm was fleeting.  Those skeptics demanded more detailed proof of consumer harm in place of the presumption.  More recent economics studies, however, have given new life to the old presumption—in several ways.  First, we are learning more and more that mergers among substantial competitors tend to lead to higher prices. [citation omitted]  Second, economists have been finding that mergers often fail to deliver on the gains their proponents sought to achieve. [citation omitted] Taking these insights together, we should be skeptical of the claim that mergers among substantial competitors are beneficial.  The law – which builds this skepticism into it – provides an excellent tool for protecting competition from large, horizontal mergers.

Ms. Hesse’s discussion of the PNB presumption is problematic on several counts.  First, it cites one 2014 study that purports to find price increases following certain mergers in some oligopolistic industries as supporting the presumption, without acknowledging a key critique of that study – that it ignores efficiencies and potential gains in producer welfare (see here).  Second, it cites one 2001 study suggesting that financial performance may not be enhanced by some mergers while ignoring other studies to the contrary (see, for example, here and here).  Third, and most fundamentally, Ms. Hesse’s statement that “we should be skeptical of the claim that mergers among substantial competitors are beneficial” misses the point of antitrust enforcement entirely, and, in so doing, could be read as discouraging efficiency-seeking acquisitions.  It is not the role of antitrust enforcement to make merging parties prove that their proposed transaction will be beneficial – rather, enforcers must prove that a proposed transaction’s effect “may be substantially to lessen competition”, as stated in section 7 of the Clayton Act.  Requiring “proof” that a merger between competitors “will be beneficial” after the fact, in response to a negative presumption, strongly discourages potential efficiency-seeking consolidations, to the detriment of economic growth and welfare.  That was the case in the 1960s, and it could become so again today, if U.S. antitrust enforcers embark on a concerted campaign of touting the PNB presumption.  Relatedly, an efficient market for corporate control (involving the strong potential of acquisitions to achieve synergies or to correct management problems in badly-run targets) is chilled when a presumption blocks acquisitions absent a “proof” of future benefit, to the detriment of the economy.  

Apart from these technical points, the PNB presumption in effect grants a government bureaucracy (exercising “the pretense of knowledge”) the right to condemn voluntary commercial transactions of a particular sort (horizontal mergers) that have not been shown to be harmful.  Such a grant of authority ignores the superior ability of information-seeking market participants to uncover and apply knowledge (as the late Friedrich Hayek might have pointed out) and is fundamentally at odds with the system of voluntary exchange that lies at the heart of a successful market economy.
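For readers who want to see what the structural screen under debate actually measures, a brief sketch may help (the market shares below are hypothetical, my own illustration rather than anything drawn from the filings discussed in this post). Modern merger review operationalizes “undue” concentration through the Herfindahl-Hirschman Index (HHI), the sum of squared market shares; under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points triggers a presumption that the merger is likely to enhance market power:

```python
# Illustrative only: hypothetical five-firm market in which the two
# largest firms (30% and 20% shares) propose to merge.

def hhi(shares):
    """HHI = sum of squared market shares, with shares in percent (0-100)."""
    return sum(s * s for s in shares)

pre_merger = [30, 20, 20, 15, 15]   # hypothetical pre-merger shares
post_merger = [50, 20, 15, 15]      # the 30% and 20% firms combine

pre = hhi(pre_merger)     # 900 + 400 + 400 + 225 + 225 = 2150
post = hhi(post_merger)   # 2500 + 400 + 225 + 225 = 3350
delta = post - pre        # 1200

print(pre, post, delta)   # 2150 3350 1200
# Post-merger HHI > 2500 and an increase > 200 points: under the 2010
# Guidelines' structural screen, this hypothetical merger would be
# presumed likely to enhance market power.
```

As with PNB itself, the Guidelines’ presumption is rebuttable (by evidence on entry, efficiencies, and the like); the dispute in this post is over how much work such a structural shortcut should be allowed to do.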

Another highly problematic statement is Ms. Hesse’s discussion of the Federal Trade Commission’s (FTC) final 2010 Intel settlement:

The Federal Trade Commission’s case against Intel a decade later . . . shows how dominant firms can cut off the normal mechanisms of competition to maintain dominance.  In that case, the FTC alleged that Intel violated Section 5 of the FTC Act by maintaining its monopoly in central processing units (or CPUs) through a variety of payments and penalties (including loyalty or market-share discounts) to computer manufacturers to induce them not to purchase products from Intel’s rivals such as AMD and Via Technologies. [citation omitted]  When a monopolist pays customers to disfavor its rivals and punishes those customers who nevertheless do business with a rival, that does not look like the monopolist is competing with its rivals on the merits of their products.  Because these actions served only to foreclose competition from rival producers of CPUs, these actions distorted the competitive process.

Ms. Hesse ignores the fact that Intel involved a settlement, not a final litigated decision, and thus lacks precedential weight.  Firms that believe their conduct was perfectly legal may nevertheless settle an FTC investigation if they deem that the costs (including harm to reputation) of continuing to litigate outweigh the costs of the settlement’s terms.  Furthermore, various learned commentators (such as Professor and then-FTC Commissioner Joshua Wright, see here) have pointed out that Intel’s discounts had tangible procompetitive effects and that there was a lack of evidence that Intel’s conduct harmed consumers or competitors (indeed, AMD, Intel’s principal competitor, continued to thrive during the period of Intel’s alleged “bad” behavior).  In short, Ms. Hesse’s conclusion that Intel’s actions “served only to foreclose competition from rival producers of CPUs” lacks credibility.  Moreover, Ms. Hesse’s reference to illegal “monopoly maintenance,” a Sherman Antitrust Act monopolization term of art, fails to note that the FTC stressed that the Intel case was brought purely under Section 5 of the FTC Act, “which is broader than the antitrust laws.”

Finally, the speech’s concluding section ends on a discordant note.  In summing up what she deemed an appropriate, up-to-date approach to antitrust litigation, Ms. Hesse reemphasizes the “fairness” theme, making such statements as “ultimately the plaintiff’s story should highlight the moral underpinnings of the antitrust laws—fighting against the unfairness of concentrated economic power” and “attempts to obtain or keep economic power unfairly.”  While such statements might be rationalized as having been made in the context of promoting a “non-technical” appreciation for antitrust by the general public, the emphasis on fairness as a rhetorical device in lieu of palpable economic harm and consumer welfare is quite troublesome.

On the domestic front, that emphasis may not have a direct impact on the exercise of prosecutorial discretion and on American judicial precedents in the short run (at least one hopes so).  In the longer run, however, it cuts against efforts to constrain populist impulses that would transform antitrust once again into an unguided missile aimed at the heart of the American market system.

On the international front, things are even worse.  A variety of major jurisdictions make explicit reference to “fairness” in their competition law statutes and decisions.  Foreign officials with a strongly interventionist bent might well cite Ms. Hesse’s speech in justifying expansive and economically untethered “fairness-based” competition law prosecutions.  Niceties as to whether their initiatives actually fall within the strict contours of Ms. Hesse’s analysis of the competitive process might be readily ignored, given the inherent elasticity (to say the least) of the “fairness” concept.  What’s more, Ms. Hesse’s remarks seriously undermine arguments advanced by the United States and leading commentators in multilateral fora (such as the ICN and the OECD) that competition law enforcement should focus solely on consumer welfare, with other policies handled under different statutory schemes.

In sum, Ms. Hesse’s speech summons up not the comforting ghost of Christmas past, but rather the malevolent goblin of antitrust past (whether she meant to do so or not).  Although her remarks concededly contain many well-reasoned and uncontroversial comments about antitrust analysis, her totally unnecessary application of a gaudy, un-economic populist gloss to the antitrust enterprise is what stares the reader in the face.  One can hope that, as an experienced and accomplished antitrust practitioner and public servant, Ms. Hesse will come to realize this and respond by unequivocally disavowing and stripping away the rhetorical gloss in a future major address.  Whether she chooses to do so or not, however, antitrust agency leadership in the next Administration should loudly and repeatedly make it clear that populist notions and “fairness” have no role in modern competition law analysis, whose lodestar should be consumer welfare and efficiency.

The FCC’s blind, headlong drive to “unlock” the set-top box market is disconnected from both legal and market realities. Legally speaking, and as we’ve noted on this blog many times over the past few months (see here, here and here), the set-top box proposal is nothing short of an assault on contracts, property rights, and the basic freedom of consumers to shape their own video experience.

Although much of the impulse driving the Chairman to tilt at set-top box windmills involves a distrust that MVPDs could ever do anything procompetitive, Comcast’s recent decision (actually, long in the making) to include an app from Netflix — its alleged arch-rival — on the X1 platform highlights the FCC’s poor grasp of market realities as well. And it hardly seems that Comcast was dragged kicking and screaming to this point: many of the features it includes have long been under development and offer important customer-centered enhancements:

We built this experience on the core foundational elements of the X1 platform, taking advantage of key technical advances like universal search, natural language processing, IP stream processing and a cloud-based infrastructure.  We have expanded X1’s voice control to make watching Netflix content as simple as saying, “Continue watching Daredevil.”

Yet, on the topic of consumer video choice, Chairman Wheeler lives in two separate worlds. On the one hand, he recognizes that:

There’s never been a better time to watch television in America. We have more options than ever, and, with so much competition for eyeballs, studios and artists keep raising the bar for quality content.

But, on the other hand, he asserts that when it comes to set-top boxes, there is no such choice, and consumers have suffered accordingly.

Of course, this ignores the obvious fact that nearly all pay-TV content is already available from a large number of outlets, and that competition between devices and services that deliver this content is plentiful.

In fact, ten years ago — before Apple TV, Roku, Xfinity X1 and Hulu (among too many others to list) — Gigi Sohn, Chairman Wheeler’s chief legal counsel, argued before the House Energy and Commerce Committee that:

We are living in a digital gold age and consumers… are the beneficiaries.  Consumers have numerous choices for buying digital content and for buying devices on which to play that content. (emphasis added)

And, even on the FCC’s own terms, the multichannel video market is presumptively competitive nationwide with

direct broadcast satellite (DBS) providers’ market share of multi-channel video programming distributors (MVPDs) subscribers [rising] to 33.8%. “Telco” MVPDs increased their market share to 13% and their nationwide footprint grew by 5%. Broadband service providers such as Google Fiber also expanded their footprints. Meanwhile, cable operators’ market share fell to 52.8% of MVPD subscribers.

Online video distributor (OVD) services continue to grow in popularity with consumers. Netflix now has 47 million or more subscribers in the U.S., Amazon Prime has close to 60 million, and Hulu has close to 12 million. By contrast, cable MVPD subscriptions dropped to 53.7 million households in 2014.

The extent of competition has expanded dramatically over the years, and Comcast’s inclusion of Netflix in its ecosystem is only the latest indication of this market evolution.

And to further underscore the outdated notion of focusing on “boxes,” AT&T just announced that it would be offering a fully apps-based version of its DIRECTV service. And what was one of the main drivers of AT&T’s shift? The company realized the good economic sense of ditching boxes altogether:

The company will be able to give consumers a break [on price] because of the low cost of delivering the service. AT&T won’t have to send trucks to install cables or set-top boxes; customers just need to download an app. 

And lest you think that Comcast’s move was merely a cynical response meant to undermine the Chairman (although it is quite enjoyable on that score), the truth is that Comcast has no choice but to offer services like this on its platform — and it’s been making moves like this for quite some time (see here and here). Everyone knows, MVPDs included, that apps distributed on a range of video platforms are the future. If Comcast didn’t get on board the apps train, it would have been left behind at the station.

And there is other precedent for expecting just this convergence of video offerings on a platform. For instance, Amazon’s Fire TV gives consumers the Amazon video suite — available through the Prime Video subscription — but it also gives access to apps like Netflix and Hulu. (Of course, Amazon is a so-called edge provider, so when it makes the exact same sort of moves that Comcast is now making, it’s easy for those who insist on old market definitions to miss the parallels.)

The point is, where Amazon and Comcast are going to make their money is in driving overall usage of their platforms because, inevitably, no single service is going to have every piece of content a given user wants. Long-term viability in the video market is necessarily going to be about offering consumers more choice, not less. And, in this world, the box that happens to be delivering the content is basically irrelevant; it’s the competition between platform providers that matters.

The Global Antitrust Institute (GAI) at George Mason University’s Antonin Scalia Law School released today a set of comments on the joint U.S. Department of Justice (DOJ) – Federal Trade Commission (FTC) August 12 Proposed Update to their 1995 Antitrust Guidelines for the Licensing of Intellectual Property (Proposed Update).  As has been the case with previous GAI filings (see here, for example), today’s GAI Comments are thoughtful and on the mark.

For those of you who are pressed for time, the latest GAI comments make these major recommendations:

Standard Essential Patents (SEPs):  The GAI Comments commended the DOJ and the FTC for preserving the principle that the antitrust framework is sufficient to address potential competition issues involving all IPRs—including both SEPs and non-SEPs.  In doing so, the DOJ and the FTC correctly rejected the invitation to adopt a special brand of antitrust analysis for SEPs in which effects-based analysis was replaced with unique presumptions and burdens of proof. 

  • The GAI Comments noted that, as FTC Chairwoman Edith Ramirez has explained, “the same key enforcement principles [found in the 1995 IP Guidelines] also guide our analysis when standard essential patents are involved.”

  • This is true because SEP holders, like other IP holders, do not necessarily possess market power in the antitrust sense, and conduct by an SEP holder, including breach of a voluntary assurance to license its SEP on fair, reasonable, and nondiscriminatory (FRAND) terms, does not necessarily result in harm to the competitive process or to consumers.

  • Again, as Chairwoman Ramirez has stated, “it is important to recognize that a contractual dispute over royalty terms, whether the rate or the base used, does not in itself raise antitrust concerns.”

Refusals to License:  The GAI Comments expressed concern that the statements regarding refusals to license in Sections 2.1 and 3 of the Proposed Update seem to depart from the general enforcement approach set forth in the 2007 DOJ-FTC IP Report in which those two agencies stated that “[a]ntitrust liability for mere unilateral, unconditional refusals to license patents will not play a meaningful part in the interface between patent rights and antitrust protections.”  The GAI recommended that the DOJ and the FTC incorporate this approach into the final version of their updated IP Guidelines.

“Unreasonable Conduct”:  The GAI Comments recommended that Section 2.2 of the Proposed Update be revised to replace the phrase “unreasonable conduct” with a clear statement that the agencies will only condemn licensing restraints when anticompetitive effects outweigh procompetitive benefits.

R&D Markets:  The GAI Comments urged the DOJ and the FTC to reconsider the inclusion of research and development (R&D) markets (or, at the very least, to substantially limit their use) because: (1) the process of innovation is often highly speculative and decentralized, making it impossible to identify all market participants; (2) the optimal relationship between R&D and innovation is unknown; (3) the market structure most conducive to innovation is unknown; (4) the capacity to innovate is hard to monopolize given that the components of modern R&D—research scientists, engineers, software developers, laboratories, computer centers, etc.—are continuously available on the market; and (5) anticompetitive conduct can be challenged under the actual potential competition theory or at a later time.

While the GAI Comments are entirely on point, even if their recommendations are all adopted, much more needs to be done.  The Proposed Update, while relatively sound, should be viewed in the larger context of the Obama Administration’s unfortunate use of antitrust policy to weaken patent rights (see my article here, for example).  In addition to strengthening the revised Guidelines, as suggested by the GAI, the DOJ and the FTC should work with other components of the next Administration – including the Patent Office and the White House – to signal enhanced respect for IP rights in general.  In short, a general turnaround in IP policy is called for, in order to spur American innovation, which has been all too lacking in recent years.

Section 5(a)(2) of the Federal Trade Commission (FTC) Act authorizes the FTC to “prevent persons, partnerships, or corporations, except . . . common carriers subject to the Acts to regulate commerce . . . from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.”  On August 29, in FTC v. AT&T, the Ninth Circuit issued a decision that exempts non-common carrier data services from FTC jurisdiction merely because they are offered by a company that has common carrier status.  The case involved an FTC allegation that AT&T had “throttled” data (slowed down Internet service) for “unlimited mobile data” customers without adequate consent or disclosures, in violation of Section 5 of the FTC Act.  The FTC had claimed that although AT&T’s mobile wireless voice services were a common carrier service, the company’s mobile wireless data services were not, and, thus, were subject to FTC oversight.  Reversing a federal district court’s refusal to grant AT&T’s motion to dismiss, the Ninth Circuit concluded that “when Congress used the term ‘common carrier’ in the FTC Act, [there is no indication] it could only have meant ‘common carrier to the extent engaged in common carrier activity.’”  The Ninth Circuit therefore determined that “a literal reading of the words Congress selected simply does not comport with [the FTC’s] activity-based approach.”  The FTC’s pending case against AT&T in the Northern District of California (which is within the Ninth Circuit) regarding alleged unfair and deceptive advertising of satellite services by AT&T subsidiary DIRECTV (see here) could be affected by this decision.

The Ninth Circuit’s AT&T holding threatens to further extend the FCC’s jurisdictional reach at the expense of the FTC.  It comes on the heels of the divided D.C. Circuit’s benighted and ill-reasoned decision (see here) upholding the FCC’s “Open Internet Order,” including its decision to reclassify Internet broadband service as a common carrier service.  That decision subjects broadband service to heavy-handed and costly FCC “consumer protection” regulation, including in the area of privacy.  The FCC’s overly intrusive approach stands in marked contrast to the economic efficiency considerations (albeit not always perfectly applied) that underlie the FTC’s consumer protection mode of analysis.  As I explained in a May 2015 Heritage Foundation Legal Memorandum, the FTC’s highly structured, analytic, fact-based methodology, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

I argued in this space in March 2016 that, should the D.C. Circuit uphold the FCC’s Open Internet Order, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.  The D.C. Circuit’s decision upholding that Order, combined with the Ninth Circuit’s latest ruling, makes the case for potential action by the next Congress even more urgent.

While it is at it, the next Congress should also weigh whether to repeal the FTC’s common carrier exemption, as well as all special exemptions for specified categories of institutions, such as banks, savings and loans, and federal credit unions (see here).  In so doing, Congress might also do away with the Consumer Financial Protection Bureau, an unaccountable bureaucracy whose consumer protection regulatory responsibilities should cease (see my February 2016 Heritage Legal Memorandum here).

Finally, as Heritage Foundation scholars have urged, Congress should look into enacting additional regulatory reform legislation, such as requiring congressional approval of new major regulations issued by agencies (including financial services regulators) and subjecting “independent” agencies (including the FCC) to executive branch regulatory review.

That’s enough for now.  Stay tuned.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

As Chairman Wheeler moves forward with his revised set-top box proposal, and on the eve of tomorrow’s Senate FCC oversight hearing, we would do well to reflect on some insightful testimony regarding another of the Commission’s rulemakings from ten years ago:

We are living in a digital gold age and consumers… are the beneficiaries. Consumers have numerous choices for buying digital content and for buying devices on which to play that content. They have never had so much flexibility and so much opportunity.  

* * *

As the content industry has ramped up on-line delivery of content, it has been testing a variety of protection measures that provide both security for the industry and flexibility for consumers.

So to answer the question, can content protection and technological innovation coexist?  It is a resounding yes. Look at the robust market for on-line content distribution facilitated by the technologies and networks consumers love.

* * *

[T]he Federal Communications Commission should not become the Federal Computer Commission or the Federal Copyright Commission, and the marketplace, not the Government, is the best arbiter of what technologies succeed or fail.

That’s not the self-interested testimony of a studio or cable executive — that was Gigi Sohn, current counsel to Chairman Wheeler, speaking on behalf of Public Knowledge in 2006 before the House Energy and Commerce Committee against the FCC’s “broadcast flag” rules. Those rules, supported by a broad spectrum of rightsholders, required consumer electronics devices to respect programming conditions preventing the unauthorized transmission over the internet of digital broadcast television content.

Ms. Sohn and Public Knowledge won that fight in court, convincing the DC Circuit that Congress hadn’t given the FCC authority to impose the rules in the first place, and she successfully urged Congress not to give the FCC the authority to reinstate them.

Yet today, she and the Chairman seem to have forgotten her crucial insights from ten years ago. If the marketplace for video content was sufficiently innovative and competitive then, how can it possibly not be so now, with audiences having orders of magnitude more choices, both online and off? And if the FCC lacked authority to adopt copyright-related rules then, how does the FCC suddenly have that authority now, in the absence of any intervening congressional action?

With Section 106 of the Copyright Act, Congress granted copyright holders the exclusive rights to engage in or license the reproduction, distribution, and public performance of their works. The courts are the “backstop,” not the FCC (as Chairman Wheeler would have it), and Section 629 of the Communications Act doesn’t say otherwise. All Section 629 does is direct the FCC to promote a competitive market for devices to access pay-TV services from pay-TV providers. As we noted last week, it very simply doesn’t allow the FCC to interfere with the license arrangements that fill those devices, and, short of explicit congressional direction, the Commission is simply not empowered to interfere with the framework set forth in the Copyright Act.

Chairman Wheeler’s latest proposal has improved on his initial plan by, for example, moving toward an applications-based approach and away from the mandatory disaggregation of content. But it would still arrogate to the FCC the authority to stand up a licensing body for the distribution of content over pay-TV applications; set rules on the terms such licenses must, may, and may not include; and even allow the FCC itself to create terms or the entire license. Such rules would necessarily implicate the extent to which rightsholders are able to control the distribution of their content.

The specifics of the regulations may be different from 2006, but the point is the same: What the FCC could not do in 2006, it cannot do today.

Mylan Pharmaceuticals recently reinvigorated the public outcry over pharmaceutical price increases when news surfaced that the company had raised the price of EpiPens by more than 500% over the past decade and, purportedly, had plans to increase the price even more. The Mylan controversy comes on the heels of several notorious pricing scandals last year. Recall Valeant Pharmaceuticals, which acquired cardiac drugs Isuprel and Nitropress and then quickly raised their prices by 525% and 212%, respectively. And of course, who can forget Martin Shkreli of Turing Pharmaceuticals, who increased the price of toxoplasmosis treatment Daraprim by 5,000% and then claimed he should have raised the price even higher.

However, one company, pharmaceutical giant Allergan, seems to be taking a different approach to pricing. Last week, Allergan CEO Brent Saunders condemned the scandalous price increases that have cast suspicion on drug companies and placed the entire industry in the political hot seat. In an entry on the company’s blog, Saunders issued Allergan’s “social contract with patients,” which made several drug pricing commitments to its customers.

Some of the most important commitments Allergan made to its customers include:

  • A promise to not increase prices more than once a year, and to limit price increases to single-digit percentages.
  • A pledge to improve patient access to Allergan medications by enhancing patient assistance programs in 2017.
  • A vow to cooperate with policy makers and payers (including government drug plans, private insurers, and pharmacy benefit managers) to facilitate better access to Allergan products by offering pricing discounts and paying rebates to lower drug costs.
  • An assurance that Allergan will no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry, without cost increases that justify the increase.
  • A commitment to provide annual updates on how pricing affects Allergan’s business.
  • A pledge to price Allergan products in a way that is commensurate with, or lower than, the value they create.

Saunders also makes several non-pricing pledges to maintain a continuous supply of its drugs, diligently monitor the safety of its products, and appropriately educate physicians about its medicines. He also makes the point that the recent pricing scandals have shifted attention away from the vibrant medical innovation ecosystem that develops new life-saving and life-enhancing drugs. Saunders contends that the focus on pricing by regulators and the public has incited suspicions about this innovation ecosystem: “This ecosystem can quickly fall apart if it is not continually nourished with the confidence that there will be a longer term opportunity for appropriate return on investment in the long R&D journey.”

Policy-makers and the public would be wise to focus on the importance of brand drug innovation. Brand drug companies are largely responsible for pharmaceutical innovation. Since 2000, brand companies have spent over half a trillion dollars on R&D, and they currently account for over 90 percent of the spending on the clinical trials necessary to bring new drugs to market. As a result of this spending, over 550 new drugs have been approved by the FDA since 2000, and another 7,000 are currently in development globally. And this innovation is directly tied to health advances. Empirical estimates of the benefits of pharmaceutical innovation indicate that each new drug brought to market saves 11,200 life-years each year. Moreover, new drugs save money by reducing doctor visits, hospitalizations, and other medical procedures; ultimately, for every $1 spent on new drugs, total medical spending decreases by more than $7.

But, as Saunders suggests, this innovation depends on drugmakers earning a sufficient return on their investment in R&D. The costs to bring a new drug to market with FDA approval are now estimated at over $2 billion, and only 1 in 10 drugs that begin clinical trials is ever approved by the FDA. Brand drug companies must price a drug not only to recoup the drug’s own costs but also to cover the costs of all the product failures along the way. However, they have a very limited window to recoup these costs before generic competition destroys brand profits: within three months of the first generic entry, generics have already captured over 70 percent of the brand drug’s market. Drug companies must be able to price drugs at a level where they can earn profits sufficient to offset their R&D costs and the risk of failures. Failure to cover these costs will slow investment in R&D; drug companies will not spend millions and billions of dollars developing drugs if they cannot recoup the costs of that development.

Yet several recent proposals threaten to control prices in a way that could prevent drug companies from earning a sufficient return on their investment in R&D. Ultimately, we must remember that a social contract involves commitment from all members of a group; it should involve commitments from drug companies to price responsibly, and commitments from the public and policy makers to protect innovation. Hopefully, more drug companies will follow Allergan’s lead and renounce the exorbitant price increases we’ve seen in recent times. But in return, we should all remember that innovation and, in turn, health improvements, depend on drug companies’ profitability.