
Public comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC)–U.S. Department of Justice (DOJ) Antitrust Guidelines for the Licensing of Intellectual Property have, not surprisingly, focused primarily on fine points of the antitrust analysis carried out by those two federal agencies (see, for example, the thoughtful recommendations by the Global Antitrust Institute, here).  In a September 23 submission to the FTC and the DOJ, however, U.S. International Trade Commissioner F. Scott Kieff focused on a broader theme – that patent-antitrust assessments should keep in mind the indirect effects on commercialization that stem from IP (and, in particular, patents).  Kieff argues that antitrust enforcers have employed a public law “rules-based” approach that balances the “incentive to innovate” created when patents prevent copying against the goals of competition.  In contrast, Kieff characterizes the commercialization approach as rooted in the property rights nature of patents and in the use of private contracting to bring together complementary assets and facilitate coordination.  As Kieff explains (in italics, footnote citations deleted):

A commercialization approach to IP views IP more in the tradition of private law, rather than public law. It does so by placing greater emphasis on viewing IP as property rights, which in turn is accomplished by greater reliance on interactions among private parties over or around those property rights, including via contracts. Centered on the relationships among private parties, this approach to IP emphasizes a different target and a different mechanism by which IP can operate. Rather than target particular individuals who are likely to respond to IP as incentives to create or invent in particular, this approach targets a broad, diverse set of market actors in general; and it does so indirectly. This broad set of indirectly targeted actors encompasses the creator or inventor of the underlying IP asset as well as all those complementary users of a creation or an invention who can help bring it to market, such as investors (including venture capitalists), entrepreneurs, managers, marketers, developers, laborers, and owners of other key assets, tangible and intangible, including other creations or inventions. Another key difference in this approach to IP lies in the mechanism by which these private actors interact over and around IP assets. This approach sees IP rights as tools for facilitating coordination among these diverse private actors, in furtherance of their own private interests in commercializing the creation or invention.

This commercialization approach sees property rights in IP serving a role akin to beacons in the dark, drawing to themselves all of those potential complementary users of the IP-protected-asset to interact with the IP owner and each other. This helps them each explore through the bargaining process the possibility of striking contracts with each other.

Several payoffs can flow from using this commercialization approach. Focusing on such a beacon-and-bargain effect can relieve the governmental side of the IP system of the need to amass the detailed information required to reasonably tailor a direct targeted incentive, such as each actor’s relative interests and contributions, needs, skills, or the like. Not only is amassing all of that information hard for the government to do, but large, established market actors may be better able than smaller market entrants to wield the political influence needed to get the government to act, increasing risk of concerns about political economy, public choice, and fairness. Instead, when governmental bodies closely adhere to a commercialization approach, each private party can bring its own expertise and other assets to the negotiating table while knowing—without necessarily having to reveal to other parties or the government—enough about its own level of interest and capability when it decides whether to strike a deal or not.            

Such successful coordination may help bring new business models, products, and services to market, thereby decreasing anticompetitive concentration of market power. It also can allow IP owners and their contracting parties to appropriate the returns to any of the rival inputs they invested towards developing and commercializing creations or inventions—labor, lab space, capital, and the like. At the same time, the government can avoid having to then go back to evaluate and trace the actual relative contributions that each participant brought to a creation’s or an invention’s successful commercialization—including, again, the cost of obtaining and using that information and the associated risks of political influence—by enforcing the terms of the contracts these parties strike with each other to allocate any value resulting from the creation’s or invention’s commercialization. In addition, significant economic theory and empirical evidence suggests this can all happen while the quality-adjusted prices paid by many end users actually decline and public access is high. In keeping with this commercialization approach, patents can be important antimonopoly devices, helping a smaller “David” come to market and compete against a larger “Goliath.”

A commercialization approach thereby mitigates many of the challenges raised by the tension that is a focus of the other intellectual approaches to IP, as well as by the responses these other approaches have offered to that tension, including some – but not all – types of AT regulation and enforcement. Many of the alternatives to IP that are often suggested by other approaches to IP, such as rewards, tax credits, or detailed rate regulation of royalties by AT enforcers can face significant challenges in facilitating the private sector coordination benefits envisioned by the commercialization approach to IP. While such approaches often are motivated by concerns about rising prices paid by consumers and direct benefits paid to creators and inventors, they may not account for the important cases in which IP rights are associated with declines in quality-adjusted prices paid by consumers and other forms of commercial benefits accrued to the entire IP production team as well as to consumers and third parties, which are emphasized in a commercialization approach. In addition, a commercialization approach can embrace many of the practical checks on the market power of an IP right that are often suggested by other approaches to IP, such as AT review, government takings, and compulsory licensing. At the same time this approach can show the importance of maintaining self-limiting principles within each such check to maintain commercialization benefits and mitigate concerns about dynamic efficiency, public choice, fairness, and the like.

To be sure, a focus on commercialization does not ignore creators or inventors or creations or inventions themselves. For example, a system successful in commercializing inventions can have the collateral benefit of providing positive incentives to those who do invent through the possibility of sharing in the many rewards associated with successful commercialization. Nor does a focus on commercialization guarantee that IP rights cause more help than harm. Significant theoretical and empirical questions remain open about benefits and costs of each approach to IP. And significant room to operate can remain for AT enforcers pursuing their important public mission, including at the IP-AT interface.

Commissioner Kieff’s evaluation is in harmony with other recent scholarly work, including Professor Dan Spulber’s explanation of how actual long-term private contracting arrangements between patent licensors and licensees avoid alleged competitive “imperfections,” such as harmful “patent hold-ups,” “patent thickets,” and “royalty stacking” (see my discussion here).  More generally, Commissioner Kieff’s latest pronouncement is part of a broader and growing theoretical and empirical literature that demonstrates close associations between strong patent systems and economic growth and innovation (see, for example, here).

There is a major lesson here for U.S. (and foreign) antitrust enforcement agencies.  As I have previously pointed out (see, for example, here), in recent years, antitrust enforcers here and abroad have taken positions that tend to weaken patent rights.  Those positions typically are justified by the existence of “patent policy deficiencies” such as those that Professor Spulber’s paper debunks, as well as an alleged epidemic of low quality “probabilistic patents” (see, for example, here) – justifications that ignore the substantial economic benefits patents confer on society through contracting and commercialization.  It is high time for antitrust to accommodate the insights drawn from this new learning.  Specifically, government enforcers should change their approach and begin incorporating private law/contracting/commercialization considerations into patent-antitrust analysis, in order to advance the core goals of antitrust – the promotion of consumer welfare and efficiency.  Better yet, if the FTC and DOJ truly want to maximize the net welfare benefits of antitrust, they should undertake a more general “policy reboot” and adopt a “decision-theoretic” error cost approach to enforcement policy, rooted in cost-benefit analysis (see here) and consistent with the general thrust of Roberts Court antitrust jurisprudence (see here).
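To make the decision-theoretic point concrete, the error cost framework is often summarized along the following lines (a stylized sketch of the approach found in the law and economics literature, not a formula the agencies have adopted):

\[
\min_{R}\ \mathbb{E}[C(R)] \;=\; p_{I}(R)\,C_{I} \;+\; p_{II}(R)\,C_{II} \;+\; C_{A}(R)
\]

Here \(R\) is a candidate enforcement rule, \(p_{I}\) and \(C_{I}\) are the probability and social cost of false positives (condemning procompetitive conduct, such as welfare-enhancing patent licensing), \(p_{II}\) and \(C_{II}\) are the probability and social cost of false negatives (missing genuinely anticompetitive conduct), and \(C_{A}\) captures the administrative costs of applying the rule.  On this view, an enforcement policy is justified only if it lowers the expected sum of error and administrative costs relative to the alternatives.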

The Global Antitrust Institute (GAI) at George Mason University’s Antonin Scalia Law School released today a set of comments on the joint U.S. Department of Justice (DOJ) – Federal Trade Commission (FTC) August 12 Proposed Update to their 1995 Antitrust Guidelines for the Licensing of Intellectual Property (Proposed Update).  As has been the case with previous GAI filings (see here, for example), today’s GAI Comments are thoughtful and on the mark.

For those of you who are pressed for time, the latest GAI comments make these major recommendations (summary in italics):

Standard Essential Patents (SEPs):  The GAI Comments commended the DOJ and the FTC for preserving the principle that the antitrust framework is sufficient to address potential competition issues involving all IPRs—including both SEPs and non-SEPs.  In doing so, the DOJ and the FTC correctly rejected the invitation to adopt a special brand of antitrust analysis for SEPs in which effects-based analysis was replaced with unique presumptions and burdens of proof. 

•  The GAI Comments noted that, as FTC Chairwoman Edith Ramirez has explained, “the same key enforcement principles [found in the 1995 IP Guidelines] also guide our analysis when standard essential patents are involved.”

•  This is true because SEP holders, like other IP holders, do not necessarily possess market power in the antitrust sense, and conduct by an SEP holder, including breach of a voluntary assurance to license its SEP on fair, reasonable, and nondiscriminatory (FRAND) terms, does not necessarily result in harm to the competitive process or to consumers.

•  Again, as Chairwoman Ramirez has stated, “it is important to recognize that a contractual dispute over royalty terms, whether the rate or the base used, does not in itself raise antitrust concerns.”

Refusals to License:  The GAI Comments expressed concern that the statements regarding refusals to license in Sections 2.1 and 3 of the Proposed Update seem to depart from the general enforcement approach set forth in the 2007 DOJ-FTC IP Report in which those two agencies stated that “[a]ntitrust liability for mere unilateral, unconditional refusals to license patents will not play a meaningful part in the interface between patent rights and antitrust protections.”  The GAI recommended that the DOJ and the FTC incorporate this approach into the final version of their updated IP Guidelines.

“Unreasonable Conduct”:  The GAI Comments recommended that Section 2.2 of the Proposed Update be revised to replace the phrase “unreasonable conduct” with a clear statement that the agencies will only condemn licensing restraints when anticompetitive effects outweigh procompetitive benefits.

R&D Markets:  The GAI Comments urged the DOJ and the FTC to reconsider the inclusion of research and development (R&D) markets (or, at the very least, to substantially limit their use) because: (1) the process of innovation is often highly speculative and decentralized, making it impossible to identify all prospective market participants; (2) the optimal relationship between R&D and innovation is unknown; (3) the market structure most conducive to innovation is unknown; (4) the capacity to innovate is hard to monopolize given that the components of modern R&D—research scientists, engineers, software developers, laboratories, computer centers, etc.—are continuously available on the market; and (5) anticompetitive conduct can be challenged under the actual potential competition theory or at a later time.

While the GAI Comments are entirely on point, even if their recommendations are all adopted, much more needs to be done.  The Proposed Update, while relatively sound, should be viewed in the larger context of the Obama Administration’s unfortunate use of antitrust policy to weaken patent rights (see my article here, for example).  In addition to strengthening the revised Guidelines, as suggested by the GAI, the DOJ and the FTC should work with other component agencies of the next Administration – including the Patent Office and the White House – to signal enhanced respect for IP rights in general.  In short, a general turnaround in IP policy is called for, in order to spur American innovation, which has been all too lacking in recent years.

Section 5(a)(2) of the Federal Trade Commission (FTC) Act authorizes the FTC to “prevent persons, partnerships, or corporations, except . . . common carriers subject to the Acts to regulate commerce . . . from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.”  On August 29, in FTC v. AT&T, the Ninth Circuit issued a decision that exempts non-common carrier data services from FTC jurisdiction, merely because they are offered by a company that has common carrier status.  This case involved an FTC allegation that AT&T had “throttled” data (slowed down Internet service) for “unlimited mobile data” customers without adequate consent or disclosures, in violation of Section 5 of the FTC Act.  The FTC had claimed that although AT&T’s mobile wireless voice services were a common carrier service, the company’s mobile wireless data services were not, and, thus, were subject to FTC oversight.  Reversing a federal district court’s refusal to grant AT&T’s motion to dismiss, the Ninth Circuit concluded that “when Congress used the term ‘common carrier’ in the FTC Act, [there is no indication] it could only have meant ‘common carrier to the extent engaged in common carrier activity.’”  The Ninth Circuit therefore determined that “a literal reading of the words Congress selected simply does not comport with [the FTC’s] activity-based approach.”  The FTC’s pending case against AT&T in the Northern District of California (which is within the Ninth Circuit) regarding alleged unfair and deceptive advertising of satellite services by AT&T subsidiary DIRECTV (see here) could be affected by this decision.

The Ninth Circuit’s AT&T holding threatens to further extend the FCC’s jurisdictional reach at the expense of the FTC.  It comes on the heels of the divided D.C. Circuit’s benighted and ill-reasoned decision (see here) upholding the FCC’s “Open Internet Order,” including its decision to reclassify Internet broadband service as a common carrier service.  That decision subjects broadband service to heavy-handed and costly FCC “consumer protection” regulation, including in the area of privacy.  The FCC’s overly intrusive approach stands in marked contrast to the economic efficiency considerations (albeit not always perfectly applied) that underlie the FTC’s consumer protection mode of analysis.  As I explained in a May 2015 Heritage Foundation Legal Memorandum, the FTC’s highly structured, analytic, fact-based methodology, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

I argued in this space in March 2016 that, should the D.C. Circuit uphold the FCC’s Open Internet Order, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.  The D.C. Circuit’s decision upholding that Order, combined with the Ninth Circuit’s latest ruling, makes the case for potential action by the next Congress even more urgent.

While it is at it, the next Congress should also weigh whether to repeal the FTC’s common carrier exemption, as well as all special exemptions for specified categories of institutions, such as banks, savings and loans, and federal credit unions (see here).  In so doing, Congress might also do away with the Consumer Financial Protection Bureau, an unaccountable bureaucracy whose consumer protection regulatory responsibilities should cease (see my February 2016 Heritage Legal Memorandum here).

Finally, as Heritage Foundation scholars have urged, Congress should look into enacting additional regulatory reform legislation, such as requiring congressional approval of new major regulations issued by agencies (including financial services regulators) and subjecting “independent” agencies (including the FCC) to executive branch regulatory review.

That’s enough for now.  Stay tuned.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it has actually implemented the test in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the court rejected the Commission’s “fencing-in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b), the FTC gathered information from a handful of firms, and the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint on the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”
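To put those sample sizes in perspective, consider a rough back-of-envelope calculation (my own illustration, not drawn from the study or from Scheuren): even under ideal random-sampling assumptions that plainly do not hold here, the 95 percent margin of error for a proportion estimated from a sample of twenty-five is

\[
ME_{95} \;=\; 1.96\sqrt{\frac{p(1-p)}{n}} \;\le\; 1.96\sqrt{\frac{0.5 \times 0.5}{25}} \;\approx\; \pm 0.20,
\]

or roughly plus-or-minus twenty percentage points. With the universe of PAEs unknown and the sample not randomly drawn, the true uncertainty is larger still.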

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


Yesterday a federal district court in Washington state granted the FTC’s motion for summary judgment against Amazon in FTC v. Amazon — the case alleging unfair trade practices in Amazon’s design of the in-app purchases interface for apps available in its mobile app store. The headlines score the decision as a loss for Amazon, and the FTC, of course, claims victory. But the court also granted Amazon’s motion for partial summary judgment on a significant aspect of the case, and the Commission’s win may be decidedly Pyrrhic.

While the district court (very wrongly, in my view) essentially followed the FTC in deciding that a well-designed user experience doesn’t count as a consumer benefit for assessing substantial harm under the FTC Act, it rejected the Commission’s request for a permanent injunction against Amazon. It also called into question the FTC’s calculation of monetary damages. These last two may be huge. 

The FTC may have “won” the case, but it’s becoming increasingly apparent why it doesn’t want to take these cases to trial. First in Wyndham, and now in Amazon, courts have begun to chip away at the FTC’s expansive Section 5 discretion, even while handing the agency nominal victories.

The Good News

The FTC largely escapes judicial oversight in cases like these because its targets almost always settle (Amazon is a rare exception). These settlements — consent orders — typically impose detailed 20-year injunctions and give the FTC ongoing oversight of the companies’ conduct for the same period. The agency has wielded the threat of these consent orders as a powerful tool to micromanage tech companies, and it currently has consent orders in place with Twitter, Google, Apple, Facebook, and several others.

As I wrote in a WSJ op-ed on these troubling consent orders:

The FTC prefers consent orders because they extend the commission’s authority with little judicial oversight, but they are too blunt an instrument for regulating a technology company. For the next 20 years, if the FTC decides that Google’s product design or billing practices don’t provide “express, informed consent,” the FTC could declare Google in violation of the new consent decree. The FTC could then impose huge penalties—tens or even hundreds of millions of dollars—without establishing that any consumer had actually been harmed.

Yesterday’s decision makes that outcome less likely. Companies will be much less willing to succumb to the FTC’s 20-year oversight demands if they know that courts may refuse the FTC’s injunction request and accept companies’ own, independent and market-driven efforts to address consumer concerns — without any special regulatory micromanagement.

In the same vein, while the court did find that Amazon was liable for repayment of unauthorized charges made without “express, informed authorization,” it also found the FTC’s monetary damages calculation questionable and asked for further briefing on the appropriate amount. If, as seems likely, it ultimately refuses to simply accept the FTC’s damages claims, that, too, will take some of the wind out of the FTC’s sails. Other companies have settled with the FTC and agreed to 20-year consent decrees in part, presumably, because of the threat of excessive damages if they litigate. That, too, is now less likely to happen.

Collectively, these holdings should help to force the FTC to better target its complaints to cases of still-ongoing and truly-harmful practices — the things the FTC Act was really meant to address, like actual fraud. Tech companies trying to navigate ever-changing competitive waters by carefully constructing their user interfaces and payment mechanisms (among other things) shouldn’t be treated the same way as fraudulent phishing scams.

The Bad News

The court’s other key holding is problematic, however. In essence, the court, like the FTC, seems to believe that regulators are better than companies’ product managers, designers and engineers at designing app-store user interfaces:

[A] clear and conspicuous disclaimer regarding in-app purchases and request for authorization on the front-end of a customer’s process could actually prove to… be more seamless than the somewhat unpredictable password prompt formulas rolled out by Amazon.

Never mind that Amazon has undoubtedly spent tremendous resources researching and designing the user experience in its app store. And never mind that — as Amazon is certainly aware — a consumer’s experience of a product is make-or-break in the cut-throat world of online commerce, advertising and search (just ask Jet).

Instead, for the court (and the FTC), the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible.

As I’ve written previously:

Amazon has built its entire business around the “1-click” concept — which consumers love — and implemented a host of notification and security processes hewing as much as possible to that design choice, but nevertheless taking account of the sorts of issues raised by in-app purchases. Moreover — and perhaps most significantly — it has implemented an innovative and comprehensive parental control regime (including the ability to turn off all in-app purchases) — Kindle Free Time — that arguably goes well beyond anything the FTC required in its Apple consent order.

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges.

Amazon began offering Kindle Free Time in 2012 as an innovative solution to a problem — children’s access to apps and in-app purchases — that affects only a small subset of Amazon’s customers. To dismiss that effort without considering that Amazon might have made a perfectly reasonable judgment that balanced consumer protection and product design disregards the cost-benefit balancing required by Section 5 of the FTC Act.

Moreover, the FTC Act imposes liability only for harms that are not “reasonably avoidable.” Kindle Free Time is an outstanding example of an innovative mechanism that allows consumers at risk of unauthorized purchases by children to “reasonably avoid” harm. The court’s and the FTC’s disregard for it is inconsistent with the statute.

Conclusion

The court’s willingness to reinforce the FTC’s blackboard design “expertise” (such as it is) to second guess user-interface and other design decisions made by firms competing in real markets is unfortunate. But there’s a significant silver lining. By reining in the FTC’s discretion to go after these companies as if they were common fraudsters, the court has given consumers an important victory. After all, it is consumers who otherwise bear the costs (both directly and as a result of reduced risk-taking and innovation) of the FTC’s largely unchecked ability to extract excessive concessions from its enforcement targets.

The FCC doesn’t have authority over the edge and doesn’t want authority over the edge. Well, that is until it finds itself with no choice but to regulate the edge as a result of its own policies. As the FCC begins to explore its new authority to regulate privacy under the Open Internet Order (“OIO”), for instance, it will run up against policy conflicts and inconsistencies that will make it increasingly hard to justify forbearance from regulating edge providers.

Take for example the recently announced NPRM titled “Expanding Consumers’ Video Navigation Choices” — a proposal that seeks to force cable companies to provide video programming to third-party set-top box manufacturers. Under the proposed rules, MVPDs (multichannel video programming distributors) would be required to expose three data streams to competitors: (1) listing information about what is available to particular customers; (2) the rights associated with accessing such content; and (3) the actual video content. As Geoff Manne has aptly noted, this seems to be much more of an effort to eliminate the “nightmare” of “too many remote controls” than it is to actually expand consumer choice in a market that is essentially drowning in consumer choice. But of course even so innocuous a goal—which is probably more about picking on cable companies because… “eww cable companies”—suggests some very important questions.

First, the market for video on cable systems is governed by a highly interdependent web of contracts that assures to a wide variety of parties that their bargained-for rights are respected. Among other things, channels negotiate for particular placements and channel numbers in a cable system’s lineup, IP rights holders bargain for content to be made available only at certain times and at certain locations, and advertisers pay for their ads to be inserted into channel streams and broadcasts.

Moreover, to a large extent, the content industry develops its content based on a stable regime of bargained-for contractual terms with cable distribution networks (among others). Disrupting the ability of cable companies to control access to their video streams will undoubtedly alter the underlying assumptions upon which IP companies rely when planning and investing in content development. And, of course, the physical networks and their related equipment have been engineered around the current cable-access regimes. Some non-trivial amount of re-engineering will have to take place to make cable networks compatible with a more “open” set-top box market.

The FCC nods to these concerns in its NPRM, when it notes that its “goal is to preserve the contractual arrangements between programmers and MVPDs, while creating additional opportunities for programmers[.]” But this aspiration is not clearly given effect in the NPRM, and, as noted, some contractual arrangements are simply inconsistent with the NPRM’s approach.

Second, the FCC proposes to bind third-party manufacturers to the public interest privacy commitments in §§ 629, 551 and 338(i) of the Communications Act (“Act”) through a self-certification process. MVPDs would be required to pass the three data streams to third-party providers only once such a certification is received. To the extent that these sections, enforced via self-certification, do not sufficiently curtail third-parties’ undesirable behavior, the FCC appears to believe that “the strictest state regulatory regime[s]” and the “European Union privacy regulations” will serve as the necessary regulatory gap fillers.

This seems hard to believe, however, particularly given the recently announced privacy and cybersecurity NPRM, through which the FCC will adopt rules detailing the agency’s new authority (under the OIO) to regulate privacy at the ISP level. Largely, these rules will grow out of §§ 222 and 201 of the Act, which the FCC in Terracom interpreted together to be a general grant of privacy and cybersecurity authority.

I’m apprehensive of the asserted scope of the FCC’s power over privacy — let alone cybersecurity — under §§ 222 and 201. In truth, the FCC makes an admirable showing in Terracom of demonstrating its reasoning; it does a far better job than the FTC in similar enforcement actions. But there remains a problem. The FTC’s authority is fundamentally cabined by the limitations contained within the FTC Act (even if it frequently chooses to ignore them, they are there and are theoretically a protection against overreach).

But the FCC’s enforcement decisions are restrained (if at all) by a vague “public interest” mandate, and a claim that it will enforce these privacy principles on a case-by-case basis. Thus, the FCC’s proposed regime is inherently one based on vast agency discretion. As in many other contexts, enforcers with wide discretion and a tremendous power to penalize exert a chilling effect on innovation and openness, as well as a frightening power over a tremendous swath of the economy. For the FCC to claim anything like an unbounded UDAP authority for itself has got to be outside of the archaic grant of authority from § 201, and is certainly a long stretch for the language of § 706 (a provision of the Act which it used as one of the fundamental justifications for the OIO)— leading very possibly to a bout of Chevron problems under precedent such as King v. Burwell and UARG v. EPA.

And there is a real risk here of, if not hypocrisy, then… deep conflict in the way the FCC will strike out on the set-top box and privacy NPRMs. The Commission has already noted in its NPRM that it will not be able to bind third-party providers of set-top boxes under the same privacy requirements that apply to current MVPD providers. Self-certification will go a certain length, but even there agitation from privacy absolutists will possibly sway the FCC to consider more stringent requirements. For instance, §§ 551 and 338 of the Act — which the FCC focuses on in the set-top box NPRM — are really only about disclosing intended uses of consumer data. And disclosures can come in many forms, including burying them in long terms of service that customers frequently do not read. Such “weak” guarantees of consumer privacy will likely become a frequent source of complaint (and FCC filings) for privacy absolutists.  

Further, many of the new set-top box entrants are going to be current providers of OTT video or devices that redistribute OTT video. And many of these providers make a huge share of their revenue from data mining and selling access to customer data. Which means one of two things: either the FCC is going to just allow us to live in a world of double standards where these self-certifying entities are permitted significantly more leeway in their uses of consumer data than MVPDs, or the FCC is going to discover that it does in fact need to “do something.” If only there were a creative way to extend the new privacy authority under Title II to these providers of set-top boxes… . Oh! There is: bring edge providers into the regulatory fold under the OIO.

It’s interesting that Wheeler’s announcement of the FCC’s privacy NPRM explicitly noted that the rules would not be extended to edge providers. That Wheeler felt the need to be explicit in this suggests that he believes that the FCC has the authority to extend the privacy regulations to edge providers, but that it will merely forbear (for now) from doing so.

If edge providers are swept into the scope of Title II they would be subject to the brand new privacy rules the FCC is proposing. Thus, despite itself (or perhaps not), the FCC may find itself in possession of a much larger authority over some edge providers than any of the pro-Title II folks would have dared admit was possible. And the hook (this time) could be the privacy concerns embedded in the FCC’s ill-advised attempt to “open” the set-top box market.

This is a complicated set of issues, and it’s contingent on a number of moving parts. This week, Chairman Wheeler will be facing an appropriations hearing where I hope he will be asked to unpack his thinking regarding the true extent to which the OIO may in fact be extended to the edge.

Thanks to the Truth on the Market bloggers for having me. I’m a long-time fan of the blog, and excited to be contributing.

The Third Circuit will soon review the appeal of generic drug manufacturer Mylan Pharmaceuticals in the latest case involving “product hopping” in the pharmaceutical industry — Mylan Pharmaceuticals v. Warner Chilcott.

Product hopping occurs when brand pharmaceutical companies shift their marketing efforts from an older version of a drug to a new, substitute drug in order to stave off competition from cheaper generics. This business strategy is the predictable business response to the incentives created by the arduous FDA approval process, patent law, and state automatic substitution laws. It costs brand companies an average of $2.6 billion to bring a new drug to market, but only 20 percent of marketed brand drugs ever earn enough to recoup these costs. Moreover, once their patent exclusivity period is over, brand companies face the likely loss of 80-90 percent of their sales to generic versions of the drug under state substitution laws that allow or require pharmacists to automatically substitute a generic-equivalent drug when a patient presents a prescription for a brand drug. Because generics are automatically substituted for brand prescriptions, generic companies typically spend very little on advertising, instead choosing to free ride on the marketing efforts of brand companies. Rather than hand over a large chunk of their sales to generic competitors, brand companies often decide to shift their marketing efforts from an existing drug to a new drug with no generic substitutes.
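A rough back-of-envelope calculation (my own illustration using the figures above, under deliberately simplified assumptions) shows how steep the recoupment hurdle is:

\[
\text{required return per successful drug} \;\approx\; \frac{C}{q} \;=\; \frac{\$2.6\ \text{billion}}{0.2} \;=\; \$13\ \text{billion}.
\]

That is, if each drug costs about \(C = \$2.6\) billion to develop and only a fraction \(q = 0.2\) of marketed drugs ever recoup their costs, then — assuming for simplicity that the unsuccessful drugs contribute little — each successful drug must return on the order of \$13 billion in present value just for a portfolio of drugs to break even. Losing 80–90 percent of sales at patent expiry sharply truncates the window in which that return can be earned, which is precisely the pressure that makes shifting marketing to a new, patent-protected formulation attractive.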

Generic company Mylan is appealing U.S. District Judge Paul S. Diamond’s April decision to grant defendant and brand company Warner Chilcott’s summary judgment motion. Mylan and other generic manufacturers contend that the defendants engaged in a strategy to impede generic competition for branded Doryx (an acne medication) by executing several product redesigns and ceasing promotion of prior formulations. Although the plaintiffs generally changed their products to keep up with the brand-drug redesigns, they contend that those redesigns were intended to circumvent automatic substitution laws, at least for the periods of time before the generic companies could introduce substitutes for the new brand formulations. The plaintiffs argue that product redesigns that prevent generic manufacturers from benefitting from automatic substitution laws violate Section 2 of the Sherman Act.

Product redesign is not per se anticompetitive. Retiring an older branded version of a drug does not block generics from competing; they are still able to launch and market their own products. Product redesign only makes competition tougher because generics can no longer free ride on automatic substitution laws; instead they must either engage in their own marketing efforts or redesign their product to match the brand drug’s changes. Moreover, product redesign does not affect a primary source of generics’ customers—beneficiaries that are channeled to cheaper generic drugs by drug plans and pharmacy benefit managers.

The Supreme Court has repeatedly concluded that “the antitrust laws…were enacted for the protection of competition, not competitors” and that even monopolists have no duty to help a competitor. The district court in Mylan generally agreed with this reasoning, concluding that the brand company defendants did not exclude Mylan and the other generics from competition: “Throughout this period, doctors remained free to prescribe generic Doryx; pharmacists remained free to substitute generics when medically appropriate; and patients remained free to ask their doctors and pharmacists for generic versions of the drug.” Instead, the court argued that Mylan was a “victim of its own business strategy”, a strategy that relied on free riding off brand companies’ marketing efforts rather than spending any of its own money on marketing. The court reasoned that automatic substitution laws provide a regulatory “bonus”, and that denying Mylan the opportunity to take advantage of that bonus is not anticompetitive.

Product redesign should give rise to antitrust claims only if it is combined with some other wrongful conduct, or if the new product is clearly a “sham” innovation. Indeed, Senior Judge Douglas Ginsburg and then-FTC Commissioner Joshua D. Wright recently came out against imposing competition law sanctions on product redesigns that are not sham innovations. If lawmakers are concerned that product redesigns will reduce generic usage and the cost savings generics create, they could follow the lead of several states that have broadened automatic substitution laws to allow the substitution of generics that are therapeutically equivalent but not identical in other ways, such as dosage form or drug strength.

Mylan is now asking the Third Circuit to reexamine the case. If the Third Circuit reverses the lower court’s decision, it would imply that brand drug companies have a duty to continue selling superseded drugs in order to allow generic competitors to take advantage of automatic substitution laws. If the Third Circuit instead upholds the district court’s ruling on summary judgment, it will likely create a circuit split between the Second and Third Circuits. In July 2015, the Second Circuit upheld an injunction in NY v. Actavis that required a brand company to continue manufacturing and selling an obsolete drug until generic competitors had an opportunity to launch their generic versions and capture a significant portion of the market through automatic substitution laws. I’ve previously written about the duty created in that case.

Regardless of whether the Third Circuit’s decision causes a split, the Supreme Court should take up the issue of product redesign in pharmaceuticals to provide guidance to brand manufacturers that currently operate in a world of uncertainty and under the constant threat of litigation for decisions they make when introducing new products.

On October 7, 2015, the Senate Judiciary Committee held a hearing on the “Standard Merger and Acquisition Reviews Through Equal Rules” (SMARTER) Act of 2015.  As former Antitrust Modernization Commission Chair (and former Acting Assistant Attorney General for Antitrust) Deborah Garza explained in her testimony, “[t]he premise of the SMARTER Act is simple:  A merger should not be treated differently depending on which antitrust enforcement agency – DOJ or the FTC – happens to review it.  Regulatory outcomes should not be determined by a flip of the merger agency coin.”

Ms. Garza is clearly correct.  Both the U.S. Justice Department (DOJ) and the U.S. Federal Trade Commission (FTC) enforce the federal antitrust merger review provision, Section 7 of the Clayton Act, and employ a common set of substantive guidelines (last revised in 2010) to evaluate merger proposals.  Neutral “rule of law” principles indicate that private parties should expect to have their proposed mergers subject to the same methods of assessment and an identical standard of judicial review, regardless of which agency reviews a particular transaction.  (The two agencies decide by mutual agreement which agency will review any given merger proposal.)

Unfortunately, however, that is not the case today.  The FTC’s independent ability to challenge mergers administratively, combined with the difference in statutory injunctive standards that apply to FTC and DOJ merger reviews, means that a particular merger proposal may face more formidable hurdles if reviewed by the FTC rather than by DOJ.  The SMARTER Act commendably would eliminate these two differences by subjecting the FTC to current DOJ standards.  It would not address a third difference: the fact that DOJ merger consent decrees, but not FTC merger consent decrees, must be filed with a federal court for “public interest” review.  This commentary briefly addresses those three issues.  The first two present significant “rule of law” problems, in that they involve differences in statutory language applied to the same conduct.  The third, the question of judicial review of settlements, is of a different nature but nevertheless raises substantial policy concerns.

  1. FTC Administrative Authority

The first rule of law problem stems from the broader statutory authority the FTC possesses to challenge mergers.  In merger cases, while DOJ typically consolidates actions for a preliminary and permanent injunction in district court, the FTC merely seeks a preliminary injunction (which is easier to obtain than a permanent injunction) and “holds in its back pocket” the ability to challenge a merger in an FTC administrative proceeding – a power DOJ does not possess.  In short, the FTC subjects proposed mergers to a different and more onerous method of assessment than DOJ.  In Ms. Garza’s words (footnotes deleted):

“Despite the FTC’s legal ability to seek permanent relief from the district court, it prefers to seek a preliminary injunction only, to preserve the status quo while it proceeds with its administrative litigation.

This approach has great strategic significance. First, the standard for obtaining a preliminary injunction in government merger challenges is lower than the standard for obtaining a permanent injunction. That is, it is easier to get a preliminary injunction.

Second, as a practical matter, the grant of a preliminary injunction is typically sufficient to end the matter. In nearly every case, the parties will abandon their transaction rather than incur the heavy cost and uncertainty of trying to hold the merger together through further proceedings—which is why merging parties typically seek to consolidate proceedings for preliminary and permanent relief under Rule 65(a)(2). Time is of the essence. As one witness testified before the [Antitrust Modernization Commission], “it is a rare seller whose business can withstand the destabilizing effect of a year or more of uncertainty” after the issuance of a preliminary injunction.

Third, even if the court denies the FTC its preliminary injunction and the parties close their merger, the FTC can still continue to pursue an administrative challenge with an eye to undoing or restructuring the transaction. This is the “heads I win, tails you lose” aspect of the situation today. It is very difficult for the parties to get to the point of a full hearing in court given the effect of time on transactions, even with the FTC’s expedited administrative procedures adopted in about 2008. . . . 

“[Moreover, while u]nder its new procedures, parties can move to dismiss an administrative proceeding if the FTC has lost a motion for preliminary injunction and the FTC will consider whether to proceed on a case-by-case basis . . . th[is] policy could just as easily change again, unless Congress speaks.”

Time is typically of the essence in proposed mergers, so substantial delays occasioned by extended reviews may prevent many transactions from being consummated, even those that eventually would have passed antitrust muster.  Ms. Garza’s testimony, along with testimony by former Deputy Assistant Attorney General for Antitrust Abbott (Tad) Lipsky, documents cases of substantial delay in FTC administrative reviews of merger proposals.  (As Mr. Lipsky explained, “[a]ntitrust practitioners have long perceived that the possibility of continued administrative litigation by the FTC following a court decision constitutes a significant disincentive for parties to invest resources in transaction planning and execution.”)  Congress should weigh these delay-specific costs, as well as the direct costs of any additional burdens occasioned by FTC administrative procedures, in deciding whether to require the FTC (like DOJ) to rely solely on federal court proceedings.

  2. Differences Between FTC and DOJ Injunctive Standards

The second rule of law problem arises from the lighter burden the FTC must satisfy to obtain injunctive relief in federal court.  Under Section 13(b) of the FTC Act, an injunction shall be granted the FTC “[u]pon a proper showing that, weighing the equities and considering the Commission’s likelihood of success, such action would be in the public interest.”  The D.C. Circuit (in FTC v. H.J. Heinz Co. and in FTC v. Whole Foods Market, Inc.) has stated that, to meet this burden, the FTC need merely have raised questions “so serious, substantial, difficult and doubtful as to make them fair ground for further investigation.”  By contrast, as Ms. Garza’s testimony points out, “under Section 15 of the Clayton Act, courts generally apply a traditional equities test requiring DOJ to show a reasonable likelihood of success on the merits—not merely that there is ‘fair ground for further investigation.’”  In a similar vein, Mr. Lipsky’s testimony stated that “[t]he cumulative effect of several recent contested merger decisions has been to allow the FTC to argue that it needn’t show likelihood of success in order to win a preliminary injunction; specifically these decisions suggest that the Commission need only show ‘serious, substantial, difficult and doubtful’ questions regarding the merits.”

Although some commentators have contended that, in reality, the two standards generally will be interpreted in a similar fashion (“whatever theoretical difference might exist between the FTC and DOJ standards has no practical significance”), there is no doubt that the language of the two standards is different – and basic principles of statutory construction indicate that differences in statutory language should be given meaning and not ignored.  Accordingly, merging parties face the real prospect that they might fare worse under federal court review of an FTC challenge to their merger proposal than they would have fared had DOJ challenged the same transaction.  Such an outcome, even if it is rare, would be at odds with neutral application of the rule of law.

  3. The Tunney Act

Finally, helpful as it is, the SMARTER Act would not entirely eliminate the disparate treatment of proposed mergers by DOJ and the FTC.  The Tunney Act, 15 U.S.C. § 16, enacted in 1974, applies to DOJ but not to the FTC; it requires DOJ to submit all proposed consent judgments under the antitrust laws (including Section 7 of the Clayton Act) to a federal district court for 60 days of public comment before they may be entered.

a.  Economic Costs (and Potential Benefits) of the Tunney Act

The Tunney Act potentially injects uncertainty into the nature of the “deal” struck between merging parties and DOJ in merger cases.  It does so by subjecting proposed DOJ merger settlements (and other DOJ non-merger civil antitrust settlements) to a 60-day public review period, by requiring federal judges to determine whether a proposed settlement is “in the public interest” before entering it, and by instructing courts to consider the impact of the entry of judgment “upon competition and upon the public generally.”  Leading antitrust practitioners have noted that this uncertainty “could affect shareholders, customers, or even employees. Moreover, the merged company must devote some measure of resources to dealing with the Tunney Act review—resources that instead could be devoted to further integration of the two companies or generation of any planned efficiencies or synergies.”  More specifically:

“[W]hile Tunney Act proceedings are pending, a merged company may have to consider how its post-close actions and integration could be perceived by the court, and may feel the need to compete somewhat less aggressively, lest its more muscular competitive actions be taken by the court, amici, or the public at large to be the actions of a merged company exercising enhanced market power. Such a distortion in conduct probably was not contemplated by the Tunney Act’s drafters, but merger partners will need to be cognizant of how their post-close actions may be perceived during Tunney Act review. . . .”

Although the Tunney Act has been justified on traditional “public interest” grounds, even one of its scholarly supporters (a DOJ antitrust attorney), in praising its purported benefits, has acknowledged its potential for abuse:

“Properly interpreted and applied, the Tunney Act serves a number of related, useful functions. The disclosure provisions and judicial approval requirement for decrees can help identify, and more importantly deter, “influence peddling” and other abuses. The notice-and-comment procedures force the DOJ to explain its rationale for the settlement and provide its answers to objections, thus providing transparency. They also provide a mechanism for third-party input, and, thus, a way to identify and correct potentially unnoticed problems in a decree. Finally, the court’s public interest review not only helps ensure that the decree benefits the public, it also allows the court to protect itself against ambiguous provisions and enforcement problems and against an objectionable or pointless employment of judicial power. Improperly applied, the Tunney Act does more harm than good. When a district court takes it upon itself to investigate allegations not contained in a complaint, or attempts to “re-settle” a case to provide what it views as stronger, better relief, or permits lengthy, unfocused proceedings, the Act is turned from a useful check to an unpredictable, costly burden.”

The justifications presented by the author are open to serious question.  Whether “influence peddling” can be detected merely from the filing of proposed decree terms is doubtful – corrupt deals to settle a matter presumably would be struck “behind the scenes,” in a manner not open to public scrutiny.  The economic expertise and detailed factual knowledge that inform a DOJ merger settlement cannot be fully absorbed during a brief review period by a judge, who may fall prey to his or her personal predilections as to what constitutes good policy.  “Transparency” that facilitates “third-party input” can too easily be manipulated by rent-seeking competitors who will “trump up” justifications for blocking an efficient merger.  Moreover, third parties opposed to mergers in general may be expected to file objections to efficient arrangements.  In short, the “sunshine” justification for Tunney Act filings is more likely to cloud the evaluation of DOJ policy calls than to provide clarity.

b.  Constitutional Issues Raised by the Tunney Act

In addition to potential economic inefficiencies, the judicial review feature of the Tunney Act raises serious separation of powers issues, as emphasized by the DOJ Office of Legal Counsel (OLC, which advises the Attorney General and the President on questions of constitutional interpretation) in a 1989 opinion regarding qui tam provisions of the False Claims Act:

“There are very serious doubts as to the constitutionality . . . of the Tunney Act:  it intrudes into the Executive power and requires the courts to decide upon the public interest – that is, to exercise a policy discretion normally reserved to the political branches.  Three Justices of the Supreme Court questioned the constitutionality of the Tunney Act in Maryland v. United States, 460 U.S. 1001 (1983) (Rehnquist, J., joined by Burger, C.J., and White, J., dissenting).”

Notably, this DOJ critique of the Tunney Act was written before the 2004 amendments to that statute that specifically empower courts to consider the impact of proposed settlements “upon competition and upon the public generally” – language that significantly trenches upon Executive Branch prerogatives.  Admittedly, the Tunney Act has withstood judicial scrutiny – no court has ruled it unconstitutional.   Moreover, a federal judge can only accept or reject a Tunney Act settlement, not rewrite it, somewhat ameliorating its affront to the separation of powers.  In short, even though it may not be subject to serious constitutional challenge in the courts, the Tunney Act is problematic as a matter of sound constitutional policy.

c.  Congressional Reexamination of the Tunney Act

These economic and constitutional policy concerns suggest that Congress may wish to reexamine carefully the merits of the Tunney Act.  Any such reexamination, however, should be independent of, and should not delay, expedited consideration of the SMARTER Act.  The Tunney Act, although of undoubted significance, is only a tangential aspect of the divergent legal standards that apply to FTC and DOJ merger reviews.  It is beyond the scope of current legislative proposals, but it merits being taken up at an appropriate time, perhaps in the next Congress.  When Congress turns to the Tunney Act, it may wish to consider four options:  (1) repealing the Act in its entirety; (2) retaining the Act as is; (3) repealing it only with respect to merger reviews; or (4) applying it in full force to the FTC.  A detailed evaluation of those options is beyond the scope of this commentary.

Conclusion

In sum, to eliminate inconsistencies between FTC and DOJ standards for reviewing proposed mergers, Congress should give serious consideration to enacting the SMARTER Act, which would both eliminate FTC administrative review of merger proposals and subject the FTC to the same injunctive standard as DOJ in judicial review of those proposals.  If the SMARTER Act is enacted, Congress should also consider going further: amending the Tunney Act to apply to FTC as well as DOJ merger settlements – or, alternatively, to no merger settlements at all (a result that would better respect the constitutional separation of powers and remove a potential source of economic inefficiency).

Applying antitrust law to combat “hold-up” attempts (involving demands for “anticompetitively excessive” royalties) or injunctive actions brought by standard essential patent (SEP) owners is inherently problematic, as multiple scholars have explained (see here and here, for example).  Disputes regarding compensation to SEP holders are better handled in patent infringement and breach of contract lawsuits; adding antitrust to the mix imposes unnecessary costs, may discourage participation in standard setting, and may harm innovation.  What’s more, as FTC Commissioner Maureen Ohlhausen and former FTC Commissioner Joshua Wright have pointed out (citing research), the empirical evidence suggests there is no systematic hold-up problem.  To the contrary, a recent empirical study by professors from Stanford, Berkeley, and the University of the Andes, accepted for publication in the Journal of Competition Law and Economics, finds that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy – a result totally at odds with theories of SEP-related competitive harm.  A cost-benefit approach that seeks to maximize the welfare benefits of antitrust enforcement therefore strongly militates against continuing to pursue “SEP abuse” cases.  If enforcers are truly concerned about maximizing consumer welfare, they should instead focus on more traditional investigations that ferret out conduct far more likely to be welfare-inimical.

But are the leaders at the U.S. Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission (FTC) paying any attention?  The most recent public reports are not encouraging.

In a very recent filing with the U.S. International Trade Commission (ITC), FTC Chairwoman Edith Ramirez stated that “the danger that bargaining conducted in the shadow of an [ITC] exclusion order will lead to patent hold-up is real.”  (ITC exclusion orders, which are comparable to injunctions, preclude the importation of items that infringe U.S. patents.  They are the only effective remedy the ITC can grant for patent infringement, since the ITC cannot assess damages or royalties.)  She thus argued that, before issuing an exclusion order, the ITC should require an SEP holder to show that the infringer is unwilling or unable to enter into a patent license on “fair, reasonable, and non-discriminatory” (FRAND) terms – a new and major burden on the vindication of patent rights.  In justifying this burden, Chairwoman Ramirez pointed to Motorola’s allegedly excessive SEP royalty demands from Microsoft – $6 to $8 per gaming console – whereas a federal district court found that pennies per console was the appropriate amount.  She also cited LSI Semiconductor’s demand for royalties that exceeded the selling price of Realtek’s standard-compliant product, whereas a federal district court found the appropriate royalty to be only 0.19% of the product’s selling price (less than two-tenths of a cent per dollar of sales).  But these two examples do not support Chairwoman Ramirez’s point – quite the contrary.  The fact that high initial royalty requests are subsequently slashed by patent courts shows that the patent litigation system is working, not that antitrust enforcement is needed or that a special burden of proof must be placed on SEP holders.  Moreover, differences in bargaining positions are to be expected as part of the normal back-and-forth of negotiation.  Indeed, if anything, the extremely modest judicial royalty assessments in these cases raise the concern that SEP holders are being undercompensated, not overcompensated.

A recent speech by DOJ Assistant Attorney General for Antitrust (AAG) William J. Baer, delivered at the International Bar Association’s Competition Conference, suffers from the same sort of misunderstanding as Chairwoman Ramirez’s ITC filing.  Stating that “[h]old up concerns are real”, AAG Baer cited the two examples described by Chairwoman Ramirez.  He also noted that Innovatio requested a royalty rate of over $16 per smart tablet for its SEP portfolio but was awarded a rate of less than 10 cents per unit by the court.  While admitting that the implementers “proved victorious in court” in those cases, he asserted that “not every implementer has the wherewithal to litigate”, that “[s]ometimes implementers accede to licensors’ demands, fearing exclusion and costly litigation”, that “consumers can be harmed and innovation incentives are distorted”, and that therefore “[a] future of exciting new products built atop existing technology may be . . . deferred”.  These theoretical concerns are belied by the lack of empirical support for hold-up and are contradicted by the recent finding, noted above, that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy.  (In addition, the implementers of patented technology tend to be large corporations; AAG Baer’s assertion that some may lack “the wherewithal to litigate” is a bare proposition unsupported by empirical evidence or more nuanced analysis.)  In short, DOJ, like the FTC, is advancing an argument that undermines, rather than bolsters, the case for applying antitrust law to SEP holders’ efforts to defend their patent rights.

Ideally, the FTC and DOJ should reevaluate their recent preoccupation with allegedly abusive unilateral SEP behavior and refocus their attention on truly serious competitive problems.  (Chairwoman Ramirez and AAG Baer are both outstanding and highly experienced lawyers who are well-versed in policy analysis; one would hope that they would be open to reconsidering current FTC and DOJ policy toward SEPs in light of hard evidence.)  Doing so would benefit consumer welfare and innovation – which are, after all, the goals those important agencies are committed to promoting.