As the Federal Communications Commission (FCC) prepares to revoke its economically harmful “net neutrality” order and replace it with a free market-oriented “Restoring Internet Freedom Order,” the FCC and the Federal Trade Commission (FTC) have commendably announced a joint policy for cooperation on online consumer protection.  According to a December 11 FTC press release:

The Federal Trade Commission and Federal Communications Commission (FCC) announced their intent to enter into a Memorandum of Understanding (MOU) under which the two agencies would coordinate online consumer protection efforts following the adoption of the Restoring Internet Freedom Order.

“The Memorandum of Understanding will be a critical benefit for online consumers because it outlines the robust process by which the FCC and FTC will safeguard the public interest,” said FCC Chairman Ajit Pai. “Instead of saddling the Internet with heavy-handed regulations, we will work together to take targeted action against bad actors. This approach protected a free and open Internet for many years prior to the FCC’s 2015 Title II Order and it will once again following the adoption of the Restoring Internet Freedom Order.”

“The FTC is committed to ensuring that Internet service providers live up to the promises they make to consumers,” said Acting FTC Chairman Maureen K. Ohlhausen. “The MOU we are developing with the FCC, in addition to the decades of FTC law enforcement experience in this area, will help us carry out this important work.”

The draft MOU, which is being released today, outlines a number of ways in which the FCC and FTC will work together to protect consumers, including:

The FCC will review informal complaints concerning the compliance of Internet service providers (ISPs) with the disclosure obligations set forth in the new transparency rule. Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization, and congestion management. Should an ISP fail to make the required disclosures—either in whole or in part—the FCC will take enforcement action.

The FTC will investigate and take enforcement action as appropriate against ISPs concerning the accuracy of those disclosures, as well as other deceptive or unfair acts or practices involving their broadband services.

The FCC and the FTC will broadly share legal and technical expertise, including the secure sharing of informal complaints regarding the subject matter of the Restoring Internet Freedom Order. The two agencies also will collaborate on consumer and industry outreach and education.

The FCC’s proposed Restoring Internet Freedom Order, which the agency is expected to vote on at its December 14 meeting, would reverse a 2015 agency decision to reclassify broadband Internet access service as a Title II common carrier service. This previous decision stripped the FTC of its authority to protect consumers and promote competition with respect to Internet service providers because the FTC does not have jurisdiction over common carrier activities.

The FCC’s Restoring Internet Freedom Order would return jurisdiction to the FTC to police the conduct of ISPs, including with respect to their privacy practices. Once adopted, the order will also require broadband Internet access service providers to disclose their network management practices, performance, and commercial terms of service. As the nation’s top consumer protection agency, the FTC will be responsible for holding these providers to the promises they make to consumers.

Particularly noteworthy is the suggestion that the FCC and FTC will work to curb regulatory duplication and competitive empire building – a boon to Internet-related businesses that would be harmed by regulatory excess and uncertainty.  Stay tuned for future developments.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump-start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvass in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, without a license from Yelp and asserting fair use, Google displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt out of having even snippets displayed in local search results by committing to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of committing those resources may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of consumer protection acts (CPAs) is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it is affiliated with content providers.  Comcast, for example, could opt to speed up content from Hulu, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).
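
To make that tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (subscriber counts, margins) is hypothetical and chosen purely for illustration; the only point is the comparison between the two sides of the ledger.

```python
# Stylized sketch of the vertical-foreclosure tradeoff described above.
# All numbers are hypothetical, chosen only to illustrate the comparison.

subscribers_lost = 100_000            # consumers who switch ISPs in anger
margin_per_subscriber = 480.00        # annual distribution margin per subscriber ($)

content_customers_gained = 400_000    # viewers shifted toward affiliated content
margin_per_content_customer = 150.00  # annual content margin per customer ($)

distribution_loss = subscribers_lost * margin_per_subscriber
content_gain = content_customers_gained * margin_per_content_customer

print(f"Distribution-market loss: ${distribution_loss:,.0f}")
print(f"Content-market gain:      ${content_gain:,.0f}")

# Playing favorites pays off only when the content-market gain exceeds
# the distribution-market loss.
print("Discrimination profitable" if content_gain > distribution_loss
      else "Discrimination unprofitable")
```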

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct) are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate‘s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

Unexpectedly, on the day that the white copy of the upcoming repeal of the 2015 Open Internet Order was published, a mobile operator in Portugal with about 7.5 million subscribers began garnering a lot of attention. Curiously, it’s not because Portugal is a beautiful country (Iker Casillas’ Instagram feed is dope) nor because Portuguese is a beautiful romance language.

Rather it’s because old-fashioned misinformation is being peddled to perpetuate doomsday images that Portuguese ISPs have carved the Internet into pieces — and if the repeal of the 2015 Open Internet Order passes, the same butchery is coming to an AT&T store near you.

Much ado about data

This tempest in a teapot is about mobile data plans, specifically the ability of mobile subscribers to supplement their data plan (typically ranging from 200 MB to 3 GB per month) with additional 10 GB data packages containing specific bundles of apps – messaging apps, social apps, video apps, music apps, and email and cloud apps. Each additional 10 GB data package costs EUR 6.99 per month, and Meo (the mobile operator) also offers its own zero-rated apps. Similar plans have been offered in Portugal since at least 2012.

These data packages are a clear win for mobile subscribers, especially pre-paid subscribers who tend to be at a lower income level than post-paid subscribers. They allow consumers to customize their plan beyond their mobile broadband subscription, enabling them to consume data in ways that are better attuned to their preferences. Without access to these data packages, consuming an additional 10 GB of data would cost each user an additional EUR 26 per month and require her to enter into a two year contract.
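
The arithmetic behind that comparison is simple enough to spell out. The short Python sketch below uses only the EUR figures cited above; nothing else is assumed.

```python
# Per-GB cost comparison using the EUR figures cited above (per month).
package_price_eur = 6.99    # 10 GB app-bundle data package
package_gb = 10

alacarte_price_eur = 26.00  # additional 10 GB without the package
alacarte_gb = 10            # (and it requires a two-year contract)

print(f"Package:    EUR {package_price_eur / package_gb:.2f} per GB")    # ~0.70
print(f"A la carte: EUR {alacarte_price_eur / alacarte_gb:.2f} per GB")  # 2.60
print(f"Monthly saving from the package: "
      f"EUR {alacarte_price_eur - package_price_eur:.2f}")               # 19.01
```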

These discounted data packages also facilitate product differentiation among mobile operators that offer a variety of plans. Keeping with the Portugal example, Vodafone Portugal offers 20 GB of additional data for certain apps (Facebook, Instagram, SnapChat, and Skype, among others) with the purchase of a 3 GB mobile data plan. Consumers can pick which operator offers the best plan for them.

In addition, data packages like the ones in question here tend to increase the overall consumption of content, reduce users’ cost of obtaining information, and allow for consumers to experiment with new, less familiar apps. In short, they are overwhelmingly pro-consumer.

Even if Portugal actually didn’t have net neutrality rules, this would be the furthest thing from the apocalypse critics make it out to be.

Net Neutrality in Portugal

But, contrary to activists’ misinformation, Portugal does have net neutrality rules. The EU implemented its net neutrality framework in November 2015 as a regulation, meaning that the regulation became the law of the EU when it was enacted, and national governments, including Portugal, did not need to transpose it into national legislation.

While the regulation took effect automatically in Portugal, the regulation and the 2016 EC guidelines left the decision of whether to allow sponsored data and zero-rating plans (the Regulation would likely treat the data packages at issue here as zero-rated plans because they give users a lot of data for a low price) in the hands of national regulators. While Portugal is still formulating the standard it will use to evaluate sponsored data and zero rating under the EU’s framework, there is little reason to think that this common practice would be disallowed in Portugal.

In fact, despite its strong net neutrality regulation, the EU appears to be softening its stance toward zero rating. This was evident in a recent study by the EC’s competition authority (DG Comp) concluding that there is little reason to believe that such data practices raise concerns.

The activists’ willful misunderstanding of clearly pro-consumer data plans and purposeful mischaracterization of Portugal as not having net neutrality rules are inflammatory and deceitful. Even more puzzling (if great for consumers) is the activists’ position, given that there is nothing in the 2015 Open Internet Order that would prevent these types of data packages from being offered in the US, so long as ISPs are transparent with consumers.

On November 27, the U.S. Supreme Court will turn once again to patent law, hearing cases addressing the constitutionality of Patent Trial and Appeal Board (PTAB) “inter partes” review (Oil States Energy Services v. Greene’s Energy Group), and whether PTAB must issue a final written decision as to every claim challenged by the petitioner in an inter partes review (SAS Institute v. Matal).

As the Justices peruse the bench memos and amicus curiae briefs concerning these cases, their minds will, of course, be focused on legal questions of statutory and constitutional interpretation.  Lurking in the background of these and other patent cases, however, is an overarching economic policy issue – have recent statutory changes and case law interpretations weakened U.S. patent protection in a manner that seriously threatens future American economic growth and innovation?  In a recent Heritage Foundation Legal Memorandum, I responded in the affirmative to this question, and argued that significant statutory reforms are needed to restore the American patent system to a position of global leadership that is key to U.S. economic prosperity.  (Among other things, I noted severe constitutional problems raised by PTAB’s actions, and urged that Congress consider passing legislation to reform PTAB, if the Supreme Court upholds the constitutionality of inter partes review.)

A timely opinion article published yesterday in the Wall Street Journal emphasizes that the decline in American patent protection also has profound negative consequences for American international economic competitiveness.  Journalist David Kline, author of the commentary (“Fear American Complacency, Not China”), succinctly contrasts unfortunate U.S. patent policy developments with the recent strengthening of the Chinese patent system (a matter of high priority to the Chinese Government):

China’s entrepreneurs have been fueled by reforms in recent years that strengthened intellectual property rights—ironic for a country long accused of stealing trade secrets and ignoring IP protections. Today Chinese companies are filing for more patents than American ones. The patent application and examination process has been streamlined, and China has established specialized intellectual property courts and tribunals to adjudicate lawsuits and issue injunctions against infringers. “IP infringers will pay a heavy price,” President Xi Jinping warned this summer. . . .

In the U.S., by contrast, a series of legislative actions and Supreme Court rulings have weakened patent rights, especially for startups. A new way of challenging patents called “inter partes review” results in at least one patent claim being thrown out in roughly 80% of cases, according to an analysis by Adam Mossoff, a law professor at George Mason University. Unsurprisingly, many of these cases were brought by defendants facing patent infringement lawsuits in federal court.

This does not bode well for America’s global competitiveness. The U.S. used to rank first among nations in the strength of its intellectual property rights. But the 2017 edition of the Global IP Index places the U.S. 10th—tied with Hungary.

The Supreme Court may not be able to take judicial notice of this policy reality (although strong purely legal arguments would support a holding that PTAB inter partes review is unconstitutional), but Congress certainly can take legislative notice of it.  Let us hope that Congress acts decisively to strengthen the American patent system – in the interests of a strong, innovative, and internationally competitive American economy.

The latest rankings of trade freedom around the world will be set forth and assessed in the 24th edition of the Heritage Foundation’s annual Index of Economic Freedom (Index), which will be published in January 2018.  Today Heritage published a sneak preview of the 2018 Index’s analysis of freedom to trade, which merits public attention.  First, though, a bit of background on the Index’s philosophy and methodology is appropriate.

The nature and measurement of economic freedom are explained in the 2017 Index:

Economic freedom is the fundamental right of every human to control his or her own labor and property. In an economically free society, individuals are free to work, produce, consume, and invest in any way they please. In economically free societies, governments allow labor, capital, and goods to move freely, and refrain from coercion or constraint of liberty beyond the extent necessary to protect and maintain liberty itself. . . .  

[The Freedom Index] measure[s] economic freedom based on 12 quantitative and qualitative factors, grouped into four broad categories, or pillars, of economic freedom:

  1. Rule of Law (property rights, government integrity, judicial effectiveness)
  2. Government Size (government spending, tax burden, fiscal health)
  3. Regulatory Efficiency (business freedom, labor freedom, monetary freedom)
  4. Open Markets (trade freedom, investment freedom, financial freedom)

Each of the twelve economic freedoms within these categories is graded on a scale of 0 to 100. A country’s overall score is derived by averaging these twelve economic freedoms, with equal weight being given to each. More information on the grading and methodology can be found in the appendix.
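
For concreteness, here is a minimal sketch of that scoring rule in Python. The twelve component scores are invented for illustration; the only substance is the equal-weight average the Index describes.

```python
# Sketch of the Index's scoring rule: twelve freedoms, each graded 0-100,
# averaged with equal weight. The component scores below are made up.
freedoms = {
    # Rule of Law
    "property_rights": 80, "government_integrity": 75, "judicial_effectiveness": 70,
    # Government Size
    "government_spending": 60, "tax_burden": 65, "fiscal_health": 55,
    # Regulatory Efficiency
    "business_freedom": 85, "labor_freedom": 70, "monetary_freedom": 78,
    # Open Markets
    "trade_freedom": 86, "investment_freedom": 80, "financial_freedom": 70,
}

overall_score = sum(freedoms.values()) / len(freedoms)
print(f"Overall score: {overall_score:.1f}")  # unweighted mean of the twelve
```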

As was the case in previous versions, the 2018 Index explores various aspects of economic freedom in several essays that accompany its rankings.  In particular, with respect to international trade, the 2018 Index demonstrates that citizens of countries that embrace free trade are better off than those in countries that do not.  The data show a strong correlation between trade freedom and a variety of positive indicators, including economic prosperity, unpolluted environments, food security, gross national income per capita, and the absence of politically motivated violence or unrest.  Reducing trade barriers remains a proven recipe for prosperity that a majority of Americans support.

The 2018 Index’s three key trade-related takeaways are:

  1. A comparison of economic performance and trade scores in the 2018 Index shows how trade freedom increases prosperity and overall well-being.
  2. Countries with the most trade freedom have much higher per capita incomes, greater food security, cleaner environments, and less politically motivated violence.
  3. Free trade policies also encourage freedom in general. Most Americans support free trade, and believe its benefits outweigh any disadvantages.

Follow this space for further updates on the 2018 Index.

I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of such sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

My new book, How to Regulate: A Guide for Policymakers, is now available on Amazon.  Inform Santa!

The book, published by Cambridge University Press, attempts to fill what I think is a huge hole in legal education:  It focuses on the substance of regulation and sets forth principles for designing regulatory approaches that will maximize social welfare.

Lawyers and law professors obsess over process.  (If you doubt that, sit in on a law school faculty meeting sometime!) That obsession may be appropriate; process often determines substance.  Rarely, though, do lawyers receive training in how to design the substance of a rule or standard to address some welfare-reducing defect in private ordering.  That’s a shame, because lawyers frequently take the lead in crafting regulatory approaches.  They need to understand (1) why the unfortunate situation is occurring, (2) what options are available for addressing it, and (3) what the downsides of each option are.

Economists, of course, study those things.  But economists have their own blind spots.  Being unfamiliar with legal and regulatory processes, they often fail to comprehend how (1) government officials’ informational constraints and (2) special interests’ tendency to manipulate government power for private ends can impair a regulatory approach’s success.  (Economists affiliated with the Austrian and Public Choice schools are more attuned to those matters, but their insights are often ignored by the economists advising on regulatory approaches — see, e.g., the fine work of the Affordable Care Act architects.)

Enter How to Regulate.  The book endeavors to provide economic training to the lawyers writing rules and a sense of the “limits of law” to the economists advising them.

The book begins by setting forth an overarching goal for regulation (minimize the sum of error and decision costs) and a general plan for achieving that goal (think like a physician: identify the adverse symptom, diagnose the disease, consider the range of available remedies, and assess the side effects of each).  It then marches through six major bases for regulating: externalities, public goods, market power, information asymmetry, agency costs, and the cognitive and volitional quirks observed by behavioral economists.  For each of those bases for regulation, the book considers the symptoms that might justify a regulatory approach, the disease causing those symptoms (i.e., the underlying economics), the range of available remedies (the policy tools available), and the side effects of each (e.g., public choice concerns, mistakes from knowledge limitations).

I have been teaching How to Regulate this semester, and it’s been a blast.  Unfortunately, all of my students are in their last year of law school.  The book would be most meaningful, I think, to an upcoming second-year student.  It really lays out the basis for a number of areas of law beyond the common law:  environmental law, antitrust, corporate law, securities regulation, food labeling laws, consumer protection statutes, etc.

I was heartened to receive endorsements from a couple of very fine thinkers on regulation, both of whom have headed the Office of Information and Regulatory Affairs (the White House’s chief regulatory review body).  They also happen to occupy different spots on the ideological spectrum.

Judge Douglas Ginsburg of the D.C. Circuit wrote that the book “will be valuable for all policy wonks, not just policymakers.  It provides an organized and rigorous framework for analyzing whether and how inevitably imperfect regulation is likely to improve upon inevitably imperfect market outcomes.”

Harvard Law School’s Cass Sunstein wrote:  “This may well be the best guide, ever, to the regulatory state.  It’s brilliant, sharp, witty, and even-handed — and it’s so full of insights that it counts as a major contribution to both theory and practice.  Indispensable reading for policymakers all over the world, and also for teachers, students, and all those interested in what the shouting is really about.”

Bottom line:  There’s something for everybody in this book.  I wrote it because I think the ideas are important and under-studied.  And I really tried to make it as accessible (and occasionally funny!) as possible.

If you’re a professor and would be interested in a review copy for potential use in a class, or if you’re a potential reviewer, shoot me an email and I’ll request a review copy for you.

Canada’s large merchants have called on the government to impose price controls on interchange fees, claiming this would benefit not only merchants but also consumers. But experience elsewhere contradicts this claim.

In a recently released Macdonald Laurier Institute report, Julian Morris, Geoffrey A. Manne, Ian Lee, and Todd J. Zywicki detail how price controls on credit card interchange fees would result in reduced reward earnings and higher annual fees on credit cards, with adverse effects on consumers, many merchants and the economy as a whole.

This study draws on the experience with fee caps imposed in other jurisdictions, highlighting in particular the effects in Australia, where interchange fees were capped in 2003. There, the caps resulted in a significant decrease in the rewards earned per dollar spent and an increase in annual card fees. If similar restrictions were imposed in Canada, resulting in a 40 percent reduction in interchange fees, the authors of the report anticipate that:

  1. On average, each adult Canadian would be worse off to the tune of between $89 and $250 per year due to a loss of rewards and an increase in annual card fees:
    1. for an individual or household earning $40,000, the net loss would be $66 to $187; and
    2. for an individual or household earning $90,000, the net loss would be $199 to $562.
  2. Spending at merchants in aggregate would decline by between $1.6 billion and $4.7 billion, resulting in a net loss to merchants of between $1.6 billion and $2.8 billion.
  3. GDP would fall by between 0.12 percent and 0.19 percent per year.
  4. Federal government revenue would fall by between 0.14 percent and 0.40 percent.

Moreover, tighter fee caps would “have a more dramatic negative effect on middle class households and the economy as a whole.”
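To see the mechanism at work, here is a minimal back-of-the-envelope sketch of how a fee cap can flow through to a single cardholder. Everything in it (the 1.5 percent interchange rate, the pass-through shares, the spending level, and the cardholder_loss helper itself) is a hypothetical illustration of mine, not the report's model or its figures; the point is only the direction of the incidence.

```python
# Back-of-the-envelope sketch: how an interchange fee cap can flow through
# to cardholders as lost rewards and higher annual fees. Every parameter
# below is a hypothetical illustration, not a figure from the MLI report.

def cardholder_loss(annual_spend,
                    interchange_rate=0.015,     # assumed current fee rate (1.5%)
                    cap_reduction=0.40,         # the report's 40% reduction scenario
                    rewards_pass_through=0.75,  # assumed share recouped by cutting rewards
                    fee_pass_through=0.25):     # assumed share recouped via higher annual fees
    """Estimate a cardholder's annual loss from a capped interchange fee."""
    lost_issuer_revenue = annual_spend * interchange_rate * cap_reduction
    lost_rewards = lost_issuer_revenue * rewards_pass_through
    higher_fees = lost_issuer_revenue * fee_pass_through
    return lost_rewards + higher_fees

# A household charging $30,000 a year loses about $180 under these assumptions.
print(f"${cardholder_loss(30_000):,.2f} per year")
```

With substantial pass-through, the cap's cost lands mostly on cardholders as thinner rewards and higher fees, which is the channel the report quantifies for Canada.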

You can read the full report here.

On November 10, at the University of Southern California Law School, Assistant Attorney General for Antitrust Makan Delrahim delivered an extremely important policy address on the antitrust treatment of standard setting organizations (SSOs).  Delrahim’s remarks outlined a dramatic shift in the Antitrust Division’s approach to controversies concerning the licensing of standard essential patents (SEPs, patents that “read on” SSO technical standards) that are often subject to “fair, reasonable, and non-discriminatory” (FRAND) licensing obligations imposed by SSOs.  In particular, while Delrahim noted the theoretical concerns of possible “holdups” by SEP holders (when SEP holders threaten to delay licensing until their royalty demands are met), he cogently explained why the problem of “holdouts” by implementers of SEP technologies (when implementers threaten to under-invest in the implementation of a standard, or threaten not to take a license at all, until their royalty demands are met) is a far more serious antitrust concern.  More generally, Delrahim stressed the centrality of patents as property rights, and the need for enforcers not to interfere with the legitimate unilateral exploitation of those rights (whether through licensing, refusals to license, or the filing of injunctive actions).  Underlying Delrahim’s commentary is the understanding that innovation is vitally important to the American economy, and the concern that antitrust enforcers’ efforts in recent years have threatened to undermine innovation by inappropriately interfering in free market licensing negotiations between patentees and licensees.

Important “takeaways” from Delrahim’s speech (with key quotations) are set forth below.

  • Thumb on the scale in favor of implementers: “In particular, I worry that we as enforcers have strayed too far in the direction of accommodating the concerns of technology implementers who participate in standard setting bodies, and perhaps risk undermining incentives for IP creators, who are entitled to an appropriate reward for developing break-through technologies.”
  • Striking the right balance through market forces (as opposed to government-issued best practices): “The dueling interests of innovators and implementers always are in tension, and the tension is resolved through the free market, typically in the form of freely negotiated licensing agreements for royalties or reciprocal licenses.”
  • Holdup as theoretical concern with no evidence that it’s a systemic or widespread problem: Delrahim praises Professor Carl Shapiro for his theoretical model of holdup, but stresses that “many of the proposed [antitrust] ‘solutions’ to the hold-up problem are often anathema to the policies underlying the intellectual property system envisioned by our forefathers.”
  • Rejects prior position that antitrust is only concerned with the patent-holder side of the holdup equation, stating that he’s more concerned with holdout given the nature of investments: “Too often lost in the debate over the hold-up problem is recognition of a more serious risk:  the hold-out problem. . . . I view the collective hold-out problem as a more serious impediment to innovation.  Here is why: most importantly, the hold-up and hold-out problems are not symmetric.  What do I mean by that?  It is important to recognize that innovators make an investment before they know whether that investment will ever pay off.  If the implementers hold out, the innovator has no recourse, even if the innovation is successful.  In contrast, the implementer has some buffer against the risk of hold-up because at least some of its investments occur after royalty rates for new technology could have been determined.  Because this asymmetry exists, under-investment by the innovator should be of greater concern than under-investment by the implementer.”
  • What’s at stake: “Every incremental shift in bargaining leverage toward implementers of new technologies acting in concert can undermine incentives to innovate.  I therefore view policy proposals with a one-sided focus on the hold-up issue with great skepticism because they can pose a serious threat to the innovative process.”
  • Breach of FRAND as primarily a contract or fraud, not antitrust issue: “There is a growing trend supporting what I would view as a misuse of antitrust or competition law, purportedly motivated by the fear of so-called patent hold-up, to police private commitments that IP holders make in order to be considered for inclusion in a standard.  This trend is troublesome.  If a patent holder violates its commitments to an SSO, the first and best line of defense, I submit, is the SSO itself and its participants. . . . If a patent holder is alleged to have violated a commitment to a standard setting organization, that action may have some impact on competition.  But, I respectfully submit, that does not mean the heavy hand of antitrust necessarily is the appropriate remedy for the would-be licensee—or the enforcement agency.  There are perfectly adequate and more appropriate common law and statutory remedies available to the SSO or its members.”
  • Recommends that unilateral refusals to license should be per se lawful: “The enforcement of valid patent rights should not be a violation of antitrust law.  A patent holder cannot violate the antitrust laws by properly exercising the rights patents confer, such as seeking an injunction or refusing to license such a patent.  Set aside whether taking these actions might violate the common law.  Under the antitrust laws, I humbly submit that a unilateral refusal to license a valid patent should be per se legal.  Indeed, just this Monday, Chief Judge Diane Wood, a former Deputy Assistant Attorney General at the Antitrust Division, stated that ‘[e]ven monopolists are almost never required to assist their competitors.’”
  • Intent to investigate buyers’ cartel behavior in SSOs: “The prospect of hold-out offers implementers a crucial bargaining chip.  Unlike the unilateral hold-up problem, implementers can impose this leverage before they make significant investments in new technology.  . . . The Antitrust Division will carefully scrutinize what appears to be cartel-like anticompetitive behavior among SSO participants, either on the innovator or implementer side.  The old notion that ‘openness’ alone is sufficient to guard against cartel-like behavior in SSOs may be outdated, given the evolution of SSOs beyond strictly objective technical endeavors. . . . I likewise urge SSOs to be proactive in evaluating their own rules, both at the inception of the organization, and routinely thereafter.  In fact, SSOs would be well advised to implement and maintain internal antitrust compliance programs and regularly assess whether their rules, or the application of those rules, are or may become anticompetitive.”
  • Basing royalties on the “smallest salable component” as a requirement by a concerted agreement of implementers is a possible antitrust violation: “If an SSO pegs its definition of ‘reasonable’ royalties to a single Georgia-Pacific factor that heavily favors either implementers or innovators, then the process that led to such a rule deserves close antitrust scrutiny.  While the so-called ‘smallest salable component’ rule may be a useful tool among many in determining patent infringement damages for multi-component products, its use as a requirement by a concerted agreement of implementers as the exclusive determinant of patent royalties may very well warrant antitrust scrutiny.”
  • Right to injunctive relief and holdout incentives: “Patents are a form of property, and the right to exclude is one of the most fundamental bargaining rights a property owner possesses.  Rules that deprive a patent holder from exercising this right—whether imposed by an SSO or by a court—undermine the incentive to innovate and worsen the problem of hold-out.  After all, without the threat of an injunction, the implementer can proceed to infringe without a license, knowing that it is on the hook only for reasonable royalties.”
  • Seeking or enforcing injunctive relief generally a contract, not antitrust, issue: “It is just as important to recognize that a violation by a patent holder of an SSO rule that restricts a patent-holder’s right to seek injunctive relief should be appropriately the subject of a contract or fraud action, and rarely if ever should be an antitrust violation.”
  • FRAND is not a compulsory licensing scheme: “We should not transform commitments to license on FRAND terms into a compulsory licensing scheme.  Indeed, we have had strong policies against compulsory licensing, which effectively devalues intellectual property rights, including in most of our trade agreements, such as the TRIPS agreement of the WTO.  If an SSO requires innovators to submit to such a scheme as a condition for inclusion in a standard, we should view the SSO’s rule and the process leading to it with suspicion, and certainly not condemn the use of such injunctive relief as an antitrust violation where a contract remedy is perfectly adequate.”