
On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate suit against Apple by the maker of an app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations requiring that virtually any vertical integration be unwound. (Should a Tesla’s motor be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability—one company’s product working with another’s.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to choose among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There thus may be particular costs that prevent interoperability from being worth the tradeoff, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If the benefits of making your product interoperable with others outweigh the costs of doing so, that should give you an advantage over competitors and allow you to win customers away from them. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop there: even if a market is not functioning well and does not currently provide some form of interoperability, we cannot assume that it would provide interoperability if it were functioning well.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other less reliable and/or scrupulous apps may have been difficult and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nods to the costs of this requirement are a provision requiring platforms to set “reasonably necessary” security standards and another allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on the costs and trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads as though interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can simply bypass ads on the service by using a third-party app that blocks them—it merely says that the overseeing “technical committee or regulator may wish to create conduct rules” to settle such questions.

Ditto for the objection that mandatory interoperability might limit differentiation among competitors: imposing the old micro-USB standard on Apple, for example, might have stopped us from getting the Lightning port. Again, the authors punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t fully functional. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which the “Super Tool” report also cites, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.

These critics argue that excessive patent protection is holding back Western economies. For instance, they posit that the owners of standard-essential patents (“SEPs”) are charging their commercial partners too much for the rights to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) are deterring innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to a deterioration of the remedies available in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most consequential has arguably been the limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of SEPs.

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent infringement suits.

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

And injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably SDOs – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy developments was the IEEE’s sweeping 2015 reform of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees”. They also mandated that royalties pertaining to SEPs be based upon the value of the smallest saleable component that practices the patented technology. Both of these measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

One baleful aspect of U.S. antitrust enforcers’ current (and misguided) focus on the unilateral exercise of patent rights is an attack on the ability of standard essential patent (SEP) holders to obtain a return that incentivizes them to participate in collective standard setting.  (This philosophy is manifested, for example, in a relatively recent U.S. Justice Department “business review letter” that lends support to the devaluation of SEPs.)  Enforcers accept the view that FRAND royalty rates should compensate SEP holders only for the value of the incremental difference between the first- and second-best technologies in a hypothetical ex ante competition among patent holders to have their patented technologies included in a proposed standard – a methodology that yields relatively low royalty rates (tending toward zero when the first- and second-best technologies are very close substitutes).  Tied to this perspective is enforcers’ concern with higher royalty rates as reflecting unearned “hold-up value” due to the “lock in” effects of a standard (the premium implementers are willing to pay patent holders whose technologies are needed to practice an established standard).  As a result, strategies by which SEP holders unilaterally seek to maximize returns to their SEP-germane intellectual property, such as threatening lawsuits seeking injunctions for patent infringement, are viewed askance.
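
To see the arithmetic behind that methodology, consider a stylized sketch of the incremental-value cap (my own illustration, with hypothetical numbers; nothing here is drawn from the enforcers’ guidance).  If V_1 and V_2 denote the per-unit value an implementer would derive from the first- and second-best candidate technologies, the approach in effect caps the FRAND royalty R at

\[
  R \;\le\; V_1 - V_2 .
\]

If, say, V_1 = $10.00 and V_2 = $9.80 per unit, the permissible royalty is at most $0.20, and it shrinks toward zero as the two technologies converge – regardless of what the winning technology cost to develop.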

The ex ante “incremental value” approach, far from being economically optimal, is inherently flawed.  It is at odds with elementary economic logic, which indicates that “ratcheting down” returns to SEPs in line with an “ex ante competition among technologies” model will lower incentives to invest in patented technologies offered up for consideration by standard-setting organizations (SSOs) in a standard-setting exercise.  That disincentive effect will in turn diminish the quality of patents that end up as SEPs – thereby reducing the magnitude of the welfare benefits stemming from standards.  In fact, the notion that FRAND principles should be applied in a manner that guarantees minimal returns to patent holders is inherently at odds with the justification for establishing a patent system in the first place.  That is because the patent system is designed to generously reward large-scale dynamic gains that stem from innovation, while the niggardly “incremental value” yardstick is a narrow static welfare measure that ignores incentive effects (much as the “marginal cost pricing” ideal of neoclassical price theory is inconsistent with Austrian and other dynamic perspectives on marketplace interactions).

Recently, lawyer-economist Greg Sidak outlined an approach to SEP FRAND-based pricing that is far more in line with economic reality – one based on golf tournament prizes.  In a paper to be delivered at the November 5, 2015 “Patents in Telecoms” Conference at George Washington University, Sidak explains that collective standard-setting through an SSO is analogous to establishing and running a professional golf tournament.  Like golf tournament organizers, SSOs may be expected to award a substantial prize to the winner that reflects a significant spread between the winner and the runner-up, in order to maximize the benefits flowing from their enterprise.  Relevant excerpts from Sidak’s draft paper (with footnotes omitted and hyperlink added) follow:

“If an inventor could receive only a pittance for his investment in developing his technology and in contributing it to a standard, he would cease contributing proprietary technologies to collective standards and instead pursue more profitable outside options.  That reasoning is even more compelling if the inventor is a publicly traded firm, answerable to its shareholders.  Therefore, modeling standard setting as a static Bertrand pricing game [reflected in the incremental value approach] without any differentiation among the competing technologies and without any outside option for the inventors would predict that every inventor loses—that is, no inventor could possibly recoup his investment in innovation and therefore would quickly exit the market.  Standard setting would be a sucker’s game for inventors.  . . .

[J]ust as the organizer of a golf tournament seeks to ensure that all contestants exert maximum effort to win the tournament, so as to ensure a competitive and entertaining tournament, the SSO must give each participant the incentive to offer the SSO its best technologies. . . .

The rivalrous process—the tournament—by which an SSO identifies and then adopts a particular technology for the standard incidentally produces something else of profound value, something which the economists who invoke static Bertrand competition to model a FRAND royalty manage to obscure.  The high level of inventor participation that a standard-setting tournament is able to elicit by virtue of its payoff structure reveals valuable information about both the inventors and the technologies that might make subsequent rounds of innovation far more socially productive (for example, by identifying dead ends that future inventors need not invest time and money in exploring).  In contrast, the alternative portrayal of standard setting as static Bertrand competition among technologies leads . . . to the dismal prediction that standard setting is essentially a lottery.  The alternative technologies are assumed to be unlimited in number and undifferentiated in quality.  All are equally mediocre. If the standard were instead a motion picture and the competing inventions were instead actors, there would be no movie stars—only extras from central casting, all equally suitable to play the leading role.  In short, a model of competition for adoption of a technology into the standard that, in practical effect, randomly selects its winner and therefore does not aggregate and reveal information is a model that ignores what Nobel laureate Friedrich Hayek long ago argued is the quintessential virtue of a market mechanism.

The economic literature finds that a tournament is efficient when the cost of measuring the absolute output of each participant sufficiently exceeds the cost of measuring the relative output of each participant compared with the other participants.  That condition obtains in the context of SEPs and SSOs.  Measuring the actual output or value of each competing technology for a standard is notoriously difficult.  However, it is much easier to ascertain the relative value of each technology.  SEP holders and implementers routinely make these ordinal comparisons in FRAND royalty disputes. Given the similarities between tournaments and collective standard setting, and the fact that it is far easier to measure the relative value of an SEP than its absolute value, it is productive to analyze the standard-setting process as if it were a tournament. . . .

[I]n addition to guaranteeing participation, the prize structure must provide a sufficient incentive to encourage participants to exert a high level of effort.  In a standard setting context, a “high level of effort” means investing significant capital and other resources to develop new technologies that have commercial value.  The economic literature . . . suggests that the level of effort that a participant exerts depends on the spread, or difference, between the prize for winning the tournament and the next-best prize.  Furthermore, . . . ‘as the spread increases, the incentive to devote additional resources to improving one’s probability of winning increases.’  That result implies that the first-place prize must exceed the second-place prize and that, the greater the disparity between those two prizes, the greater the incentive that participants have to invest in developing new and innovative technologies.”
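
The “spread” logic in this last excerpt tracks the canonical Lazear-Rosen tournament model that the quoted literature draws on.  As a rough gloss in standard textbook notation (mine, not Sidak’s): a participant choosing effort e, facing first- and second-place prizes W_1 and W_2, win probability P(e), and effort cost C(e), solves

\[
  \max_{e} \; W_2 + (W_1 - W_2)\,P(e) - C(e)
  \quad\Longrightarrow\quad
  (W_1 - W_2)\,P'(e^{*}) = C'(e^{*}).
\]

Because optimal effort e^{*} rises with the spread W_1 - W_2, compressing the first prize toward the second – as the incremental-value approach does – predicts less investment in developing standard-worthy technologies.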

Sidak’s latest insights are in line with the former bipartisan U.S. antitrust consensus (expressed in the 1995 U.S. Justice Department – Federal Trade Commission IP-Antitrust Guidelines) that antitrust enforcers should focus on targeting schemes that reduce competition among patented technologies, and not challenge unilateral efforts by patentees to maximize returns to their legally-protected property right.  U.S. antitrust enforcers (and their foreign counterparts) would be well-advised to readopt that consensus and abandon efforts to limit returns to SEPs – an approach that is inimical to innovation and to welfare-enhancing dynamic competition in technology markets.

Applying antitrust law to combat “hold-up” attempts (involving demands for “anticompetitively excessive” royalties) or injunctive actions brought by standard essential patent (SEP) owners is inherently problematic, as explained by multiple scholars (see here and here, for example).  Disputes regarding compensation to SEP holders are better handled in patent infringement and breach of contract lawsuits, and adding antitrust to the mix imposes unnecessary costs and may undermine involvement in standard setting and harm innovation.  What’s more, as FTC Commissioner Maureen Ohlhausen and former FTC Commissioner Joshua Wright have pointed out (citing research), empirical evidence suggests there is no systematic problem with hold-up.  Indeed, to the contrary, a recent empirical study by professors from Stanford, Berkeley, and the University of the Andes, accepted for publication in the Journal of Competition Law and Economics, finds that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy – a result totally at odds with theories of SEP-related competitive harm.  Thus, application of a cost-benefit approach that seeks to maximize the welfare benefits of antitrust enforcement strongly militates against continuing to pursue “SEP abuse” cases.  Enforcers should instead focus on more traditional investigations that seek to ferret out conduct that is far more likely to be welfare-inimical, if they are truly concerned about maximizing consumer welfare.

But are the leaders at the U.S. Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission (FTC) paying any attention?  The most recent public reports are not encouraging.

In a very recent filing with the U.S. International Trade Commission (ITC), FTC Chairwoman Edith Ramirez stated that “the danger that bargaining conducted in the shadow of an [ITC] exclusion order will lead to patent hold-up is real.”  (Comparable to injunctions, ITC exclusion orders preclude the importation of items that infringe U.S. patents.  They are the only effective remedy the ITC can give for patent infringement, since the ITC cannot assess damages or royalties.)  She thus argued that, before issuing an exclusion order, the ITC should require an SEP holder to show that the infringer is unwilling or unable to enter into a patent license on “fair, reasonable, and non-discriminatory” (FRAND) terms – a new and major burden on the vindication of patent rights.  In justifying this burden, Chairwoman Ramirez pointed to Motorola’s allegedly excessive SEP royalty demands from Microsoft – $6-$8 per gaming console, as opposed to a federal district court finding that pennies per console was the appropriate amount.  She also cited LSI Semiconductor’s demand for royalties that exceeded the selling price of Realtek’s standard-compliant product, whereas a federal district court found the appropriate royalty to be only 0.19% of the product’s selling price.  But these two examples do not support Chairwoman Ramirez’s point – quite the contrary.  The fact that high initial royalty requests subsequently are slashed by patent courts shows that the patent litigation system is working, not that antitrust enforcement is needed, or that a special burden of proof must be placed on SEP holders.  Moreover, differences in bargaining positions are to be expected as part of the normal back-and-forth of bargaining.  Indeed, if anything, the extremely modest judicial royalty assessments in these cases raise the concern that SEP holders are being undercompensated, not overcompensated.

A recent speech by DOJ Assistant Attorney General for Antitrust (AAG) William J. Baer, delivered at the International Bar Association’s Competition Conference, suffers from the same sort of misunderstanding as Chairwoman Ramirez’s ITC filing.  Stating that “[h]old up concerns are real”, AAG Baer cited the two examples described by Chairwoman Ramirez.  He also mentioned the fact that Innovatio requested a royalty rate of over $16 per smart tablet for its SEP portfolio, but was awarded a rate of less than 10 cents per unit by the court.  While admitting that the implementers “proved victorious in court” in those cases, he asserted that “not every implementer has the wherewithal to litigate”, that “[s]ometimes implementers accede to licensors’ demands, fearing exclusion and costly litigation”, that “consumers can be harmed and innovation incentives are distorted”, and that therefore “[a] future of exciting new products built atop existing technology may be . . . deferred”.  These theoretical concerns are belied by the lack of empirical support for hold-up, and are contradicted by the recent finding, previously noted, that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy.  (In addition, the implementers of patented technology tend to be large corporations; AAG Baer’s assertion that some may not have “the wherewithal to litigate” is a bare proposition unsupported by empirical evidence or more nuanced analysis.)  In short, DOJ, like FTC, is advancing an argument that undermines, rather than bolsters, the case for applying antitrust to SEP holders’ efforts to defend their patent rights.

Ideally the FTC and DOJ should reevaluate their recent obsession with allegedly abusive unilateral SEP behavior and refocus their attention on truly serious competitive problems.  (Chairwoman Ramirez and AAG Baer are both outstanding and highly experienced lawyers who are well-versed in policy analysis; one would hope that they would be open to reconsidering current FTC and DOJ policy toward SEPs, in light of hard evidence.)  Doing so would benefit consumer welfare and innovation – which are, after all, the goals that those important agencies are committed to promote.