Archives For standard setting

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Director Sir Tim Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video play back of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. The Microsoft case, as Wu frames it, was about one firm controlling far and away the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, by contrast, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, nor could any provider hope to use it to do so. EME simply enables content distributors to stream content through browsers using their own DRM backend. There is nothing in the standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that therefore presents novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules where there are no relevant rules to modify. His dislike of DRM blinds him to the limited nature of the EME proposal, which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected]– no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief that actions sometimes have unintended consequences (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to remove impediments to the viability and growth of online markets, and thereby to ameliorate the non-hypothetical harms of unauthorized uses. It is a modest step toward addressing a known problem. A small step, but something to celebrate, not bemoan.

One baleful aspect of U.S. antitrust enforcers’ current (and misguided) focus on the unilateral exercise of patent rights is an attack on the ability of standard essential patent (SEP) holders to obtain a return that incentivizes them to participate in collective standard setting.  (This philosophy is manifested, for example, in a relatively recent U.S. Justice Department “business review letter” that lends support to the devaluation of SEPs.)  Enforcers accept the view that FRAND royalty rates should compensate licensees only for the value of the incremental difference between the first- and second-best technologies in a hypothetical ex ante competition among patent holders to have their patented technologies included in a proposed standard – a methodology that yields relatively low royalty rates (tending toward zero when the first- and second-best technologies are very close substitutes).  Tied to this perspective is enforcers’ concern with higher royalty rates as reflecting unearned “hold-up value” due to the “lock in” effects of a standard (the premium implementers are willing to pay patent holders whose technologies are needed to practice an established standard).  As a result, strategies by which SEP holders unilaterally seek to maximize returns to their SEP-germane intellectual property, such as threatening lawsuits seeking injunctions for patent infringement, are viewed askance.

The ex ante “incremental value” approach, far from being economically optimal, is inherently flawed.  It is at odds with elementary economic logic, which indicates that “ratcheting down” returns to SEPs in line with an “ex ante competition among technologies” model will lower incentives to invest in patented technologies offered up for consideration by standard-setting organizations (SSOs) in a standard-setting exercise.  That disincentive effect will in turn diminish the quality of patents that end up as SEPs – thereby reducing the magnitude of the welfare benefits stemming from standards.  In fact, the notion that FRAND principles should be applied in a manner that guarantees minimal returns to patent holders is inherently at odds with the justification for establishing a patent system in the first place.  That is because the patent system is designed to generously reward large-scale dynamic gains that stem from innovation, while the niggardly “incremental value” yardstick is a narrow static welfare measure that ignores incentive effects (much as the “marginal cost pricing” ideal of neoclassical price theory is inconsistent with Austrian and other dynamic perspectives on marketplace interactions).
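The “ratcheting down” described above is easy to see in numbers. The following toy calculation (our own illustration, with invented valuations, not a model drawn from the enforcement guidance being criticized) computes the royalty cap implied by the ex ante incremental-value methodology:

```python
# Hypothetical valuations of technologies competing for inclusion in a
# standard; the numbers are invented purely for illustration.

def incremental_value_royalty(tech_values):
    """Royalty cap under the ex ante 'incremental value' model: the gap
    in value between the best and second-best competing technologies."""
    best, second = sorted(tech_values, reverse=True)[:2]
    return best - second

# A clearly superior technology commands a meaningful royalty...
print(incremental_value_royalty([10.0, 6.0, 5.0]))  # 4.0

# ...but with close substitutes the royalty collapses toward zero,
# however valuable the winning technology is in absolute terms.
print(incremental_value_royalty([10.0, 9.9, 9.8]))  # roughly 0.1
```

The point is not the arithmetic but the incentive it implies: as the second-best technology approaches the winner in quality, the permitted return shrinks toward zero regardless of how much was invested in developing either one.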

Recently, lawyer-economist Greg Sidak outlined an approach to SEP FRAND-based pricing that is far more in line with economic reality – one based on golf tournament prizes.  In a paper to be delivered at the November 5, 2015 “Patents in Telecoms” Conference at George Washington University, Sidak explains that collective standard-setting through a standard-setting organization (SSO) is analogous to establishing and running a professional golf tournament.  Like golf tournament organizers, SSOs may be expected to award a substantial prize to the winner that reflects a significant spread between the winner and the runner-up, in order to maximize the benefits flowing from their enterprise.  Relevant excerpts from Sidak’s draft paper (with footnotes omitted and hyperlink added) follow:

“If an inventor could receive only a pittance for his investment in developing his technology and in contributing it to a standard, he would cease contributing proprietary technologies to collective standards and instead pursue more profitable outside options.  That reasoning is even more compelling if the inventor is a publicly traded firm, answerable to its shareholders.  Therefore, modeling standard setting as a static Bertrand pricing game [reflected in the incremental value approach] without any differentiation among the competing technologies and without any outside option for the inventors would predict that every inventor loses—that is, no inventor could possibly recoup his investment in innovation and therefore would quickly exit the market.  Standard setting would be a sucker’s game for inventors.  . . .

[J]ust as the organizer of a golf tournament seeks to ensure that all contestants exert maximum effort to win the tournament, so as to ensure a competitive and entertaining tournament, the SSO must give each participant the incentive to offer the SSO its best technologies. . . .

The rivalrous process—the tournament—by which an SSO identifies and then adopts a particular technology for the standard incidentally produces something else of profound value, something which the economists who invoke static Bertrand competition to model a FRAND royalty manage to obscure.  The high level of inventor participation that a standard-setting tournament is able to elicit by virtue of its payoff structure reveals valuable information about both the inventors and the technologies that might make subsequent rounds of innovation far more socially productive (for example, by identifying dead ends that future inventors need not invest time and money in exploring).  In contrast, the alternative portrayal of standard setting as static Bertrand competition among technologies leads . . . to the dismal prediction that standard setting is essentially a lottery.  The alternative technologies are assumed to be unlimited in number and undifferentiated in quality.  All are equally mediocre. If the standard were instead a motion picture and the competing inventions were instead actors, there would be no movie stars—only extras from central casting, all equally suitable to play the leading role.  In short, a model of competition for adoption of a technology into the standard that, in practical effect, randomly selects its winner and therefore does not aggregate and reveal information is a model that ignores what Nobel laureate Friedrich Hayek long ago argued is the quintessential virtue of a market mechanism.

The economic literature finds that a tournament is efficient when the cost of measuring the absolute output of each participant sufficiently exceeds the cost of measuring the relative output of each participant compared with the other participants.  That condition obtains in the context of SEPs and SSOs.  Measuring the actual output or value of each competing technology for a standard is notoriously difficult.  However, it is much easier to ascertain the relative value of each technology.  SEP holders and implementers routinely make these ordinal comparisons in FRAND royalty disputes. Given the similarities between tournaments and collective standard setting, and the fact that it is far easier to measure the relative value of an SEP than its absolute value, it is productive to analyze the standard-setting process as if it were a tournament. . . .

[I]n addition to guaranteeing participation, the prize structure must provide a sufficient incentive to encourage participants to exert a high level of effort.  In a standard setting context, a “high level of effort” means investing significant capital and other resources to develop new technologies that have commercial value.  The economic literature . . . suggests that the level of effort that a participant exerts depends on the spread, or difference, between the prize for winning the tournament and the next-best prize.  Furthermore, . . . ‘as the spread increases, the incentive to devote additional resources to improving one’s probability of winning increases.’  That result implies that the first-place prize must exceed the second-place prize and that, the greater the disparity between those two prizes, the greater the incentive that participants have to invest in developing new and innovative technologies.”
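The quoted claim that effort rises with the spread between first and second prize can be made concrete with a toy tournament model in the spirit of the economic literature Sidak cites (our construction, with hypothetical numbers, not anything from Sidak’s paper): if a contestant’s probability of winning rises linearly with effort and effort carries a quadratic cost, the privately optimal effort works out to be proportional to the prize spread.

```python
# Toy tournament model (illustrative assumptions only): a contestant's
# expected payoff is second_prize + effort * spread - c * effort**2,
# i.e., the win probability rises linearly with effort, and effort is
# costly at an increasing rate.

def optimal_effort(first_prize, second_prize, cost_coefficient):
    """Effort maximizing the contestant's expected payoff: the first-order
    condition spread - 2*c*e = 0 gives e* = spread / (2*c)."""
    spread = first_prize - second_prize
    return spread / (2 * cost_coefficient)

# Widening the spread (holding the cost of effort fixed) raises effort:
print(optimal_effort(10.0, 8.0, 1.0))   # 1.0
print(optimal_effort(10.0, 2.0, 1.0))   # 4.0

# Equal prizes, i.e. no spread, elicit no effort at all:
print(optimal_effort(10.0, 10.0, 1.0))  # 0.0
```

In this stylized setting, compressing the winner’s prize toward the runner-up’s – which is what the incremental-value methodology does to SEP royalties – mechanically drives the effort (here, R&D investment) that participants find worthwhile toward zero.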

Sidak’s latest insights are in line with the former bipartisan U.S. antitrust consensus (expressed in the 1995 U.S. Justice Department – Federal Trade Commission IP-Antitrust Guidelines) that antitrust enforcers should focus on targeting schemes that reduce competition among patented technologies, and not challenge unilateral efforts by patentees to maximize returns to their legally-protected property right.  U.S. antitrust enforcers (and their foreign counterparts) would be well-advised to readopt that consensus and abandon efforts to limit returns to SEPs – an approach that is inimical to innovation and to welfare-enhancing dynamic competition in technology markets.

Applying antitrust law to combat “hold-up” attempts (involving demands for “anticompetitively excessive” royalties) or injunctive actions brought by standard essential patent (SEP) owners is inherently problematic, as explained by multiple scholars (see here and here, for example).  Disputes regarding compensation to SEP holders are better handled in patent infringement and breach of contract lawsuits, and adding antitrust to the mix imposes unnecessary costs and may undermine involvement in standard setting and harm innovation.  What’s more, as FTC Commissioner Maureen Ohlhausen and former FTC Commissioner Joshua Wright have pointed out (citing research), empirical evidence suggests there is no systematic problem with hold-up.  Indeed, to the contrary, a recent empirical study by Professors from Stanford, Berkeley, and the University of the Andes, accepted for publication in the Journal of Competition Law and Economics, finds that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy – a result totally at odds with theories of SEP-related competitive harm.  Thus, application of a cost-benefit approach that seeks to maximize the welfare benefits of antitrust enforcement strongly militates against continuing to pursue “SEP abuse” cases.  Enforcers should instead focus on more traditional investigations that seek to ferret out conduct that is far more likely to be welfare-inimical, if they are truly concerned about maximizing consumer welfare.

But are the leaders at the U.S. Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission paying any attention?  The most recent public reports are not encouraging.

In a very recent filing with the U.S. International Trade Commission (ITC), FTC Chairwoman Edith Ramirez stated that “the danger that bargaining conducted in the shadow of an [ITC] exclusion order will lead to patent hold-up is real.”  (Comparable to injunctions, ITC exclusion orders preclude the importation of items that infringe U.S. patents.  They are the only effective remedy the ITC can give for patent infringement, since the ITC cannot assess damages or royalties.)  She thus argued that, before issuing an exclusion order, the ITC should require an SEP holder to show that the infringer is unwilling or unable to enter into a patent license on “fair, reasonable, and non-discriminatory” (FRAND) terms – a new and major burden on the vindication of patent rights.  In justifying this burden, Chairwoman Ramirez pointed to Motorola’s allegedly excessive SEP royalty demands from Microsoft – $6-$8 per gaming console, as opposed to a federal district court finding that pennies per console was the appropriate amount.  She also cited LSI Semiconductor’s demand for royalties that exceeded the selling price of Realtek’s standard-compliant product, whereas a federal district court found the appropriate royalty to be only .19% of the product’s selling price.  But these two examples do not support Chairwoman Ramirez’s point – quite the contrary.  The fact that high initial royalty requests subsequently are slashed by patent courts shows that the patent litigation system is working, not that antitrust enforcement is needed, or that a special burden of proof must be placed on SEP holders.  Moreover, differences in bargaining positions are to be expected as part of the normal back-and-forth of bargaining.  Indeed, if anything, the extremely modest judicial royalty assessments in these cases raise the concern that SEP holders are being undercompensated, not overcompensated.

A recent speech by DOJ Assistant Attorney General for Antitrust (AAG) William J. Baer, delivered at the International Bar Association’s Competition Conference, suffers from the same sort of misunderstanding as Chairwoman Ramirez’s ITC filing.  Stating that “[h]old up concerns are real”, AAG Baer cited the two examples described by Chairwoman Ramirez.  He also mentioned the fact that Innovatio requested a royalty rate of over $16 per smart tablet for its SEP portfolio, but was awarded a rate of less than 10 cents per unit by the court.  While admitting that the implementers “proved victorious in court” in those cases, he asserted that “not every implementer has the wherewithal to litigate”, that “[s]ometimes implementers accede to licensors’ demands, fearing exclusion and costly litigation”, that “consumers can be harmed and innovation incentives are distorted”, and that therefore “[a] future of exciting new products built atop existing technology may be . . . deferred”.  These theoretical concerns are belied by the lack of empirical support for hold-up, and are contradicted by the recent finding, previously noted, that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy.  (In addition, the implementers of patented technology tend to be large corporations; AAG Baer’s assertion that some may not have “the wherewithal to litigate” is a bare proposition unsupported by empirical evidence or more nuanced analysis.)  In short, DOJ, like FTC, is advancing an argument that undermines, rather than bolsters, the case for applying antitrust to SEP holders’ efforts to defend their patent rights.

Ideally the FTC and DOJ should reevaluate their recent obsession with allegedly abusive unilateral SEP behavior and refocus their attention on truly serious competitive problems.  (Chairwoman Ramirez and AAG Baer are both outstanding and highly experienced lawyers who are well-versed in policy analysis; one would hope that they would be open to reconsidering current FTC and DOJ policy toward SEPs, in light of hard evidence.)  Doing so would benefit consumer welfare and innovation – which are, after all, the goals that those important agencies are committed to promoting.