
[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

[Image: Samsung SGH-F480V controller board, featuring a Qualcomm MSM6280 chip]

In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less well known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two EU decisions; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the ruling’s more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said as much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this practice contravened Federal Circuit law. Instead, it argued that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest salable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e., where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   
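To make the intuition concrete, consider a minimal numerical sketch (in Python, with invented prices and a purely hypothetical 5% rate; none of these figures reflect actual licensing terms). A single rate applied to the end-device price automatically scales royalties with the value of the complementary product, while the same rate applied to the chip price yields a flat fee in every segment:

```python
# Purely hypothetical figures -- illustrative only, not actual licensing terms.
ROYALTY_RATE = 0.05   # a single rate negotiated ex ante
CHIP_PRICE = 20.00    # modem chip price, assumed identical across segments

# Heterogeneous end devices that incorporate the same modem chip.
device_prices = {
    "budget phone": 150.00,
    "flagship phone": 1000.00,
    "tablet": 400.00,
    "connected car unit": 2500.00,
}

for device, price in device_prices.items():
    chip_based = ROYALTY_RATE * CHIP_PRICE    # flat: same fee in every segment
    device_based = ROYALTY_RATE * price       # scales with the end product's value
    print(f"{device:20s} chip base: ${chip_based:5.2f}   device base: ${device_based:8.2f}")
```

Under the chip base the inventor earns $1 per unit everywhere; under the device base the royalty automatically rises in the segments where demand for the combined product is strongest, without the inventor having to identify those segments in advance or negotiate a separate rate for each.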

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest salable component ignores this complex reality.

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. For its part, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting-edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court failed to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court fell prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
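The perfect-complements logic is easy to verify with a stylized calculation. The sketch below uses invented numbers: a handset assumed to be worth $800 with its camera, display, and modem together, and nothing if any one of them is missing:

```python
# Stylized perfect-complements example -- all figures invented for illustration.
WHOLE_VALUE = 800.00   # handset value with camera, display, and modem together
PARTIAL_VALUE = 0.00   # assumed value of the device if any one component is missing

for component in ("camera", "display", "modem"):
    # Incremental contribution: value with the component minus value without it.
    # With perfect complements this is the same for every component.
    increment = WHOLE_VALUE - PARTIAL_VALUE
    print(f"{component}: incremental contribution ${increment:.0f} "
          f"({increment / WHOLE_VALUE:.0%} of total value)")
```

Each component’s incremental contribution is 100% of the device’s value, and the “shares” sum to 300%. In such a setting there is no non-arbitrary way to apportion value among components, and reasoning that the modem “does not drive handset value” because other parts account for 60-70% of the bill of materials simply assumes away the complementarity.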

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price of a technology makes that price almost impossible to reconstruct in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

Underpinning many policy disputes is a frequently rehearsed conflict of visions: Should we experiment with policies that are likely to lead to superior, but unknown, solutions, or should we stick to well-worn policies, regardless of how poorly they fit current circumstances?

This conflict is clearly visible in the debate over whether DOJ should continue to enforce its consent decrees with the major music performing rights organizations (“PROs”), ASCAP and BMI—or terminate them. 

As we note in our recently filed comments with the DOJ, summarized below, the world has moved on since the decrees were put in place in 1941. Given the changed circumstances, the DOJ should terminate the consent decrees. This would allow entrepreneurs, armed with modern technology, to facilitate a true market for public performance rights.

The consent decrees

In the early days of radio, it was unclear how composers and publishers could effectively monitor and enforce their copyrights. Thousands of radio stations across the nation were playing the songs that tens of thousands of composers had written. Given the state of technology, there was no readily foreseeable way to enable bargaining between the stations and composers for license fees associated with these plays.

In 1914, a group of rights holders established the American Society of Composers, Authors and Publishers (ASCAP) as a way to overcome these transaction costs by negotiating with radio stations on behalf of all of its members.

Even though ASCAP’s business was clearly aimed at ensuring that rightsholders were appropriately compensated for the use of their works, which logically would have incentivized greater output of licensable works, the nonstandard arrangement it embodied was unacceptable to the antitrust enforcers of the era. Not long after it was created, the Department of Justice began investigating ASCAP for potential antitrust violations.

While the agglomeration of rights under a single entity had obvious benefits for licensors and licensees of musical works, a power struggle nevertheless emerged between ASCAP and radio broadcasters over the terms of those licenses. Eventually this struggle led to the formation of a new PRO, the broadcaster-backed BMI, in 1939. The following year, the DOJ challenged the activities of both PROs in dual criminal antitrust proceedings. The eventual result was a set of consent decrees in 1941 that, with relatively minor modifications over the years, still regulate the music industry.

Enter the Internet

The emergence of new ways to distribute music has, perhaps unsurprisingly, resulted in renewed interest from artists in developing alternative ways to license their material. In 2014, BMI and ASCAP asked the DOJ to modify their consent decrees to permit music publishers partially to withdraw from the PROs, which would have enabled those partially-withdrawing publishers to license their works to digital services under separate agreements (and prohibited the PROs from licensing their works to those same services). However, the DOJ rejected this request and insisted that the consent decree requires “full-work” licenses — a result that would have not only entrenched the status quo, but also erased the competitive differences that currently exist between the PROs. (It might also have created other problems, such as limiting collaborations between artists who currently license through different PROs.)

This episode demonstrates a critical flaw in how the consent decrees currently operate. Imposing full-work license obligations on PROs would have short-circuited the limited market that currently exists, to the detriment of creators, competition among PROs, and, ultimately, consumers. Paradoxically, these harms flow directly from a presumption that administrative officials, seeking to enforce antitrust law — the ultimate aim of which is to promote competition and consumer welfare — can dictate market terms through top-down regulatory intervention better than participants working together.

If a PRO wants to offer full-work licenses to its licensee-customers, it should be free to do so (including, e.g., by contracting with other PROs in cases where the PRO in question does not own the work outright). Such licenses could be a great boon to licensees and the market. But such an innovation would flow from a feedback mechanism in the market, and would be subject to that same feedback mechanism.

However, for the DOJ as a regulatory overseer to intervene in the market and assert a preference that it deemed superior (but that was clearly not the result of market demand, or subject to market discipline) is fraught with difficulty. And this is the emblematic problem with the consent decrees and the licensing regimes they mandate: they allow regulators to imagine that they have both the knowledge and expertise to manage highly complicated markets. But, as Mark Lemley has observed, “[g]one are the days when there was any serious debate about the superiority of a market-based economy over any of its traditional alternatives, from feudalism to communism.”

It is no knock against the DOJ that it patently does not have either the knowledge or expertise to manage these markets: no one does. That’s the entire point of having markets, which facilitate the transmission and effective utilization of vast amounts of disaggregated information, including subjective preferences, that cannot be known to anyone other than the individual who holds them. When regulators can allow this process to work, they should.

Letting the market move forward

Some advocates of the status quo have recommended that the consent orders remain in place, because 

Without robust competition in the music licensing market, consumers could face higher prices, less choice, and an increase in licensing costs that could render many vibrant public spaces silent. In the absence of a truly competitive market in which PROs compete to attract services and other licensees, the consent decrees must remain in place to prevent ASCAP and BMI from abusing their substantial market power.

This gets to the very heart of the problem with the conflict of visions that undergirds policy debates. Advocating for the status quo in this manner is based on a static view of “markets,” one that is, moreover, rooted in an early twentieth-century conception of the relevant industries. The DOJ froze the licensing market in time with the consent decrees — perhaps justifiably in 1941 given the state of technology and the very high transaction costs involved. But technology and business practices have evolved and are now much more capable of handling the complex, distributed set of transactions necessary to make the performance license market a reality.

Believing that the absence of the consent decrees will force the performance licensing market to collapse into an anticompetitive wasteland reflects a failure of imagination and suggests a fundamental distrust of the power of the market to uncover novel solutions — against the overwhelming evidence to the contrary.

Yet, those of a dull and pessimistic mindset need not fear unduly the revocation of the consent decrees. For if evidence emerges that the market participants (including the PROs and whatever other entities emerge) are engaging in anticompetitive practices to the detriment of consumer welfare, the DOJ can sue those entities. The threat of such actions should be sufficient in itself to deter such anticompetitive practices, but if it is not, then the sword of antitrust, including potentially the imposition of consent decrees, can once again be wielded.

Meanwhile, those of us with an optimistic, imaginative mindset look forward to a time in the near future when entrepreneurs devise innovative and cost-effective solutions to the problem of highly-distributed music licensing. In some respects their job is made easier by the fact that an increasing proportion of music is streamed via a small number of large companies (Spotify, Pandora, Apple, Amazon, Tencent, YouTube, Tidal, etc.). But it is quite feasible that in the absence of the consent decrees new licensing systems will emerge, using modern database technologies, blockchain and other distributed ledgers, that will enable much more effective usage-based licenses applicable not only to these streaming services but others too.

We hope the DOJ has the foresight to allow such true competition to enter this market and the strength to believe enough in our institutions that it can permit some uncertainty while entrepreneurs experiment with superior methods of facilitating music licensing.

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant limitation of the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”
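A back-of-the-envelope calculation illustrates the scale problem, quite apart from the selection-bias issue Scheuren raises. Even if the twenty-five PAEs were a genuine random sample (which, as he notes, they are not), any simple proportion estimated from them would carry a very wide margin of error. A minimal Python sketch, assuming the standard normal approximation for a proportion:

```python
import math

# Half-width of an approximate 95% confidence interval for a proportion,
# under the (counterfactual) assumption of simple random sampling.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for the FTC's sample size and two hypothetical larger ones.
for n in (25, 100, 1000):
    print(f"n = {n:4d}: margin of error up to ±{margin_of_error(0.5, n):.0%}")
```

With n = 25, any estimated share comes with an uncertainty of roughly ±20 percentage points even before accounting for the non-random way the sample was assembled, which only widens the true uncertainty.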

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


In its February 25 North Carolina Dental decision, the U.S. Supreme Court, per Justice Anthony Kennedy, held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state.  In so ruling, the Court struck a significant blow against protectionist rent-seeking and for economic liberty.  (As I stated in a recent Heritage Foundation legal memorandum, “[a] Supreme Court decision accepting this [active supervision] principle might help to curb special-interest favoritism conferred through state law.  At the very least, it could complicate the efforts of special interests to protect themselves from competition through regulation.”)

A North Carolina law subjects the licensing of dentistry to the North Carolina State Board of Dental Examiners (Board), six of whose eight members must be licensed dentists.  After dentists complained to the Board that non-dentists were charging lower prices than dentists for teeth whitening, the Board sent cease-and-desist letters to non-dentist teeth whitening providers, warning that the unlicensed practice of dentistry is a crime.  This led non-dentists to cease teeth whitening services in North Carolina.  The Federal Trade Commission (FTC) held that the Board’s actions violated Section 5 of the FTC Act, which prohibits unfair methods of competition, the Fourth Circuit agreed, and the Court affirmed the Fourth Circuit’s decision.

In its decision, the Court rejected the claim that state action immunity, which confers immunity on the anticompetitive conduct of states acting in their sovereign capacity, applied to the Board’s actions.  The Court stressed that where a state delegates control over a market to a non-sovereign actor, immunity applies only if the state accepts political accountability by actively supervising that actor’s decisions.  The Court applied its Midcal test, which requires (1) clear state articulation and (2) active state supervision of decisions by non-sovereign actors for immunity to attach.  The Court held that entities designated as state agencies are not exempt from active supervision when they are controlled by market participants, because allowing an exemption in such circumstances would pose the risk of self-dealing that the second prong of Midcal was created to address.

Here, the Board did not contend that the state exercised any (let alone active) supervision over its anticompetitive conduct.  The Court closed by summarizing “a few constant requirements of active supervision,” namely, (1) the supervisor must review the substance of the anticompetitive decision, (2) the supervisor must have the power to veto or modify particular decisions for consistency with state policy, (3) “the mere potential for state supervision is not an adequate substitute for a decision by the State,” and (4) “the state supervisor may not itself be an active market participant.”  The Court cautioned, however, that “the adequacy of supervision otherwise will depend on all the circumstances of a case.”

Justice Samuel Alito, joined by Justices Antonin Scalia and Clarence Thomas, dissented, arguing that the Court ignored precedent that state agencies created by the state legislature (“[t]he Board is not a private or ‘nonsovereign’ entity”) are shielded by the state action doctrine.  “By straying from this simple path” and assessing instead whether individual agencies are subject to regulatory capture, the Court spawned confusion, according to the dissenters.  Midcal was inapposite, because it involved a private trade association.  The dissenters feared that the majority’s decision may require states “to change the composition of medical, dental, and other boards, but it is not clear what sort of changes are needed to satisfy the test that the Court now adopts.”  The dissenters concluded “that determining when regulatory capture has occurred is no simple task.  That answer provides a reason for relieving courts from the obligation to make such determinations at all.  It does not explain why it is appropriate for the Court to adopt the rather crude test for capture that constitutes the holding of today’s decision.”

The Court’s holding in North Carolina Dental helpfully limits the scope of the Court’s infamous Parker v. Brown decision (which shielded from federal antitrust attack a California raisin producers’ cartel overseen by a state body), without excessively interfering in sovereign state prerogatives.  State legislatures may still choose to create self-interested professional regulatory bodies – their sovereignty is not compromised.  Now, however, they will have to (1) make it clearer up front that they intend to allow those bodies to displace competition, and (2) subject those bodies to disinterested third party review.  These changes should make it far easier for competition advocates (including competition agencies) to spot and publicize welfare-inimical regulatory schemes, and weaken the incentive and ability of rent-seekers to undermine competition through state regulatory processes.  All told, the burden these new judicially-imposed constraints will impose on the states appears relatively modest, and should be far outweighed by the substantial welfare benefits they are likely to generate.

Microsoft and its allies (the Microsoft-funded trade organization FairSearch and the prolific Google critic Ben Edelman) have been highly critical of Google’s use of “secret” contracts to license its proprietary suite of mobile apps, Google Mobile Services, to device manufacturers.

I’ve written about this at length before. As I said previously,

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.”

Microsoft (via another of its front groups, ICOMP) responded in predictable fashion.

While the hysteria over private, mutually beneficial contracts negotiated between sophisticated corporations was always patently absurd (who ever heard of sensitive commercial contracts that weren’t confidential?), Edelman’s claim that the Google MADAs operate to “suppress competition” with “no plausible pro-consumer benefits” was the subject of my previous post.

I won’t rehash all of those arguments here, but rather point to another indication that such contract terms are not anticompetitive: The recent revelation that they are used by others in the same industry — including, we’ve learned (to no one’s surprise), Microsoft.

Much like the release of Google’s MADAs in an unrelated lawsuit, the ongoing patent licensing contract dispute between Microsoft and Samsung has obliged the companies to release their own agreements. As it happens, they are at least as restrictive as the Google agreements criticized by Edelman — and, in at least one way, even more so.

Some quick background: As I said in my previous post, it is no secret that equipment manufacturers have the option to license a free set of Google apps (Google Mobile Services) and set Google as the default search engine. However, Google allows OEMs to preinstall other competing search engines as they see fit. Indeed, no matter which applications come pre-installed, the user can easily download Yahoo!, Microsoft’s Bing, Yandex, Naver, DuckDuckGo and other search engines for free from the Google Play Store.

But Microsoft has sought to impose even more stringent constraints on its device partners. One of the agreements disclosed in the Microsoft-Samsung contract litigation, the “Microsoft-Samsung Business Collaboration Agreement,” requires Samsung to set Bing as the search default for all Windows phones and precludes Samsung from pre-installing any other search applications on Windows-based phones. Samsung must configure all of its Windows Phones to use Microsoft Search Services as the

default Web Search . . . in all instances on such properties where Web Search can be launched or a Query submitted directly by a user (including by voice command) or automatically (including based on location or context).

Interestingly, the agreement also requires Samsung to install Microsoft Search Services as a non-default search option on all of Samsung’s non-Microsoft Android devices (to the extent doing so does not conflict with other contracts).

Of course, the Microsoft-Samsung contract is expressly intended to remain secret: Its terms are declared to be “Confidential Information,” prohibiting Samsung from making “any public statement regarding the specific terms of [the] Agreement” without Microsoft’s consent.

Meanwhile, the accompanying Patent License Agreement provides that

all terms and conditions in this Agreement, including the payment amount [and the] specific terms and conditions in this Agreement (including, without limitation, the amount of any fees and any other amounts payable to Microsoft under this Agreement) are confidential and shall not be disclosed by either Party.

In addition to the confidentiality terms spelled out in these two documents, there is a separate Non-Disclosure Agreement—to further dispel any modicum of doubt on that score. Perhaps this is why Edelman was unaware of the ubiquity of such terms (and their confidentiality) when he issued his indictment of the Google agreements but neglected to mention Microsoft’s own.

In light of these revelations, Edelman’s scathing contempt for the “secrecy” of Google’s MADAs seems especially disingenuous:

MADA secrecy advances Google’s strategic objectives. By keeping MADA restrictions confidential and little-known, Google can suppress the competitive response…Relatedly, MADA secrecy helps prevent standard market forces from disciplining Google’s restriction. Suppose consumers understood that Google uses tying and full-line-forcing to prevent manufacturers from offering phones with alternative apps, which could drive down phone prices. Then consumers would be angry and would likely make their complaints known both to regulators and to phone manufacturers. Instead, Google makes the ubiquitous presence of Google apps and the virtual absence of competitors look like a market outcome, falsely suggesting that no one actually wants to have or distribute competing apps.

If, as Edelman claims, Google’s objectionable contract terms “serve both to help Google expand into areas where competition could otherwise occur, and to prevent competitors from gaining traction,” then what are the very same sorts of terms doing in Microsoft’s contracts with Samsung? The revelation that Microsoft employs contracts similar to — and similarly confidential to — Google’s highlights the hypocrisy of claims that such contracts serve anticompetitive aims.

In fact, as I discussed in my previous post, there are several pro-competitive justifications for such agreements, whether undertaken by a market leader or a newer entrant intent on catching up. Most obviously, such contracts help to ensure that consumers receive the user experience they demand on devices manufactured by third parties. But more to the point, the fact that such arrangements permeate the market and are adopted by both large and small competitors is strong indication that such terms are pro-competitive.

At the very least, they absolutely demonstrate that such practices do not constitute prima facie evidence of the abuse of market power.

[Reminder: See the “Disclosures” page above. ICLE has received financial support from Google in the past, and I formerly worked at Microsoft. Of course, the views here are my own, although I encourage everyone to agree with them.]

The free market position on telecom reform has become rather confused of late. Erstwhile conservative Senator Thune is now cosponsoring a version of Senator Rockefeller’s previously proposed video reform bill, bundled into satellite legislation (the Satellite Television Access and Viewer Rights Act or “STAVRA”) that would also include a provision dubbed “Local Choice.” Some free marketeers have defended the bill as a step in the right direction.

Although it looks as if the proposal may be losing steam this Congress, the legislation has been described as a “big and bold idea,” and it’s by no means off the menu. But it should be.

It has been said that politics makes for strange bedfellows. Indeed, people who disagree on just about everything can sometimes unite around a common perceived enemy. Take carriage disputes, for instance. Perhaps because, for some people, a day without The Bachelor is simply a day lost, an unlikely alliance of pro-regulation activists like Public Knowledge and industry stalwarts like Dish has emerged to oppose the ability of copyright holders to withhold content as part of carriage negotiations.

Senator Rockefeller’s Online Video Bill was the catalyst for the Local Choice amendments to STAVRA. Rockefeller’s bill did, well, a lot of terrible things, from imposing certain net neutrality requirements, to overturning the Supreme Court’s Aereo decision, to adding even more complications to the already Byzantine morass of video programming regulations.

But putting Senator Thune’s lipstick on Rockefeller’s pig can’t save the bill, and some of the worst problems from Senator Rockefeller’s original proposal remain.

Among other things, the new bill is designed to weaken the ability of copyright owners to negotiate with distributors, most notably by taking away their ability to withhold content during carriage disputes and by forcing TV stations to sell content on an a la carte basis.

Video distribution issues are complicated — at least under current law. But at root these are just commercial contracts and, like any contracts, they rely on a couple of fundamental principles.

First is the basic property right. The Supreme Court (at least somewhat) settled this for now (in Aereo), by protecting the right of copyright holders to be compensated for carriage of their content. With this baseline, distributors must engage in negotiations to obtain content, rather than employing technological workarounds and exploiting legal loopholes.

Second is the related ability of contracts to govern the terms of trade. A property right isn’t worth much if its owner can’t control how it is used, governed or exchanged.

Finally, and derived from these, is the issue of bargaining power. Good-faith negotiations require both sides not to act strategically by intentionally causing negotiations to break down. But if negotiations do break down, parties need to be able to protect their rights. When content owners are not able to withhold content in carriage disputes, they are put in an untenable bargaining position. This invites bad faith negotiations by distributors.

The STAVRA/Local Choice proposal would undermine the property rights and freedom of contract that bring The Bachelor to your TV, and the proposed bill does real damage by curtailing the scope of the property right in TV programming and restricting the range of contracts available for networks to license their content.

The bill would require that essentially all broadcast stations that elect retransmission consent make their content available a la carte — thus unbundling some of the proverbial sticks that make up the traditional property right. It would also establish MVPD pass-through of each local affiliate. Subscribers would pay a fee determined by the affiliate, and the station must be offered on an unbundled basis, without any minimum tier required – meaning an MVPD has to offer local stations to its customers with no markup, on an a la carte basis, if the station doesn’t elect must-carry. It would also direct the FCC to open a rulemaking to determine whether broadcasters should be prohibited from withholding their content online during a dispute with an MVPD.

“Free market” supporters of the bill assert something like “if we don’t do this to stop blackouts, we won’t be able to stem the tide of regulation of broadcasters.” Presumably this would end blackouts of broadcast programming: If you’re an MVPD subscriber, and you pay the $1.40 (or whatever) for CBS, you get it, period. The broadcaster sets an annual per-subscriber rate; MVPDs pass it on and retransmit only to subscribers who opt in.

But none of this is good for consumers.

When transaction costs are positive, negotiations sometimes break down. If the original right is placed in the wrong hands, then contracting may not assure the most efficient outcome. I think it was Coase who said that.

But taking away the ability of content owners to restrict access to their content during a bargaining dispute effectively places the right to content in the hands of distributors. Obviously, this change in bargaining position will depress the value of content. Placing the rights in the hands of distributors reduces the incentive to create content in the first place; this is why the law protects copyright to begin with. But it also reduces the ability of content owners and distributors to reach innovative agreements and contractual arrangements (like certain promotional deals) that benefit consumers, distributors and content owners alike.

The mandating of a la carte licensing doesn’t benefit consumers, either. Bundling is generally pro-competitive and actually gives consumers more content than they would otherwise have. The bill’s proposal to force programmers to sell content to consumers a la carte may actually lead to higher overall prices for less content. Not much of a bargain.
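The economics can be seen in a classic two-viewer, two-channel illustration (all willingness-to-pay figures below are invented). Suppose one subscriber values a sports channel at $10 and a drama channel at $3, while another values them in reverse. The sketch compares the seller’s best a la carte outcome with a simple bundle:

```python
# Invented willingness-to-pay figures for a classic bundling illustration.
wtp = {
    "viewer A": {"sports": 10, "drama": 3},
    "viewer B": {"sports": 3, "drama": 10},
}

# A la carte at the revenue-maximizing single price of $10 per channel:
# each viewer buys only the channel they value at or above that price.
price = 10
alacarte_sales = [
    (viewer, channel)
    for viewer, values in wtp.items()
    for channel, v in values.items()
    if v >= price
]
print(f"a la carte: revenue ${price * len(alacarte_sales)}, "
      f"{len(alacarte_sales)} channel-subscriptions sold")

# Bundle priced at $13: each viewer values the pair at exactly $13, so both buy both.
bundle_price = 13
buyers = [v for v, values in wtp.items() if sum(values.values()) >= bundle_price]
print(f"bundle:     revenue ${bundle_price * len(buyers)}, "
      f"{2 * len(buyers)} channel-subscriptions sold")
```

In this stylized market the bundle delivers twice as much programming at the same consumer surplus, and the seller earns more with which to fund content; mandating a la carte sales here would shrink output without making viewers any better off.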

There are plenty of other ways this is bad for consumers, even if it narrowly “protects” them from blackouts. For example, the bill would prohibit a network from making a deal with an MVPD that provides a discount on a bundle including carriage of both its owned broadcast stations as well as the network’s affiliated cable programming. This is not a worthwhile — or free market — trade-off; it is an ill-advised and economically indefensible attack on vertical distribution arrangements — exactly the same thing that animates many net neutrality defenders.

Just as net neutrality’s meddling in commercial arrangements between ISPs and edge providers will ensure a host of unintended consequences, so will the Rockefeller/Thune bill foreclose a host of welfare-increasing deals. In the end, in exchange for never having to go three days without CBS content, the bill will make that content more expensive, limit the range of programming offered, and lock video distribution into a prescribed business model.

Former FCC Commissioner Rob McDowell sees the same hypocritical connection between net neutrality and broadcast regulation such as the Local Choice bill:

According to comments filed with the FCC by Time Warner Cable and the National Cable and Telecommunications Association, broadcasters should not be allowed to take down or withhold the content they produce and own from online distribution even if subscribers have not paid for it—as a matter of federal law. In other words, edge providers should be forced to stream their online content no matter what. Such an overreach, of course, would lay waste to the economics of the Internet. It would also violate the First Amendment’s prohibition against state-mandated, or forced, speech—the flip side of censorship.

It is possible that the cable companies figure that subjecting powerful broadcasters to anti-free speech rules will shift the political momentum in the FCC and among the public away from net neutrality. But cable’s anti-free speech arguments play right into the hands of the net-neutrality crowd. They want to place the entire Internet ecosystem, physical networks, content and apps, in the hands of federal bureaucrats.

While cable providers have generally opposed net neutrality regulation, there is, apparently, some support among them for regulations that would apply to the edge. The Rockefeller/Thune proposal is just a replay of this constraint — this time by forcing programmers to allow retransmission of broadcast content under terms set by Congress. While “what’s good for the goose is good for the gander” sounds appealing in theory, here it is simply doubling down on a terrible idea.

What it reveals most of all is that true neutrality advocates don’t want government control to be limited to ISPs — rather, progressives like Rockefeller (and apparently some conservatives, like Thune) want to subject the whole apparatus — distribution and content alike — to intrusive government oversight in order to “protect” consumers (a point Fred Campbell deftly expands upon here and here).

You can be sure that, if the GOP supports broadcast a la carte, it will pave the way for Democrats (and moderates like McCain who back a la carte) to expand anti-consumer unbundling requirements to cable next. Nearly every economic analysis has concluded that mandated a la carte pricing of cable programming would be harmful to consumers. There is no reason to think that applying it to broadcast channels would be any different.

What’s more, the logical extension of the bill is to apply unbundling to all MVPD channels and to saddle them with contract restraints, as well — and while we’re at it, why not unbundle House of Cards from Orange is the New Black? The Rockefeller bill may have started in part as an effort to “protect” OVDs, but there’ll be no limiting this camel once its nose is under the tent. Like it or not, channel unbundling is arbitrary — why not unbundle by program, episode, studio, production company, etc.?

There is simply no principled basis for the restraints in this bill, and thus there will be no limit to its reach. Indeed, “free market” defenders of the Rockefeller/Thune approach may well be supporting a bill that ultimately leads to something like compulsory, a la carte licensing of all video programming. As I noted in my testimony last year before the House Commerce Committee on the satellite video bill:

Unless we are prepared to bear the consumer harm from reduced variety, weakened competition and possibly even higher prices (and absolutely higher prices for some content), there is no economic justification for interfering in these business decisions.

So much for property rights — and so much for vibrant video programming.

That there is something wrong with the current system is evident to anyone who looks at it. As Gus Hurwitz noted in recent testimony on Rockefeller’s original bill,

The problems with the existing regulatory regime cannot be understated. It involves multiple statutes implemented by multiple agencies to govern technologies developed in the 60s, 70s, and 80s, according to policy goals from the 50s, 60s, and 70s. We are no longer living in a world where the Rube Goldberg of compulsory licenses, must carry and retransmission consent, financial interest and syndication exclusivity rules, and the panoply of Federal, state, and local regulations makes sense – yet these are the rules that govern the video industry.

While video regulation is in need of reform, this bill is not an improvement. In the short run it may ameliorate some carriage disputes, but it will do so at the expense of continued programming vibrancy and distribution innovations. The better way to effect change would be to abolish the Byzantine regulations that simultaneously attempt to place thumbs on both sides of the scale, and to rely on free market negotiations with a copyright baseline and antitrust review for actual abuses.

But STAVRA/Local Choice is about as far from that as you can get.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that attaches to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

[First posted to the CPIP Blog on June 17, 2014]

Last Thursday, Elon Musk, the founder and CEO of Tesla Motors, issued an announcement on the company’s blog with a catchy title: “All Our Patent Are Belong to You.” Commentary in social media and on blogs, as well as in traditional newspapers, jumped to the conclusion that Tesla is abandoning its patents and making them “freely” available to the public for whoever wants to use them. As with all things involving patented innovation these days, the reality of Tesla’s new patent policy does not match the PR spin or the buzz on the Internet.

The reality is that Tesla is not disclaiming its patent rights, despite the title of Musk’s announcement or his invocation of the well-worn cliché that patents impede innovation. In fact, Tesla’s new policy is an example of Musk exercising patent rights, not abandoning them.

If you’re not puzzled by Tesla’s announcement, you should be. This is because patents are a type of property right that secures the exclusive rights to make, use, or sell an invention for a limited period of time. These rights do not come cheap — inventions cost time, effort, and money to create, and companies like Tesla then exploit these property rights by spending even more time, effort, and money converting inventions into viable commercial products and services sold in the marketplace. Thus, if Tesla’s intention is to make its ideas available for public use, why, one may wonder, did it bother to expend tremendous resources acquiring the patents in the first place?

The key to understanding this important question lies in a single phrase in Musk’s announcement that almost everyone has failed to notice: “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” (emphasis added)

What does “in good faith” mean in this context? Fortunately, one intrepid reporter at the L.A. Times asked this question, and the answer from Musk makes clear that this new policy is not an abandonment of patent rights in favor of some fuzzy notion of the public domain, but rather an exercise of his company’s patent rights: “Tesla will allow other manufacturers to use its patents in ‘good faith’ – essentially barring those users from filing patent-infringement lawsuits against [Tesla] or trying to produce knockoffs of Tesla’s cars.” In the legalese known to patent lawyers and inventors the world over, this is not an abandonment of Tesla’s patents; this is what is known as a cross license.

In plain English, here’s the deal that Tesla is offering to manufacturers and users of its electric car technology: in exchange for using Tesla’s patents, the users of Tesla’s patents cannot file patent infringement lawsuits against Tesla if Tesla uses their other patents. In other words, this is a classic deal made between businesses all of the time — you can use my property and I can use your property, and we cannot sue each other. It’s similar to a deal between two neighbors who agree to permit each other to cross each other’s backyards. In the context of patented innovation, this agreement is more complicated, but it is in principle the same thing: if automobile manufacturer X decides to use Tesla’s patents, and Tesla begins infringing X’s patents on other technology, then X has agreed through its prior use of Tesla’s patents that it cannot sue Tesla. Thus, each party has licensed the other to make, use and sell their respective patented technologies; in patent law parlance, it’s a “cross license.”

The only thing unique about this cross licensing offer is that Tesla publicly announced it as an open offer for anyone willing to accept it. This is not a patent “free for all,” and it certainly is not tantamount to Tesla “taking down the patent wall.” These are catchy sound bites, but they in fact obfuscate the clear business-minded nature of this commercial decision.

For anyone perhaps still doubting what is happening here, the same L.A. Times story further confirms that Tesla is not abandoning the patent system. As stated to the reporter: “Tesla will continue to seek patents for its new technology to prevent others from poaching its advancements.” So much for the much-ballyhooed pronouncements last week about how Tesla’s new patent (licensing) policy “reminds us of the urgent need for patent reform”! Musk clearly believes that the patent system is working just great for the new technological innovation his engineers are creating at Tesla right now.

For those working in the innovation industries, Tesla’s decision to cross license its old patents makes sense. Tesla Motors has already extracted much of the value from these old patents: Musk was able to secure venture capital funding for his startup company, and he was able to secure for Tesla a dominant position in the electric car market through his exclusive use of this patented innovation. (Venture capitalists consistently rely on patents in making investment decisions; anyone who doubts this need only watch a few episodes of Shark Tank.) Now that everyone associates radical, cutting-edge innovation with Tesla, Musk can shift his strategic use of his company’s assets, including its intellectual property rights, such as by relying more heavily on the goodwill associated with the Tesla trademark. This is clear, for instance, from the statement to the L.A. Times that companies or individuals agreeing to the “good faith” terms of Tesla’s license agree not to make “knockoffs of Tesla’s cars.”

There are other equally important commercial reasons for Tesla adopting its new cross-licensing policy, but the point has been made. Tesla’s new cross-licensing policy for its old patents is not Musk embracing “the open source philosophy” (as he asserts in his announcement). This may make good PR given the overheated rhetoric today about the so-called “broken patent system,” but it’s time people recognize the difference between PR and a reasonable business decision that reflects a company that has used (old) patents to acquire a dominant market position and is now changing its business model given these successful developments.

At a minimum, people should recognize that Tesla is not declaring that it will not bring patent infringement lawsuits, but only that it will not sue people with whom it has licensed its patented innovation. This is not, contrary to one law professor’s statement, a company “refrain[ing] from exercising their patent rights to the fullest extent of the law.” In licensing its patented technology, Tesla is in fact exercising its patent rights to the fullest extent of the law, and that is exactly what the patent system promotes in the myriad business models and innovative products and services it makes possible.

U.S. antitrust law focuses primarily on private anticompetitive restraints, leaving the most serious impediments to a vibrant competitive process – government-initiated restraints – relatively free to flourish.  Thus the Federal Trade Commission (FTC) should be commended for its July 16 congressional testimony that spotlights a fast-growing and particularly pernicious species of (largely state) government restriction on competition – occupational licensing requirements.  Today cat groomers, flower arrangers, music therapists, tree trimmers, frozen dessert retailers, eyebrow threaders, massage therapists (human and equine), and “shampoo specialists” (to name just a few), in addition to the traditional categories of doctors, lawyers, and accountants, are subject to professional licensure.  Indeed, since the 1950s, the coverage of such rules has risen dramatically, as the percentage of Americans requiring government authorization to do their jobs has risen from less than 5 percent to roughly 30 percent.

Even though some degree of licensing responds to legitimate health and safety concerns (e.g., ensuring no fly-by-night heart surgeons), much occupational regulation creates unnecessary barriers to entry into a host of jobs.  Excessive licensing confers unwarranted benefits on fortunate incumbents, while effectively barring large numbers of capable individuals from the workforce.  (For example, many individuals skilled in natural hair braiding simply cannot afford the 2,100 hours required to obtain a license in Iowa, Nebraska, and South Dakota.)  It also imposes additional economic harms, as the FTC’s testimony explains:  “[Occupational licensure] regulations may lead to higher prices, lower quality services and products, and less convenience for consumers.  In the long term, they can cause lasting damage to competition and the competitive process by rendering markets less responsive to consumer demand and by dampening incentives for innovation in products, services, and business models.”  Licensing requirements are often enacted in tandem with other occupational regulations that unjustifiably limit the scope of beneficial services particular professionals can supply – for instance, a ban on tooth cleaning by dental hygienists not acting under a dentist’s supervision that boosts dentists’ income but denies treatment to poor children who have no access to dentists.

What legal and policy tools are available to chip away at these pernicious and costly laws and regulations, which largely are the fruit of successful special interest lobbying?  The FTC’s competition advocacy program, which responds to requests from legislators and regulators to assess the economic merits of proposed laws and regulations, has focused on unwarranted regulatory restrictions in such licensed professions as real estate brokers, electricians, accountants, lawyers, dentists, dental hygienists, nurses, eye doctors, opticians, and veterinarians.  Retrospective reviews of FTC advocacy efforts suggest it may have helped achieve some notable reforms (for example, 74% of requestors, regulators, and bill sponsors surveyed responded that FTC advocacy initiatives influenced outcomes).  Nevertheless, advocacy’s reach and effectiveness inherently are limited by FTC resource constraints, by the need to obtain “invitations” to submit comments, and by the incentive and ability of licensing scheme beneficiaries to oppose regulatory and legislative reforms.

Former FTC Chairman Kovacic and James Cooper (currently at George Mason University’s Law and Economics Center) have suggested that federal and state antitrust experts could be authorized to have ex ante input into regulatory policy making.  As the authors recognize, however, several factors sharply limit the effectiveness of such an initiative.  In particular, “the political feasibility of this approach at the legislative level is slight”, federal mandates requiring ex ante reviews would raise serious federalism concerns, and resource constraints would loom large.

Antitrust law challenges to anticompetitive licensing schemes likewise offer little solace.  They are limited by the antitrust “state action” doctrine, which shields conduct undertaken pursuant to “clearly articulated” state legislative language that displaces competition – a category that generally will cover anticompetitive licensing requirements.  Even a Supreme Court decision next term (in North Carolina Dental v. FTC) that state regulatory boards dominated by self-interested market participants must be actively supervised to enjoy state action immunity would have relatively little bite.  It would not limit states from issuing simple statutory commands that create unwarranted occupational barriers, nor would it prevent states from implementing “adequate” supervisory schemes that are designed to approve anticompetitive state board rules.

What then is to be done?

Constitutional challenges to unjustifiable licensing strictures may offer the best long-term solution to curbing this regulatory epidemic.  As Clark Neily points out in Terms of Engagement, there is a venerable constitutional tradition of protecting the liberty interest to earn a living, reflected in well-reasoned late 19th and early 20th century “Lochner-era” Supreme Court opinions.  Even if Lochner is not rehabilitated, however, there are a few recent jurisprudential “straws in the wind” that support efforts to rein in “irrational” occupational licensure barriers.  Perhaps acting under divine inspiration, the Fifth Circuit in St. Joseph Abbey (2013) ruled that Louisiana statutes that required all casket manufacturers to be licensed funeral directors – laws that prevented monks from earning a living by making simple wooden caskets – served no other purpose than to protect the funeral industry, and, as such, violated the 14th Amendment’s Equal Protection and Due Process Clauses.  In particular, the Fifth Circuit held that protectionism, standing alone, is not a legitimate state interest sufficient to establish a “rational basis” for a state statute, and that absent other legitimate state interests, the law must fall.  Since the Sixth and Ninth Circuits also have held that intrastate protectionism standing alone is not a legitimate purpose for rational basis review, but the Tenth Circuit has held to the contrary, the time may soon be ripe for the Supreme Court to review this issue and, hopefully, delegitimize pure economic protectionism.  Such a development would place added pressure on defenders of protectionist occupational licensing schemes.  Other possible avenues for constitutional challenges to protectionist licensing regimes (perhaps, for example, under the Dormant Commerce Clause) also merit being explored, of course.  The Institute for Justice already is performing yeoman’s work in litigating numerous cases involving unjustified licensing and other encroachments on economic liberty; perhaps their example can prove an inspiration for pro bono efforts by others.

Eliminating anticompetitive occupational licensing rules – and, more generally, vindicating economic liberties that too long have been neglected – is obviously a long-term project, and far-reaching reform will not happen in the near term.  Nevertheless, while we the currently living may in the long run be dead (pace Keynes), our posterity will be alive, and we owe it to them to pursue the vindication of economic liberties under the Constitution.

UPDATE: I’ve been reliably informed that Vint Cerf coined the term “permissionless innovation,” and, thus, that he did so with the sorts of private impediments discussed below in mind rather than government regulation. So consider the title of this post changed to “Permissionless innovation SHOULD not mean ‘no contracts required,'” and I’ll happily accept that my version is the “bastardized” version of the term. Which just means that the original conception was wrong and thank god for disruptive innovation in policy memes!

Can we dispense with the bastardization of the “permissionless innovation” concept (best developed by Adam Thierer) to mean “no contracts required”? I’ve been seeing this more and more, but it’s been around for a while. Some examples from among the innumerable ones out there:

Vint Cerf on net neutrality in 2009:

We believe that the vast numbers of innovative Internet applications over the last decade are a direct consequence of an open and freely accessible Internet. Many now-successful companies have deployed their services on the Internet without the need to negotiate special arrangements with Internet Service Providers, and it’s crucial that future innovators have the same opportunity. We are advocates for “permissionless innovation” that does not impede entrepreneurial enterprise.

Net neutrality is replete with this sort of idea — that any impediment to edge providers (not networks, of course) doing whatever they want to do at a zero price is a threat to innovation.

Chet Kanojia (Aereo CEO) following the Aereo decision:

It is troubling that the Court states in its decision that, ‘to the extent commercial actors or other interested entities may be concerned with the relationship between the development and use of such technologies and the Copyright Act, they are of course free to seek action from Congress.’ (Majority, page 17) That begs the question: Are we moving towards a permission-based system for technology innovation?

At least he puts it in the context of the Court’s suggestion that Congress pass a law, but what he really wants is to not have to ask “permission” of content providers to use their content.

Mike Masnick on copyright in 2010:

But, of course, the problem with all of this is that it goes back to creating permission culture, rather than a culture where people freely create. You won’t be able to use these popular or useful tools to build on the works of others — which, contrary to the claims of today’s copyright defenders, is a key component in almost all creativity you see out there — without first getting permission.

Fair use is, by definition, supposed to be “permissionless.” But advocates hardly limit the concept to fair use: they use it to justify unlimited expansion of fair use and extend it to nearly all of copyright (see, e.g., Mike Masnick again), which otherwise requires those pernicious licenses (i.e., permission) from others.

The point is, when we talk about permissionless innovation for Tesla, Uber, Airbnb, commercial drones, online data and the like, we’re talking (or should be) about ex ante government restrictions on these things — the “permission” at issue is permission from the government, it’s the “permission” required to get around regulatory roadblocks imposed via rent-seeking and baseless paternalism. As Gordon Crovitz writes, quoting Thierer:

“The central fault line in technology policy debates today can be thought of as ‘the permission question,'” Mr. Thierer writes. “Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations?”

But it isn’t (or shouldn’t be) about private contracts.

Just about all human (commercial) activity requires interaction with others, and that means contracts and licenses. You don’t see anyone complaining about the “permission” required to rent space from a landlord. But that some form of “permission” may be required to use someone else’s creative works or other property (including broadband networks) is no different. And, in fact, it is these sorts of contracts (and, yes, the revenue that may come with them) that facilitate people engaging with other commercial actors to produce things of value in the first place. The same can’t be said of government permission.

Don’t get me wrong – there may be some net welfare-enhancing regulatory limits that require forms of government permission. But the real concern is the pervasive abuse of these limits, imposed without anything approaching a rigorous welfare determination. There might even be instances where private permission, imposed, say, by a true monopolist, is problematic.

But this idea that any contractual obligation amounts to a problematic impediment to innovation is absurd, and, in fact, precisely backward. Which is why net neutrality is so misguided. Instead of identifying actual, problematic impediments to innovation, it simply assumes that networks threaten edge innovation, without any corresponding benefit and with such certainty (although no actual evidence) that ex ante common carrier regulations are required.

“Permissionless innovation” is a great phrase and, well developed (as Adam Thierer has done), a useful concept. But its bastardization to justify interference with private contracts is unsupported and pernicious.

Below is the text of my oral testimony to the Senate Commerce, Science and Transportation Committee, the Consumer Protection, Product Safety, and Insurance Subcommittee, at its November 7, 2013 hearing on “Demand Letters and Consumer Protection: Examining Deceptive Practices by Patent Assertion Entities.” Information on the hearing is here, including an archived webcast of the hearing. My much longer and more in-depth written testimony is here.

Please note that I am incorrectly identified on the hearing website as speaking on behalf of the Center for the Protection of Intellectual Property (CPIP). In fact, I was invited to testify solely in my personal capacity as a Professor of Law at George Mason University School of Law, given my academic research into the history of the patent system and the role of licensing and commercialization in the distribution of patented innovation. I spoke for neither George Mason University nor CPIP, and thus I am solely responsible for the content of my research and remarks.

Chairman McCaskill, Ranking Member Heller, and Members of the Subcommittee:

Thank you for this opportunity to speak with you today.

There certainly are bad actors, deceptive demand letters, and frivolous litigation in the patent system. The important question, though, is whether there is a systemic problem requiring further systemic revisions to the patent system. The answer to this question is no, and this is the case for three reasons.

Harm to Innovation

First, the calls to rush to enact systemic revisions to the patent system are being made without established evidence there is in fact systemic harm to innovation, let alone any harm to the consumers that Section 5 authorizes the FTC to protect. As the Government Accountability Office found in its August 2013 report on patent litigation, the frequently-cited studies claiming harms are actually “nonrandom and nongeneralizable,” which means they are unscientific and unreliable.

These anecdotal reports and unreliable studies do not prove there is a systemic problem requiring a systemic revision to patent licensing practices.

Of even greater concern is that the many changes to the patent system Congress is considering, including extending the FTC’s authority over demand letters, would impose serious costs on real innovators and thus do actual harm to America’s innovation economy and job growth.

From Charles Goodyear and Thomas Edison in the nineteenth century to IBM and Microsoft today, patent licensing has been essential in bringing patented innovation to the marketplace, creating economic growth and a flourishing society.  But expanding FTC authority to regulate requests for licensing royalties under vague evidentiary and legal standards only weakens patents and creates costly uncertainty.

This will hamper America’s innovation economy—causing reduced economic growth, lost jobs, and reduced standards of living for everyone, including the consumers the FTC is charged to protect.

Existing Tools

Second, the Patent and Trademark Office (PTO) and courts have long had the legal tools to weed out bad patents and punish bad actors, and these tools were massively expanded just two years ago with the enactment of the America Invents Act.

This is important because the real concern with demand letters is that the underlying patents are invalid.

No one denies that owners of valid patents have the right to license their property or to sue infringers, or that patent owners can even make patent licensing their sole business model, as did Charles Goodyear and Elias Howe in the mid-nineteenth century.

There are too many of these tools to discuss in my brief remarks, but to name just a few: recipients of demand letters can sue patent owners in courts through declaratory judgment actions and invalidate bad patents. And the PTO now has four separate programs dedicated solely to weeding out bad patents.

For those who lack the knowledge or resources to access these legal tools, there are now numerous legal clinics, law firms and policy organizations that actively offer assistance.

Again, further systemic changes to the patent system are unwarranted because there are existing legal tools with established legal standards to address the bad actors and their bad patents.

If Congress enacts a law this year, then it should secure full funding for the PTO. Weakening patents and creating more uncertainties in the licensing process is not the solution.

Rhetoric

Lastly, Congress is being driven to revise the patent system on the basis of rhetoric and anecdote instead of objective evidence and reasoned explanations. While there are bad actors in the patent system, terms like “PAE” and “patent troll” constantly shift in meaning. These terms have been used to cover anyone who licenses patents, including universities, startups, companies that engage in R&D, and many others.

Classic American innovators in the nineteenth century like Thomas Edison, Charles Goodyear, and Elias Howe would be called PAEs or patent trolls today. In fact, they and other patent owners made royalty demands against thousands of end users.

Congress should exercise restraint when it is being asked to enact systemic legislative or regulatory changes on the basis of pejorative labels that would lead us to condemn or discriminate against classic innovators like Edison who have contributed immensely to America’s innovation economy.

Conclusion

In conclusion, the benefits and costs of patent licensing to the innovation economy present an important empirical and policy question, but systemic changes to the patent system should not be based on rhetoric, anecdotes, invalid studies, and incorrect claims about the historical and economic significance of patent licensing.

As former PTO Director David Kappos stated last week in his testimony before the House Judiciary Committee: “we are reworking the greatest innovation engine the world has ever known, almost instantly after it has just been significantly overhauled. If there were ever a case where caution is called for, this is it.”

Thank you.

The Federalist Society has started a new program, The Executive Branch Review, which focuses on the myriad fields in which the Executive Branch acts outside of the constitutional and legal limits imposed on it, whether through Executive Orders or through the plethora of semi-independent administrative agencies’ regulatory actions.

I recently posted on the Federal Trade Commission’s (FTC) ongoing investigations into the patent licensing business model and the actions (“consent decrees”) taken by the FTC against Bosch and Google.  These consent decrees constrain Bosch’s and Google’s right to enforce patents they have committed to standard setting organizations (these patents are called “standard essential patents”). Here’s a brief taste:

One of the most prominent participants at the FTC-DOJ workshop back in December, former DOJ antitrust official and UC-Berkeley economics professor Carl Shapiro, explained in his opening speech that there was still insufficient data on patent licensing companies and their effects on the market.  This is true; for instance, a prominent study cited by Google et al. in support of their request that the FTC investigate patent licensing companies has been described as fundamentally flawed on both substantive and methodological grounds. Even more important, Professor Shapiro expressed skepticism at the workshop that, even if there were properly acquired, valid data, the FTC had the legal authority to sanction patent licensing firms for being allegedly anti-competitive.

Commentators have long noted that courts and agencies have a lousy historical track record when it comes to assessing the merits of new innovation, whether in new products or new business models. They maintain that the FTC should not continue such mistakes by letting its decision-making today be driven by rhetoric or by the widespread animus against certain commercial firms. Restraint and fact-gathering, institutional virtues reflected in a government animated by the rule of law and respect for individual rights, are key to preventing regulatory overreach and harm to future innovation.

Go read the whole thing, and, while you’re at it, check out Commissioner Joshua Wright’s similar comments on the FTC’s investigations of patent licensing companies, which the FTC calls “patent assertion entities.”