Archives For licensing

In a constructive development, the Federal Trade Commission has joined its British counterpart in investigating Nvidia’s proposed $40 billion acquisition of chip designer Arm, a subsidiary of SoftBank. Arm provides the technological blueprints for wireless communications devices and, subject to a royalty fee, makes those crown-jewel assets available to all interested firms. Notwithstanding Nvidia’s stated commitment to keep the existing policy in place, there is an obvious risk that the new parent, one of the world’s leading chip makers, would at some point modify this policy with adverse competitive effects.

Ironically, the FTC is likely part of the reason that the Nvidia-Arm transaction is taking place.

Since the mid-2000s, the FTC and other leading competition regulators (except for the U.S. Department of Justice’s Antitrust Division under the leadership of former Assistant Attorney General Makan Delrahim) have intervened extensively in licensing arrangements in wireless device markets, culminating in the FTC’s recent failed suit against Qualcomm. The Nvidia-Arm transaction suggests that these actions may simply lead chip designers to abandon the licensing model and shift toward structures that monetize chip-design R&D through integrated hardware and software ecosystems. Amazon and Apple are already undertaking chip innovation through this model. Antitrust action that accelerates this movement toward in-house chip design is likely to have adverse effects for the competitive health of the wireless ecosystem.

How IP Licensing Promotes Market Access

Since its inception, the wireless communications market has relied on a handful of IP licensors to supply device producers and other intermediate users with a common suite of technology inputs. The result has been an efficient division of labor between firms that specialize in upstream innovation and firms that specialize in production and other downstream functions. Contrary to the standard assumption that IP rights limit access, this licensing-based model ensures technology access to any firm willing to pay the royalty fee.

Efforts by regulators to reengineer existing relationships between innovators and implementers endanger this market structure by inducing innovators to abandon licensing-based business models, which now operate under a cloud of legal insecurity, for integrated business models in which returns on R&D investments are captured internally through hardware and software products. Rather than expanding technology access and intensifying competition, antitrust restraints on licensing freedom are liable to limit technology access and increase market concentration.

Regulatory Intervention and Market Distortion

This interventionist approach has relied on the assertion that innovators can “lock in” producers and extract a disproportionate fee in exchange for access. This prediction has never found support in fact. Contrary to theoretical arguments that patent owners can impose double-digit “royalty stacks” on device producers, empirical researchers have repeatedly found that the estimated range of aggregate rates lies in the single digits. These findings are unsurprising given market performance over more than two decades: adoption has accelerated as quality-adjusted prices have fallen and innovation has never ceased. If rates had been exorbitant, market growth would have been slow, and the smartphone would be a luxury for the rich.

Despite these empirical infirmities, the FTC and other competition regulators have persisted in taking action to mitigate “holdup risk” through policy statements and enforcement actions designed to preclude IP licensors from seeking injunctive relief. The result is a one-sided legal environment in which the world’s largest device producers can effectively infringe patents at will, knowing that the worst-case scenario is a “reasonable royalty” award determined by a court, plus attorneys’ fees. Without any credible threat to deny access even after a favorable adjudication on the merits, any IP licensor’s ability to negotiate a royalty rate that reflects the value of its technology contribution is constrained.

With no change in IP licensing policy on the horizon, it is therefore not surprising that an IP licensor would seek to shift toward an integrated business model in which IP is not licensed but embedded within an integrated suite of products and services. Alternatively, an IP licensor might seek to be acquired by a firm that already has such a model in place. Hence, FTC v. Qualcomm leads Arm to Nvidia.

The Error Costs of Non-Evidence-Based Antitrust

These counterproductive effects of antitrust intervention demonstrate the error costs that arise when regulators act based on unverified assertions of impending market failure. Relying on the somewhat improbable assumption that chip suppliers can dictate licensing terms to device producers that are among the world’s largest companies, competition regulators have placed at risk the legal predicates of IP rights and enforceable contracts that have made the wireless-device market an economic success. As antitrust risk intensifies, the return on licensing strategies falls and competitive advantage shifts toward integrated firms that can monetize R&D internally through stand-alone product and service ecosystems.

Far from increasing competitiveness, regulators’ current approach toward IP licensing in wireless markets is likely to reduce it.

The European Court of Justice issued its long-awaited ruling Dec. 9 in the Groupe Canal+ case. The case centered on licensing agreements in which Paramount Pictures granted absolute territorial exclusivity to several European broadcasters, including Canal+.

Back in 2015, the European Commission charged six U.S. film studios, including Paramount, as well as British broadcaster Sky UK Ltd., with illegally limiting access to content. The crux of the EC’s complaint was that the contractual agreements to limit cross-border competition for content distribution ran afoul of European Union competition law. Paramount ultimately settled its case with the commission and agreed to remove the problematic clauses from its contracts. This affected third parties like Canal+, which lost valuable contractual protections.

While the ECJ ultimately upheld the agreements on what amounts to procedural grounds (Canal+ was unduly affected by a decision to which it was not a party), the case provides yet another example of the European Commission’s misguided stance on absolute territorial licensing, sometimes referred to as “geo-blocking.”

The EC’s long-running efforts to restrict geo-blocking emerge from its attempts to harmonize trade across the EU. Notably, in its Digital Single Market initiative, the Commission envisioned

[A] Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

This policy stance has been endorsed consistently by the European Court of Justice. In the 2011 Murphy decision, for example, the court held that agreements between rights holders and broadcasters infringe European competition law when they categorically prevent the latter from supplying “decoding devices” to consumers located in other member states. More precisely, while rights holders can license their content on a territorial basis, they cannot restrict so-called “passive sales”; broadcasters can be prevented from actively chasing consumers in other member states, but not from serving them altogether. If this sounds Kafkaesque, it’s because it is.

The problem with the ECJ’s vision is that it elides the complex factors that underlie a healthy free-trade zone. Geo-blocking frequently is misunderstood or derided by consumers as an unwarranted restriction on their consumption preferences. It doesn’t feel “fair” or “seamless” when a rights holder can decide who can access their content and on what terms. But that doesn’t mean geo-blocking is a nefarious or socially harmful practice. Quite the contrary: allowing creators to craft different sets of distribution options offers both a return to the creators and more choice in general to consumers.

In economic terms, geo-blocking allows rights holders to engage in third-degree price discrimination; that is, they have the ability to charge different prices to different sets of consumers. This type of pricing will increase total welfare so long as it increases output. As Hal Varian puts it:

If a new market is opened up because of price discrimination—a market that was not previously being served under the ordinary monopoly—then we will typically have a Pareto improving welfare enhancement.

Another benefit of third-degree price discrimination is that, by shifting some economic surplus from consumers to firms, it can stimulate investment in much the same way copyright and patents do. Put simply, the prospect of greater economic rents increases the maximum investment firms will be willing to make in content creation and distribution.
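
A stylized numerical sketch of Varian’s point may help (all numbers here are invented for illustration):

```latex
\begin{align*}
&\text{Territory } A:\ 100 \text{ consumers, each valuing the content at } 10; \quad
 \text{Territory } B:\ 100 \text{ consumers, each valuing it at } 4; \quad \text{marginal cost } c = 2.\\[4pt]
&\text{Uniform price } p = 10:\quad \pi = (10-2)\cdot 100 = 800, \quad \text{output} = 100, \quad B \text{ unserved}.\\
&\text{Geo-priced } p_A = 10,\ p_B = 3:\quad \pi = (10-2)\cdot 100 + (3-2)\cdot 100 = 900, \quad \text{output} = 200.
\end{align*}
```

Output doubles, the rights holder gains 100, consumers in territory B gain $(4-3)\cdot 100 = 100$ in surplus, and no one is worse off: a Pareto improvement that arises precisely because discrimination opens a market the uniform price left unserved.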

For these reasons, respecting parties’ freedom to license content as they see fit is likely to produce much more efficient outcomes than annulling those agreements through government-imposed “seamless access” and “fair competition” rules. Part of the value of copyright law is in creating space to contract by protecting creators’ property rights. Without geo-blocking, the enforcement of licensing agreements would become much more difficult. Laws restricting copyright owners’ ability to contract freely reduce allocational efficiency, as well as the incentives to create in the first place. Further, when individual creators have commercial and creative autonomy, they gain a degree of predictability that can ensure they will continue to produce content in the future. 

The European Union would do well to adopt a more nuanced understanding of the contractual relationships between producers and distributors. 

[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

[Image: Samsung SGH-F480V controller board with Qualcomm MSM6280 chip]

In his latest book, Tyler Cowen calls big business an “American anti-hero.” Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two investigations in the EU; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the more than 200-page document, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said as much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component-level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this practice ran contrary to Federal Circuit law and held instead that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson, Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine that the price of the smallest saleable component is identical across all industries, even though it is incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e., where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   
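
A hypothetical illustration may make the difference concrete (all figures are invented):

```latex
\begin{align*}
&\text{Chip-based royalty: } r = 5\%,\ p_{\text{chip}} = \$20
 \;\Rightarrow\; \$1 \text{ per unit, whether the handset retails at } \$300 \text{ or } \$1{,}000.\\[4pt]
&\text{Device-based royalty: } r = 3\%
 \;\Rightarrow\; \$9 \text{ on a } \$300 \text{ handset}, \quad \$30 \text{ on a } \$1{,}000 \text{ handset}.
\end{align*}
```

A single device-based rate thus lets the patent holder’s reward scale automatically with each end product’s market value, without forecasting demand segment by segment or negotiating a distinct rate for every device category.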

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting-edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
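
To see the perfect-complements logic in stylized form (a simple sketch, not drawn from the ruling):

```latex
\begin{align*}
&\text{Let } V(\text{modem}, \text{rest}) = v \text{ if both are present, and } 0 \text{ otherwise (perfect complements).}\\[4pt]
&\text{Modem's incremental contribution: } V(\text{modem}, \text{rest}) - V(\varnothing, \text{rest}) = v - 0 = v \quad (100\%).\\
&\text{Other components' incremental contribution: } V(\text{modem}, \text{rest}) - V(\text{modem}, \varnothing) = v - 0 = v \quad (100\%).
\end{align*}
```

The incremental contributions sum to 200% of total value, so they cannot be used to apportion it; observing that non-communication components account for 60-70% of a phone’s cost therefore says little about how much of the phone’s value depends on the modem.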

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price of a technology is almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

Underpinning many policy disputes is a frequently rehearsed conflict of visions: Should we experiment with policies that are likely to lead to superior, but unknown, solutions, or should we stick to well-worn policies, regardless of how poorly they fit current circumstances?

This conflict is clearly visible in the debate over whether DOJ should continue to enforce its consent decrees with the major music performing rights organizations (“PROs”), ASCAP and BMI—or terminate them. 

As we note in our recently filed comments with the DOJ, summarized below, the world has moved on since the decrees were put in place in 1941. Given the changed circumstances, the DOJ should terminate the consent decrees. This would allow entrepreneurs, armed with modern technology, to facilitate a true market for public performance rights.

The consent decrees

In the early days of radio, it was unclear how composers and publishers could effectively monitor and enforce their copyrights. Thousands of radio stations across the nation were playing the songs that tens of thousands of composers had written. Given the state of technology, there was no readily foreseeable way to enable bargaining between the stations and composers for license fees associated with these plays.

In 1914, a group of rights holders established the American Society of Composers, Authors and Publishers (ASCAP) as a way to overcome these transaction costs by negotiating with radio stations on behalf of all of its members.

Even though ASCAP’s business was clearly aimed at ensuring that rightsholders were appropriately compensated for the use of their works, which logically would have incentivized greater output of licensable works, the nonstandard arrangement it embodied was unacceptable to the antitrust enforcers of the era. Not long after it was created, the Department of Justice began investigating ASCAP for potential antitrust violations.

While the agglomeration of rights under a single entity had obvious benefits for licensors and licensees of musical works, a power struggle nevertheless emerged between ASCAP and radio broadcasters over the terms of those licenses. Eventually this struggle led to the formation of a new PRO, the broadcaster-backed BMI, in 1939. The following year, the DOJ challenged the activities of both PROs in dual criminal antitrust proceedings. The eventual result was a set of consent decrees in 1941 that, with relatively minor modifications over the years, still regulate the music industry.

Enter the Internet

The emergence of new ways to distribute music has, perhaps unsurprisingly, resulted in renewed interest from artists in developing alternative ways to license their material. In 2014, BMI and ASCAP asked the DOJ to modify their consent decrees to permit music publishers partially to withdraw from the PROs, which would have enabled those partially-withdrawing publishers to license their works to digital services under separate agreements (and prohibited the PROs from licensing their works to those same services). However, the DOJ rejected this request and insisted that the consent decrees require “full-work” licenses — a result that would not only have entrenched the status quo, but also erased the competitive differences that currently exist between the PROs. (It might also have created other problems, such as limiting collaborations between artists who currently license through different PROs.)

This episode demonstrates a critical flaw in how the consent decrees currently operate. Imposing full-work license obligations on PROs would have short-circuited the limited market that currently exists, to the detriment of creators, competition among PROs, and, ultimately, consumers. Paradoxically, these harms flow directly from a presumption that administrative officials, seeking to enforce antitrust law — the ultimate aim of which is to promote competition and consumer welfare — can dictate market terms through top-down regulatory intervention better than participants can by working together.

If a PRO wants to offer full-work licenses to its licensee-customers, it should be free to do so (including, e.g., by contracting with other PROs in cases where the PRO in question does not own the work outright). Such licenses could be a great boon to licensees and the market. But such an innovation would flow from a feedback mechanism in the market, and would be subject to that same feedback mechanism.

However, for the DOJ, as a regulatory overseer, to intervene in the market and assert a preference that it deemed superior (but that was clearly not the result of market demand, or subject to market discipline) is fraught with difficulty. And this is the emblematic problem with the consent decrees and the mandated licensing regimes: they allow regulators to imagine that they have both the knowledge and expertise to manage highly complicated markets. But, as Mark Lemley has observed, “[g]one are the days when there was any serious debate about the superiority of a market-based economy over any of its traditional alternatives, from feudalism to communism.”

It is no knock against the DOJ that it patently does not have either the knowledge or expertise to manage these markets: no one does. That’s the entire point of having markets, which facilitate the transmission and effective utilization of vast amounts of disaggregated information, including subjective preferences, that cannot be known to anyone other than the individual who holds them. When regulators can allow this process to work, they should.

Letting the market move forward

Some advocates of the status quo have recommended that the consent orders remain in place, because 

Without robust competition in the music licensing market, consumers could face higher prices, less choice, and an increase in licensing costs that could render many vibrant public spaces silent. In the absence of a truly competitive market in which PROs compete to attract services and other licensees, the consent decrees must remain in place to prevent ASCAP and BMI from abusing their substantial market power.

This gets to the very heart of the problem with the conflict of visions that undergirds policy debates. Advocating for the status quo in this manner is based on a static view of “markets,” one that is, moreover, rooted in an early twentieth-century conception of the relevant industries. The DOJ froze the licensing market in time with the consent decrees — perhaps justifiably in 1941 given the state of technology and the very high transaction costs involved. But technology and business practices have evolved and are now much more capable of handling the complex, distributed set of transactions necessary to make the performance license market a reality.

Believing that the absence of the consent decrees will force the performance licensing market to collapse into an anticompetitive wasteland reflects a failure of imagination and suggests a fundamental distrust in the power of the market to uncover novel solutions — against the overwhelming evidence to the contrary.

Yet, those of a dull and pessimistic mindset need not fear unduly the revocation of the consent decrees. For if evidence emerges that the market participants (including the PROs and whatever other entities emerge) are engaging in anticompetitive practices to the detriment of consumer welfare, the DOJ can sue those entities. The threat of such actions should be sufficient in itself to deter anticompetitive practices, but if it is not, then the sword of antitrust, including potentially the imposition of consent decrees, can once again be wielded.

Meanwhile, those of us with an optimistic, imaginative mindset look forward to a time in the near future when entrepreneurs devise innovative and cost-effective solutions to the problem of highly distributed music licensing. In some respects their job is made easier by the fact that an increasing proportion of music is streamed via a small number of large companies (Spotify, Pandora, Apple, Amazon, Tencent, YouTube, Tidal, etc.). But it is quite feasible that, in the absence of the consent decrees, new licensing systems will emerge, built on modern database technologies, blockchain, and other distributed ledgers, enabling much more effective usage-based licenses applicable not only to these streaming services but to others as well.

We hope the DOJ has the foresight to allow such true competition to enter this market and the strength to believe enough in our institutions that it can permit some uncertainty while entrepreneurs experiment with superior methods of facilitating music licensing.

An important but unheralded announcement was made on October 10, 2018: The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) released a draft CEN-CENELEC Workshop Agreement (CWA) on the licensing of Standard Essential Patents (SEPs) for 5G/Internet of Things (IoT) applications. The final agreement, due to be published in early 2019, is likely to have significant implications for the development and roll-out of both 5G and IoT applications.

CEN and CENELEC, which along with the European Telecommunications Standards Institute (ETSI) are the officially recognized standard-setting bodies in Europe, are private international non-profit organizations with a widespread network of technical experts from industry, public administrations, associations, academia and societal organizations. This first Workshop brought together representatives of the 5G/IoT technology user and provider communities to discuss licensing best practices and recommendations for a code of conduct for licensing of SEPs. The aim was to produce a CWA that reflects and balances the needs of both communities.

The final consensus outcome of the Workshop will be published as a CEN-CENELEC Workshop Agreement (CWA). The draft, which is available for public comment, comprises principles and guidelines that lay a foundation for future licensing of standard essential patents for fifth-generation (5G) technologies. The draft also contains a Q&A section to aid new implementers and patent holders.

The IoT ecosystem is likely to comprise over 20 billion interconnected devices by 2020 and represent a market of $17 trillion (about the same as the current GDP of the U.S.). The data collected by one device, such as a smart thermostat that learns what time the consumer is likely to be at home, can be used to improve the performance of another connected device, such as a smart fridge. Cellular technologies are a core component of the IoT ecosystem, alongside applications, devices, software and the like, as they provide connectivity within the IoT system. 5G technology, in particular, is expected to play a key role in complex IoT deployments, extending the use of cellular networks beyond smartphones to smart home appliances, autonomous vehicles, health-care facilities and more, in what has been aptly described as the fourth industrial revolution.

Indeed, 5G is so significant to the IoT that the proposed $117 billion takeover bid for U.S. tech giant Qualcomm by Singapore-based Broadcom was blocked by President Trump on national-security grounds. (A letter sent by the Committee on Foreign Investment in the US suggested that Broadcom might starve Qualcomm of investment, preventing it from competing effectively against foreign competitors – implicitly, those in China.)

While commercial roll-out of 5G technology has not yet fully begun, several efforts are being made by innovator companies, standard setting bodies and governments to maximize the benefits from such deployment.

The draft CWA Guidelines (hereinafter “the guidelines”) are consistent with some of the recent jurisprudence on SEPs across various issues. While they offer relatively little guidance specific to 5G SEPs, they clarify several aspects of SEP licensing that will be useful, particularly the negotiating process and the conduct of both parties.

The guidelines contain six principles, followed by a set of questions pertaining to SEP licensing. The principles deal with:

  1. The obligation of SEP holders to license the SEPs on Fair, Reasonable and Non-Discriminatory (FRAND) terms;
  2. The obligation on both parties to conduct negotiations in good faith;
  3. The obligation of both parties to provide necessary information (subject to confidentiality) to facilitate timely conclusion of the licensing negotiation;
  4. Compensation that is “fair and reasonable” and achieves the right balance between incentives to contribute technology and the cost of accessing that technology;
  5. A non-discrimination obligation on the SEP holder with respect to similarly situated licensees, who need not be identical; and
  6. Recourse to a third party FRAND determination either by court or arbitration if the negotiations fail to conclude in a timely manner.

There are also 22 questions and answers, which define basic terms and touch on issues such as what counts as good-faith conduct by negotiating parties, global portfolio licensing, FRAND royalty rates, patent pooling, dispute resolution, injunctions, and other issues relevant to FRAND licensing policy in general.

Below are some significant contributions that the draft report makes on issues such as the supply-chain level at which licensing is best done, the treatment of small and medium enterprises (SMEs), non-disclosure agreements, good-faith negotiations and alternative dispute resolution.

Typically in the IoT ecosystem, many technologies will be adopted, several of which will be standardized. The guidelines offer help to product and service developers in this regard, suggesting that one may need to obtain licenses from SEP owners for products or services incorporating communications technologies like 3G UMTS, 4G LTE, Wi-Fi, NB-IoT or Cat-M, or video codecs such as H.264. The guidelines, however, clarify that with the deployment of IoT, licenses for several other standards may be needed, and developers should be mindful of these complexities when starting out in order to avoid potential infringement.

Notably, the guidelines suggest that in order to simplify licensing, reduce costs for all parties and maintain a level playing field between licensees, SEP holders should license at one level. While this may vary between different industries, for communications technology the licensing point is often at the end-user equipment level. There has been a fair bit of debate on this issue, and the recent order by Judge Koh granting the FTC’s motion for partial summary judgment deals with some of it.

In the judgment delivered on November 6, Judge Koh relied primarily on the Ninth Circuit decisions in Microsoft v. Motorola (2012 and 2015) to rule on the core issue of the scope of FRAND commitments – specifically, on the question of whether the licensing obligation extends to all levels of the supply chain or is confined to the end-device level. The court interpreted the pro-competitive principles behind the non-discrimination requirement to mean that such commitments are “sweeping” – essentially, that an SEP holder has to license to anyone willing to offer a FRAND rate, globally. It also cited Ericsson v. D-Link, in which the Federal Circuit held that “compliant devices necessarily infringe certain claims in patents that cover technology incorporated into the standard and so practice of the standard is impossible without licenses to all incorporated SEP technology.”

The guidelines speak to the importance of non-disclosure agreements (NDAs) in such licensing negotiations, given that some of the information exchanged between the parties, such as claim charts, may be sensitive and confidential. An undue delay in agreeing to an NDA, without well-founded reasons, might therefore be taken as evidence of a lack of good faith in negotiations, rendering the licensee unwilling.

They also provide quite a boost for SMEs in licensing negotiations by addressing the duty of SEP owners to be mindful of SMEs that may be less experienced and therefore lack the information from which to draw assurance that proposed terms are FRAND. The guidelines provide that SEP owners should share whatever information they can under NDA to help the negotiation process. Equally, the same obligation applies to a more experienced licensee dealing with an SEP owner that is an SME.

The guidelines also bring some clarity to negotiation time frames, setting out maximum times that parties should take to respond to offers and counter-offers, which may extend to several months in complex cases involving hundreds of patents. They likewise prescribe how potential licensees should conduct themselves on receiving an offer and how to make counter-offers in a timely manner.

Furthermore, the guidelines lay out the various ways in which royalty rates may be structured, clarifying that there is no single fixed way this must be done. Similarly, they offer myriad ways in which potential licensees may determine for themselves whether the rates offered to them are fair and reasonable, such as third-party patent-landscape reports, public announcements and expert advice.

Finally, should a negotiation reach an impasse, the guidelines endorse alternative dispute resolution mechanisms, such as mediation or arbitration, for the parties to resolve the issue. Bodies such as the International Chamber of Commerce and the World Intellectual Property Organization may provide useful platforms in this regard.

Almost 20 years have passed since technology pioneer Kevin Ashton first coined the phrase “Internet of Things.” While companies are gearing up to participate in the IoT market, regulation and policy in the IoT world remain far from a predictable framework. There is much speculation about how rules and standards are likely to shape up, with little or no guidance for companies on how to prepare for what faces them very soon. Concrete efforts such as these are therefore welcome. The draft guidelines do attempt to offer some much-needed clarity and are now open for public comments, due by December 13. It will be good to see what the final CWA report on licensing of SEPs for 5G and IoT looks like.


Imagine if you will… that a federal regulatory agency were to decide that the iPhone ecosystem was too constraining and too expensive; that consumers — who had otherwise voted for iPhones with their dollars — were being harmed by the fact that the platform was not “open” enough.

Such an agency might resolve (on the basis of a very generous reading of a statute), to force Apple to make its iOS software available to any hardware platform that wished to have it, in the process making all of the apps and user data accessible to the consumer via these new third parties, on terms set by the agency… for free.

Difficult as it may be to picture this ever happening, it is exactly the sort of Twilight Zone scenario that FCC Chairman Tom Wheeler is currently proposing with his new set-top box proposal.

Based on the limited information we have so far (a fact sheet and an op-ed), Chairman Wheeler’s new proposal does claw back some of the worst excesses of his initial draft (which we critiqued in our comments and reply comments to that proposal).

But it also appears to reinforce others — most notably the plan’s disregard for the right of content creators to control the distribution of their content. Wheeler continues to dismiss the complex business models, relationships, and licensing terms that have evolved over years of competition and innovation. Instead, he offers a one-size-fits-all “solution” to a “problem” that market participants are already falling over themselves to provide.

Plus ça change…

To begin with, Chairman Wheeler’s new proposal is based on the same faulty premise: that consumers pay too much for set-top boxes, and that the FCC is somehow both prescient enough and Congressionally ordained to “fix” this problem. As we wrote in our initial comments, however,

[a]lthough the Commission asserts that set-top boxes are too expensive, the history of overall MVPD prices tells a remarkably different story. Since 1994, per-channel cable prices including set-top box fees have fallen by 2 percent, while overall consumer prices have increased by 54 percent. After adjusting for inflation, this represents an impressive overall price decrease.
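
Spelling out the arithmetic implicit in those figures (using only the 2 percent and 54 percent numbers quoted above):

```latex
\text{real price ratio} \;=\; \frac{1 - 0.02}{1 + 0.54} \;=\; \frac{0.98}{1.54} \;\approx\; 0.64,
\qquad \text{i.e., a real per-channel price decline of roughly } 36\% \text{ since } 1994.
```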

And the fact is that no one buys set-top boxes in isolation; rather, the price consumers pay for cable service includes the ability to access that service. Whether the set-top box fee is broken out on subscribers’ bills or not, the total price consumers pay is unlikely to change as a result of the Commission’s intervention.

As we have previously noted, the MVPD set-top box market is an aftermarket; no one buys set-top boxes without first (or simultaneously) buying MVPD service. And as economist Ben Klein (among others) has shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive:

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

Engineering the set-top box aftermarket to bring more direct competition to bear may redistribute profits, but it’s unlikely to change what consumers pay.
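
A stylized version of Klein’s point, using hypothetical numbers (the $100 package price and $10 box fee are invented for illustration):

```latex
\begin{align*}
&\text{Competition in the primary (MVPD) market pins down the package price: }
 p^{*} = p_{\text{service}} + p_{\text{box}} = \$100/\text{month}.\\[4pt]
&\text{Status quo: } p_{\text{service}} = \$90, \quad p_{\text{box}} = \$10.\\
&\text{Box fee competed to zero: } p_{\text{service}} = \$100, \quad p_{\text{box}} = \$0.
\end{align*}
```

Because the package price is set by competition in the primary market, squeezing the box component merely shifts margin into the service component; the consumer’s total outlay is unchanged.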

Stripped of its questionable claims regarding consumer prices and placed in the proper context — in which consumers enjoy more ways to access more video content than ever before — Wheeler’s initial proposal ultimately rested on its promise to “pave the way for a competitive marketplace for alternate navigation devices, and… end the need for multiple remote controls.” Weak sauce, indeed.

He now adds a new promise: that “integrated search” will be seamlessly available for consumers across the new platforms. But just as universal remotes and channel-specific apps on platforms like Apple TV have already made his “multiple remotes” promise a hollow one, so, too, have competitive pressures already begun to deliver integrated search.

Meanwhile, such marginal benefits come with a host of substantial costs, as others have pointed out. Do we really need the FCC to grant itself more powers and create a substantial and coercive new regulatory regime to mandate what the market is already poised to provide?

From ignoring copyright to obliterating copyright

Chairman Wheeler’s first proposal engendered fervent criticism for the impossible position in which it placed MVPDs — of having to disregard, even outright violate, their contractual obligations to content creators.

Commendably, the new proposal acknowledges that contractual relationships between MVPDs and content providers should remain “intact.” Thus, the proposal purports to enable programmers and MVPDs to maintain “their channel position, advertising and contracts… in place.” MVPDs will retain “end-to-end” control of the display of content through their apps, and all contractually guaranteed content protection mechanisms will remain, because the “pay-TV’s software will manage the full suite of linear and on-demand programming licensed by the pay-TV provider.”

But, improved as it is, the new proposal continues to operate in an imagined world where the incredibly intricate and complex process by which content is created and distributed can be reduced to the simplest of terms, dictated by a regulator and applied uniformly across all content and all providers.

According to the fact sheet, the new proposal would “[p]rotect[] copyrights and… [h]onor[] the sanctity of contracts” through a “standard license”:

The proposed final rules require the development of a standard license governing the process for placing an app on a device or platform. A standard license will give device manufacturers the certainty required to bring innovative products to market… The license will not affect the underlying contracts between programmers and pay-TV providers. The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.

But programming is distributed under a diverse range of contract terms. The only way a single, “standard license” could possibly honor these contracts is by forcing content providers to license all of their content under identical terms.

Leaving aside for a moment the fact that the FCC has no authority whatever to do this, for such a scheme to work, the agency would necessarily have to strip content holders of their right to govern the terms on which their content is accessed. After all, if MVPDs are legally bound to redistribute content on fixed terms, they have no room to permit content creators to freely exercise their rights to specify terms like windowing, online distribution restrictions, geographic restrictions, and the like.

In other words, the proposal simply cannot deliver on its promise that “[t]he license will not affect the underlying contracts between programmers and pay-TV providers.”

But fear not: According to the Fact Sheet, “[p]rogrammers will have a seat at the table to ensure that content remains protected.” Such largesse! One would be forgiven for assuming that the programmers’ (single?) seat will be surrounded by those of other participants — regulatory advocates, technology companies, and others — whose sole objective will be to minimize content companies’ ability to restrict the terms on which their content is accessed.

And we cannot ignore the ominous final portion of the Fact Sheet’s “Standard License” description: “The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.” Such an arrogation of ultimate authority by the FCC doesn’t bode well for that programmer’s “seat at the table” amounting to much.

Unfortunately, we can only imagine the contours of the final proposal that will describe the many ways by which distribution licenses can “harm the marketplace for competitive devices.” But an educated guess would venture that there will be precious little room for content creators and MVPDs to replicate a large swath of the contract terms they currently employ. “Any content owner can have its content painted any color that it wants, so long as it is black.”

At least we can take solace in the fact that the FCC has no authority to do what Wheeler wants it to do

And, of course, this all presumes that the FCC will be able to plausibly muster the legal authority in the Communications Act to create what amounts to a de facto compulsory licensing scheme.

A single license imposed upon all MVPDs, along with the necessary restrictions this will place upon content creators, does just as much as an overt compulsory license to undermine content owners’ statutory property rights. For every license agreement that would be different than the standard agreement, the proposed standard license would amount to a compulsory imposition of terms that the rights holders and MVPDs would not otherwise have agreed to. And if this sounds tedious and confusing, just wait until the Commission starts designing its multistakeholder Standard Licensing Oversight Process (“SLOP”)….

Unfortunately for Chairman Wheeler (but fortunately for the rest of us), the FCC has neither the legal authority, nor the requisite expertise, to enact such a regime.

Last month, the Copyright Office was clear on this score in its letter to Congress commenting on the Chairman’s original proposal:  

[I]t is important to remember that only Congress, through the exercise of its power under the Copyright Clause, and not the FCC or any other agency, has the constitutional authority to create exceptions and limitations in copyright law. While Congress has enacted compulsory licensing schemes, they have done so in response to demonstrated market failures, and in a carefully circumscribed manner.

Assuming that Section 629 of the Communications Act — the provision that otherwise empowers the Commission to promote a competitive set-top box market — fails to empower the FCC to rewrite copyright law (which is assuredly the case), the Commission will be on shaky ground for the inevitable torrent of lawsuits that will follow the revised proposal.

In fact, this new proposal feels more like an emergency pivot by a panicked Chairman than an actual, well-grounded legal recommendation. While the new proposal improves upon the original, it retains at its core the same ill-informed, ill-advised and illegal assertion of authority that plagued its predecessor.

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint on the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affects a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, the Electronic Frontier Foundation, and Engine Advocacy emphasized that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


In its February 25 North Carolina Dental decision, the U.S. Supreme Court, per Justice Anthony Kennedy, held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state.  In so ruling, the Court struck a significant blow against protectionist rent-seeking and for economic liberty.  (As I stated in a recent Heritage Foundation legal memorandum, “[a] Supreme Court decision accepting this [active supervision] principle might help to curb special-interest favoritism conferred through state law.  At the very least, it could complicate the efforts of special interests to protect themselves from competition through regulation.”)

A North Carolina law subjects the licensing of dentistry to the North Carolina State Board of Dental Examiners (Board), six of whose eight members must be licensed dentists.  After dentists complained to the Board that non-dentists were charging lower prices than dentists for teeth whitening, the Board sent cease-and-desist letters to non-dentist teeth whitening providers, warning that the unlicensed practice of dentistry is a crime.  This led non-dentists to cease offering teeth whitening services in North Carolina.  The Federal Trade Commission (FTC) held that the Board’s actions violated Section 5 of the FTC Act, which prohibits unfair methods of competition, the Fourth Circuit agreed, and the Court affirmed the Fourth Circuit’s decision.

In its decision, the Court rejected the claim that state action immunity, which confers immunity on the anticompetitive conduct of states acting in their sovereign capacity, applied to the Board’s actions.  The Court stressed that where a state delegates control over a market to a non-sovereign actor, immunity applies only if the state accepts political accountability by actively supervising that actor’s decisions.  The Court applied its Midcal test, which requires (1) clear state articulation and (2) active state supervision of decisions by non-sovereign actors for immunity to attach.  The Court held that entities designated as state agencies are not exempt from active supervision when they are controlled by market participants, because allowing an exemption in such circumstances would pose the risk of self-dealing that the second prong of Midcal was created to address.

Here, the Board did not contend that the state exercised any (let alone active) supervision over its anticompetitive conduct.  The Court closed by summarizing “a few constant requirements of active supervision,” namely, (1) the supervisor must review the substance of the anticompetitive decision, (2) the supervisor must have the power to veto or modify particular decisions for consistency with state policy, (3) “the mere potential for state supervision is not an adequate substitute for a decision by the State,” and (4) “the state supervisor may not itself be an active market participant.”  The Court cautioned, however, that “the adequacy of supervision otherwise will depend on all the circumstances of a case.”

Justice Samuel Alito, joined by Justices Antonin Scalia and Clarence Thomas, dissented, arguing that the Court ignored precedent that state agencies created by the state legislature (“[t]he Board is not a private or ‘nonsovereign’ entity”) are shielded by the state action doctrine.  “By straying from this simple path” and assessing instead whether individual agencies are subject to regulatory capture, the Court spawned confusion, according to the dissenters.  Midcal was inapposite, because it involved a private trade association.  The dissenters feared that the majority’s decision may require states “to change the composition of medical, dental, and other boards, but it is not clear what sort of changes are needed to satisfy the test that the Court now adopts.”  The dissenters concluded “that determining when regulatory capture has occurred is no simple task.  That answer provides a reason for relieving courts from the obligation to make such determinations at all.  It does not explain why it is appropriate for the Court to adopt the rather crude test for capture that constitutes the holding of today’s decision.”

The Court’s holding in North Carolina Dental helpfully limits the scope of the Court’s infamous Parker v. Brown decision (which shielded from federal antitrust attack a California raisin producers’ cartel overseen by a state body), without excessively interfering in sovereign state prerogatives.  State legislatures may still choose to create self-interested professional regulatory bodies – their sovereignty is not compromised.  Now, however, they will have to (1) make it clearer up front that they intend to allow those bodies to displace competition, and (2) subject those bodies to disinterested third party review.  These changes should make it far easier for competition advocates (including competition agencies) to spot and publicize welfare-inimical regulatory schemes, and weaken the incentive and ability of rent-seekers to undermine competition through state regulatory processes.  All told, the burden these new judicially-imposed constraints will impose on the states appears relatively modest, and should be far outweighed by the substantial welfare benefits they are likely to generate.

Microsoft and its allies (the Microsoft-funded trade organization FairSearch and the prolific Google critic Ben Edelman) have been highly critical of Google’s use of “secret” contracts to license its proprietary suite of mobile apps, Google Mobile Services, to device manufacturers.

I’ve written about this at length before. As I said previously,

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.”

Microsoft (via another of its front groups, ICOMP) responded in predictable fashion.

While the hysteria over private, mutually beneficial contracts negotiated between sophisticated corporations was always patently absurd (who ever heard of sensitive commercial contracts that weren’t confidential?), Edelman’s claim that the Google MADAs operate to “suppress competition” with “no plausible pro-consumer benefits” was the subject of my previous post.

I won’t rehash all of those arguments here, but rather point to another indication that such contract terms are not anticompetitive: The recent revelation that they are used by others in the same industry — including, we’ve learned (to no one’s surprise), Microsoft.

Much like the release of Google’s MADAs in an unrelated lawsuit, the ongoing patent licensing contract dispute between Microsoft and Samsung has obliged the companies to release their own agreements. As it happens, they are at least as restrictive as the Google agreements criticized by Edelman — and, in at least one way, even more so.

Some quick background: As I said in my previous post, it is no secret that equipment manufacturers have the option to license a free set of Google apps (Google Mobile Services) and set Google as the default search engine. However, Google allows OEMs to preinstall other competing search engines as they see fit. Indeed, no matter which applications come pre-installed, the user can easily download Yahoo!, Microsoft’s Bing, Yandex, Naver, DuckDuckGo and other search engines for free from the Google Play Store.

But Microsoft has sought to impose even more stringent constraints on its device partners. One of the agreements disclosed in the Microsoft-Samsung contract litigation, the “Microsoft-Samsung Business Collaboration Agreement,” requires Samsung to set Bing as the search default for all Windows phones and precludes Samsung from pre-installing any other search applications on Windows-based phones. Samsung must configure all of its Windows Phones to use Microsoft Search Services as the

default Web Search  . . . in all instances on such properties where Web Search can be launched or a Query submitted directly by a user (including by voice command) or automatically (including based on location or context).

Interestingly, the agreement also requires Samsung to install Microsoft Search Services as a non-default search option on all of Samsung’s non-Microsoft Android devices (to the extent doing so does not conflict with other contracts).

Of course, the Microsoft-Samsung contract is expressly intended to remain secret: Its terms are declared to be “Confidential Information,” prohibiting Samsung from making “any public statement regarding the specific terms of [the] Agreement” without Microsoft’s consent.

Meanwhile, the accompanying Patent License Agreement provides that

all terms and conditions in this Agreement, including the payment amount [and the] specific terms and conditions in this Agreement (including, without limitation, the amount of any fees and any other amounts payable to Microsoft under this Agreement) are confidential and shall not be disclosed by either Party.

In addition to the confidentiality terms spelled out in these two documents, there is a separate Non-Disclosure Agreement—to dispel any lingering doubt on that score. Perhaps this is why Edelman was unaware of the ubiquity of such terms (and their confidentiality) when he issued his indictment of the Google agreements and neglected to mention Microsoft’s own.

In light of these revelations, Edelman’s scathing contempt for the “secrecy” of Google’s MADAs seems especially disingenuous:

MADA secrecy advances Google’s strategic objectives. By keeping MADA restrictions confidential and little-known, Google can suppress the competitive response…Relatedly, MADA secrecy helps prevent standard market forces from disciplining Google’s restriction. Suppose consumers understood that Google uses tying and full-line-forcing to prevent manufacturers from offering phones with alternative apps, which could drive down phone prices. Then consumers would be angry and would likely make their complaints known both to regulators and to phone manufacturers. Instead, Google makes the ubiquitous presence of Google apps and the virtual absence of competitors look like a market outcome, falsely suggesting that no one actually wants to have or distribute competing apps.

If, as Edelman claims, Google’s objectionable contract terms “serve both to help Google expand into areas where competition could otherwise occur, and to prevent competitors from gaining traction,” then what are the very same sorts of terms doing in Microsoft’s contracts with Samsung? The revelation that Microsoft employs contracts similar to — and similarly confidential to — Google’s highlights the hypocrisy of claims that such contracts serve anticompetitive aims.

In fact, as I discussed in my previous post, there are several pro-competitive justifications for such agreements, whether undertaken by a market leader or a newer entrant intent on catching up. Most obviously, such contracts help to ensure that consumers receive the user experience they demand on devices manufactured by third parties. But more to the point, the fact that such arrangements permeate the market and are adopted by both large and small competitors is a strong indication that such terms are pro-competitive.

At the very least, they absolutely demonstrate that such practices do not constitute prima facie evidence of the abuse of market power.

[Reminder: See the “Disclosures” page above. ICLE has received financial support from Google in the past, and I formerly worked at Microsoft. Of course, the views here are my own, although I encourage everyone to agree with them.]

The free market position on telecom reform has become rather confused of late. Erstwhile conservative Senator Thune is now cosponsoring a version of Senator Rockefeller’s previously proposed video reform bill, bundled into satellite legislation (the Satellite Television Access and Viewer Rights Act or “STAVRA”) that would also include a provision dubbed “Local Choice.” Some free marketeers have defended the bill as a step in the right direction.

Although it looks as if the proposal may be losing steam this Congress, the legislation has been described as a “big and bold idea,” and it’s by no means off the menu. But it should be.

It has been said that politics makes for strange bedfellows. Indeed, people who disagree on just about everything can sometimes unite around a common perceived enemy. Take carriage disputes, for instance. Perhaps because, for some people, a day without The Bachelor is simply a day lost, an unlikely alliance of pro-regulation activists like Public Knowledge and industry stalwarts like Dish has emerged to oppose the ability of copyright holders to withhold content as part of carriage negotiations.

Senator Rockefeller’s Online Video Bill was the catalyst for the Local Choice amendments to STAVRA. Rockefeller’s bill did, well, a lot of terrible things, from imposing certain net neutrality requirements, to overturning the Supreme Court’s Aereo decision, to adding even more complications to the already Byzantine morass of video programming regulations.

But putting Senator Thune’s lipstick on Rockefeller’s pig can’t save the bill, and some of the worst problems from Senator Rockefeller’s original proposal remain.

Among other things, the new bill is designed to weaken the ability of copyright owners to negotiate with distributors, most notably by taking away their ability to withhold content during carriage disputes and by forcing TV stations to sell content on an a la carte basis.

Video distribution issues are complicated — at least under current law. But at root these are just commercial contracts and, like any contracts, they rely on a couple of fundamental principles.

First is the basic property right. The Supreme Court (at least somewhat) settled this for now in Aereo by protecting the right of copyright holders to be compensated for carriage of their content. With this baseline, distributors must engage in negotiations to obtain content, rather than employing technological workarounds and exploiting legal loopholes.

Second is the related ability of contracts to govern the terms of trade. A property right isn’t worth much if its owner can’t control how it is used, governed or exchanged.

Finally, and derived from these, is the issue of bargaining power. Good-faith negotiations require both sides not to act strategically by intentionally causing negotiations to break down. But if negotiations do break down, parties need to be able to protect their rights. When content owners are not able to withhold content in carriage disputes, they are put in an untenable bargaining position. This invites bad faith negotiations by distributors.

The STAVRA/Local Choice proposal would undermine the property rights and freedom of contract that bring The Bachelor to your TV, and the proposed bill does real damage by curtailing the scope of the property right in TV programming and restricting the range of contracts available for networks to license their content.

The bill would require that essentially all broadcast stations that elect retransmission consent make their content available a la carte — thus unbundling some of the proverbial sticks that make up the traditional property right. It would also establish MVPD pass-through of each local affiliate: subscribers would pay a fee determined by the affiliate, and the station would have to be offered on an unbundled basis, without any required minimum tier – meaning an MVPD would have to offer local stations to its customers with no markup, on an a la carte basis, unless the station elects must-carry. It would also direct the FCC to open a rulemaking to determine whether broadcasters should be prohibited from withholding their content online during a dispute with an MVPD.

“Free market” supporters of the bill assert something like “if we don’t do this to stop blackouts, we won’t be able to stem the tide of regulation of broadcasters.” Presumably this would end blackouts of broadcast programming: If you’re an MVPD subscriber, and you pay the $1.40 (or whatever) for CBS, you get it, period. The broadcaster sets an annual per-subscriber rate; MVPDs pass it on and retransmit only to subscribers who opt in.

But none of this is good for consumers.

When transaction costs are positive, negotiations sometimes break down. If the original right is placed in the wrong hands, then contracting may not assure the most efficient outcome. I think it was Coase who said that.

But taking away the ability of content owners to restrict access to their content during a bargaining dispute effectively places the right to content in the hands of distributors. Obviously, this change in bargaining position will depress the value of content. Placing the rights in the hands of distributors reduces the incentive to create content in the first place; this is why the law protects copyright to begin with. But it also reduces the ability of content owners and distributors to reach innovative agreements and contractual arrangements (like certain promotional deals) that benefit consumers, distributors and content owners alike.

The mandating of a la carte licensing doesn’t benefit consumers, either. Bundling is generally pro-competitive and actually gives consumers more content than they would otherwise have. The bill’s proposal to force programmers to sell content to consumers a la carte may actually lead to higher overall prices for less content. Not much of a bargain.
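
To see why, consider a stylized two-viewer, two-channel illustration (the numbers are assumed purely for exposition, not drawn from any actual market): suppose viewer A values channel X at $8 and channel Y at $3, while viewer B values X at $3 and Y at $8. Sold a la carte, the revenue-maximizing price for each channel is $8 (yielding $16 across the two channels, versus the $12 that a $3 price would yield), so each viewer buys exactly one channel. Sold as an $11 bundle, both viewers buy: the programmer earns $22, and each viewer receives both channels at an implicit price of $5.50 apiece. Mandating a la carte in this stylized example thus produces higher per-channel prices and less content consumed, precisely the outcome the bill’s supporters claim to be protecting consumers against.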

There are plenty of other ways this is bad for consumers, even if it narrowly “protects” them from blackouts. For example, the bill would prohibit a network from making a deal with an MVPD that provides a discount on a bundle including carriage of both its owned broadcast stations as well as the network’s affiliated cable programming. This is not a worthwhile — or free market — trade-off; it is an ill-advised and economically indefensible attack on vertical distribution arrangements — exactly the same thing that animates many net neutrality defenders.

Just as net neutrality’s meddling in commercial arrangements between ISPs and edge providers will ensure a host of unintended consequences, so will the Rockefeller/Thune bill foreclose a host of welfare-increasing deals. In the end, in exchange for never having to go three days without CBS content, the bill will make that content more expensive, limit the range of programming offered, and lock video distribution into a prescribed business model.

Former FCC Commissioner Rob McDowell sees the same hypocritical connection between net neutrality and broadcast regulation like the Local Choice bill:

According to comments filed with the FCC by Time Warner Cable and the National Cable and Telecommunications Association, broadcasters should not be allowed to take down or withhold the content they produce and own from online distribution even if subscribers have not paid for it—as a matter of federal law. In other words, edge providers should be forced to stream their online content no matter what. Such an overreach, of course, would lay waste to the economics of the Internet. It would also violate the First Amendment’s prohibition against state-mandated, or forced, speech—the flip side of censorship.

It is possible that the cable companies figure that subjecting powerful broadcasters to anti-free speech rules will shift the political momentum in the FCC and among the public away from net neutrality. But cable’s anti-free speech arguments play right into the hands of the net-neutrality crowd. They want to place the entire Internet ecosystem, physical networks, content and apps, in the hands of federal bureaucrats.

While cable providers have generally opposed net neutrality regulation, there is, apparently, some support among them for regulations that would apply to the edge. The Rockefeller/Thune proposal is just a replay of this constraint — this time by forcing programmers to allow retransmission of broadcast content under terms set by Congress. While “what’s good for the goose is good for the gander” sounds appealing in theory, here it is simply doubling down on a terrible idea.

What it reveals most of all is that true neutrality advocates don’t want government control to be limited to ISPs — rather, progressives like Rockefeller (and apparently some conservatives, like Thune) want to subject the whole apparatus — distribution and content alike — to intrusive government oversight in order to “protect” consumers (a point Fred Campbell deftly expands upon here and here).

You can be sure that, if the GOP supports broadcast a la carte, it will pave the way for Democrats (and moderates like McCain who back a la carte) to expand anti-consumer unbundling requirements to cable next. Nearly every economic analysis has concluded that mandated a la carte pricing of cable programming would be harmful to consumers. There is no reason to think that applying it to broadcast channels would be any different.

What’s more, the logical extension of the bill is to apply unbundling to all MVPD channels and to saddle them with contract restraints, as well — and while we’re at it, why not unbundle House of Cards from Orange is the New Black? The Rockefeller bill may have started in part as an effort to “protect” OVDs, but there’ll be no limiting this camel once its nose is under the tent. Like it or not, channel unbundling is arbitrary — why not unbundle by program, episode, studio, production company, etc.?

There is simply no principled basis for the restraints in this bill, and thus there will be no limit to its reach. Indeed, “free market” defenders of the Rockefeller/Thune approach may well be supporting a bill that ultimately leads to something like compulsory, a la carte licensing of all video programming. As I noted in my testimony last year before the House Commerce Committee on the satellite video bill:

Unless we are prepared to bear the consumer harm from reduced variety, weakened competition and possibly even higher prices (and absolutely higher prices for some content), there is no economic justification for interfering in these business decisions.

So much for property rights — and so much for vibrant video programming.

That there is something wrong with the current system is evident to anyone who looks at it. As Gus Hurwitz noted in recent testimony on Rockefeller’s original bill,

The problems with the existing regulatory regime cannot be understated. It involves multiple statutes implemented by multiple agencies to govern technologies developed in the 60s, 70s, and 80s, according to policy goals from the 50s, 60s, and 70s. We are no longer living in a world where the Rube Goldberg of compulsory licenses, must carry and retransmission consent, financial interest and syndication exclusivity rules, and the panoply of Federal, state, and local regulations makes sense – yet these are the rules that govern the video industry.

While video regulation is in need of reform, this bill is not an improvement. In the short run it may ameliorate some carriage disputes, but it will do so at the expense of continued programming vibrancy and distribution innovations. The better way to effect change would be to abolish the Byzantine regulations that simultaneously attempt to place thumbs on both sides of the scale, and to rely on free market negotiations with a copyright baseline and antitrust review for actual abuses.

But STAVRA/Local Choice is about as far from that as you can get.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that attaches to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

[First posted to the CPIP Blog on June 17, 2014]

Last Thursday, Elon Musk, the founder and CEO of Tesla Motors, issued an announcement on the company’s blog with a catchy title: “All Our Patent Are Belong to You.” Commentary in social media and on blogs, as well as in traditional newspapers, jumped to the conclusion that Tesla is abandoning its patents and making them “freely” available to the public for whomever wants to use them. As with all things involving patented innovation these days, the reality of Tesla’s new patent policy does not match the PR spin or the buzz on the Internet.

The reality is that Tesla is not disclaiming its patent rights, despite the title of Musk’s announcement and his invocation of the tread-worn cliché that patents impede innovation. In fact, Tesla’s new policy is an example of Musk exercising patent rights, not abandoning them.

If you’re not puzzled by Tesla’s announcement, you should be. This is because patents are a type of property right that secures the exclusive rights to make, use, or sell an invention for a limited period of time. These rights do not come cheap — inventions cost time, effort, and money to create, and companies like Tesla then exploit these property rights by spending even more time, effort, and money to convert inventions into viable commercial products and services sold in the marketplace. Thus, if Tesla’s intention is to make its ideas available for public use, why, one may wonder, did it bother to expend the tremendous resources to acquire the patents in the first place?

The key to answering this important question lies in a single phrase in Musk’s announcement that almost everyone has failed to notice: “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” (emphasis added)

What does “in good faith” mean in this context? Fortunately, one intrepid reporter at the L.A. Times asked this question, and the answer from Musk makes clear that this new policy is not an abandonment of patent rights in favor of some fuzzy notion of the public domain, but rather an exercise of his company’s patent rights: “Tesla will allow other manufacturers to use its patents in ‘good faith’ – essentially barring those users from filing patent-infringement lawsuits against [Tesla] or trying to produce knockoffs of Tesla’s cars.” In the legalese known to patent lawyers and inventors the world over, this is not an abandonment of Tesla’s patents; this is what is known as a cross license.

In plain English, here’s the deal that Tesla is offering to manufacturers and users of its electric car technology: in exchange for using Tesla’s patents, the users of Tesla’s patents cannot file patent infringement lawsuits against Tesla if Tesla uses their other patents. In other words, this is a classic deal made between businesses all of the time — you can use my property and I can use your property, and we cannot sue each other. It’s a similar deal to that made between two neighbors who agree to permit each other to cross each other’s backyard. In the context of patented innovation, this agreement is more complicated, but it is in principle the same thing: if automobile manufacturer X decides to use Tesla’s patents, and Tesla begins infringing X’s patents on other technology, then X has agreed through its prior use of Tesla’s patents that it cannot sue Tesla. Thus, each party has licensed the other to make, use and sell their respective patented technologies; in patent law parlance, it’s a “cross license.”

The only thing unique about this cross licensing offer is that Tesla publicly announced it as an open offer for anyone willing to accept it. This is not a patent “free for all,” and it certainly is not tantamount to Tesla “taking down the patent wall.” These are catchy sound bites, but they in fact obfuscate the clear business-minded nature of this commercial decision.

For anyone perhaps still doubting what is happening here, the same L.A. Times story further confirms that Tesla is not abandoning the patent system. As stated to the reporter: “Tesla will continue to seek patents for its new technology to prevent others from poaching its advancements.” So much for the much-ballyhooed pronouncements last week of how Tesla’s new patent (licensing) policy “reminds us of the urgent need for patent reform”! Musk clearly believes that the patent system is working just great for the new technological innovation his engineers are creating at Tesla right now.

For those working in the innovation industries, Tesla’s decision to cross license its old patents makes sense. Tesla Motors has already extracted much of the value from these old patents: Musk was able to secure venture capital funding for his startup company, and he was able to secure for Tesla a dominant position in the electric car market through his exclusive use of this patented innovation. (Venture capitalists consistently rely on patents in making investment decisions; anyone who doubts this need only watch a few episodes of Shark Tank.) Now that everyone associates radical, cutting-edge innovation with Tesla, Musk can shift his strategic use of the company’s assets, including its intellectual property rights, such as by relying more heavily on the goodwill associated with the Tesla trademark. This is clear, for instance, from the statement to the L.A. Times that companies or individuals agreeing to the “good faith” terms of Tesla’s license agree not to make “knockoffs of Tesla’s cars.”

There are other equally important commercial reasons for Tesla adopting its new cross-licensing policy, but the point has been made. Tesla’s new cross-licensing policy for its old patents is not Musk embracing “the open source philosophy” (as he asserts in his announcement). This may make good PR given the overheated rhetoric today about the so-called “broken patent system,” but it’s time people recognize the difference between PR and a reasonable business decision that reflects a company that has used (old) patents to acquire a dominant market position and is now changing its business model given these successful developments.

At a minimum, people should recognize that Tesla is not declaring that it will not bring patent infringement lawsuits, but only that it will not sue people with whom it has licensed its patented innovation. This is not, contrary to one law professor’s statement, a company “refrain[ing] from exercising their patent rights to the fullest extent of the law.” In licensing its patented technology, Tesla is in fact exercising its patent rights to the fullest extent of the law, and that is exactly what the patent system promotes in the myriad business models and innovative