An unofficial version of the EU’s anticipated regulatory proposal on standard essential patents (SEPs), along with a related impact assessment, was leaked earlier this month, generating reactions that range from disquiet to disbelief (but mostly disbelief).
As it currently stands, it appears the regulation will significantly increase costs to the most innovative companies that participate in multiple standardization activities. It would, for instance, regulate technology prices, limit the enforcement of patent rights, and introduce new avenues for further delays in SEP-licensing negotiations.
It also might harm the EU’s standing as a global innovator and set a precedent for other countries to impose similar regulations, possibly jeopardizing how the entire international technical-standardization system functions.
Dubious Premises
The regulation originates from last year’s call by the European Commission to establish principles and implement measures that will foster a “balanced,” “smooth,” and “predictable” framework for SEP licensing. With this in mind, the reform aims “to promote an efficient and sustainable SEP licensing ecosystem, where the interests of both SEP holders and implementers are considered” [emphasis added]. As explicitly mentioned in the call, the main problems affecting the SEP ecosystem are holdup, holdout, and forum shopping.
Unfortunately, it is far from clear these premises are correct or that they justify the sort of regulation the Commission now contemplates.
The draft regulation purports to fix a broken regime by promoting efficient licensing and ensuring a fair balance between the interests of patent holders and implementers, in order to mitigate the risks of both holdup and holdout, as required by well-established case law and, in particular, by the Court of Justice of the European Union’s (CJEU) landmark Huawei v. ZTE ruling.
There is, however, scant evidence that the current SEP-licensing regime is inefficient or unbalanced. The best evidence is that SEP-reliant industries are no less efficient than other innovative industries. Likewise, SEP holders do not appear to be capturing the lion’s share of profits in the industries where they operate. In short, it’s not clear that there is any problem to solve in the first place.
There is also scant evidence that the Commission has taken account of hugely important geopolitical considerations. Policymakers are worried that Chinese companies (with the support of Chinese courts and authorities) may use litigation strategies to obtain significantly lower “fair, reasonable, and non-discriminatory” (FRAND) rates.
Indeed, the EU filed a case against China at the World Trade Organization (WTO) last year that complained about the strategic use of anti-suit injunctions (ASIs)—that is, orders restraining a party either from pursuing foreign proceedings or enforcing a judgment obtained in foreign proceedings. As explained in a recent paper, this trend could have severe economic repercussions, given that the smooth implementation of mobile-telecommunication standards is crucial to the economic potential of both the so-called “Internet of Things” and U.S. communications infrastructure writ large.
By disproportionately targeting inventors (as we argue below), the draft regulation penalizes precisely those companies that, from the perspective of geopolitics, it should be protecting (or, at least, not undermining). Indeed, as the Commission’s impact assessment warns, the share of SEPs owned by Chinese companies has increased dramatically in recent years. Penalizing European inventors will only exacerbate this trend.
Missing the Mark
Given the importance of achieving a balance between holdup and holdout, as well as avoiding steps that could reinforce China’s position on the geopolitical map, the leaked version of the forthcoming EU regulation is deeply concerning, to say the least.
Rather than wrestling with these complex issues, the proposal essentially focuses on ensuring that implementers receive licenses at affordable royalty rates. In other words, it would create significant red tape and compliance costs in an attempt to address an issue that is mostly peripheral to the stated aims, and arguably already dealt with by EU courts in Huawei v. ZTE. That decision, notably, forces parties to negotiate royalties in good faith before they can pursue judicial remedies, such as ASIs.
Critically, the proposal’s premise is that there is currently little transparency regarding the aggregate royalties that implementers pay for all the SEPs that underpin a standard. The proposal assumes that making this information public would enable implementers to make better determinations when they negotiate royalties.
To address this, the proposal creates several mandatory procedures that ultimately serve to make information on total royalty burdens public. It also creates a procedure that parties can use to obtain nonbinding FRAND royalty determinations from third-party arbitrators. More precisely, if contributors do not agree on an aggregate royalty sufficiently early, before components and products implementing the standardized technology are put on the market, implementers and/or contributors can ask the EU Intellectual Property Office (EUIPO) to appoint conciliators tasked with recommending an aggregate royalty (with exceedingly limited ability to appeal such determinations).
The proposal has at least two important drawbacks.
To start, it is unclear what a nonbinding royalty recommendation would achieve. On the one hand, backers might hope the nonbinding recommendations will, de facto, be transposed by national courts when they rule on FRAND disputes. This may well be correct, but it is far from ideal. One of the great strengths of the current system is that courts in different jurisdictions compete to become the forum of choice for royalty disputes. In doing so, they constantly refine the way they rule on such disputes. Replacing this emergent equilibrium with a one-size-fits-all approach would be a great loss.
Conversely, it’s plausible that national courts will continue to go about their daily business, largely ignoring the EUIPO royalty recommendations. If that were the case, one could legitimately ask what a lengthy and costly system of nonbinding royalty determinations really achieves. Whatever the case, the draft regulation offers little vision as to how its planned royalty determinations will improve actual outcomes.
A second important issue is that, in its current form, the proposal seems myopically focused on prices. This is a problem because licensing negotiations involve a much broader range of terms. Such considerations as available remedies and penalties, license-termination conditions, cross-licensing, and jurisdiction are often just as important as price.
Not only are these issues conspicuously absent from the draft regulation, but properly accounting for them would largely undermine the regulation’s price-comparison mechanism, since this heterogeneity makes such comparisons essentially apples-to-oranges exercises.
Along similar lines, the draft regulation also includes a system of sampling to determine whether patents are truly essential to the underlying standard. These checks would be conducted by independent evaluators, selected according to criteria and applying a methodology that the Commission has yet to determine, so as to ensure that the sample can produce statistically valid results.
It’s unclear how much such a mechanism would enhance the status quo. Moreover, according to the proposal, the results of these essentiality checks would also not be legally binding. Rather than enhancing SEP-licensing negotiations and safeguarding the effectiveness of essentiality checks, this solution would just exacerbate holdout concerns. Indeed, implementers may use the process to delay negotiations or avoid payment of royalties while the process is ongoing.
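To give a sense of what “statistically valid” sampling would entail in practice, the sketch below works through the basic arithmetic of estimating an essentiality rate from a random sample of declared patents. It is purely illustrative: it assumes a simple random sample and a normal approximation for a proportion, not whatever methodology the Commission may ultimately adopt.

```python
import math

def required_sample_size(margin_of_error: float, z: float = 1.96,
                         expected_share: float = 0.5) -> int:
    """Number of patents to sample so that the estimated essentiality rate
    falls within +/- margin_of_error of the true rate at the chosen
    confidence level (z = 1.96 for 95%), using the normal approximation
    for a proportion. expected_share = 0.5 is the most conservative
    (largest-sample) assumption."""
    n = (z ** 2) * expected_share * (1 - expected_share) / margin_of_error ** 2
    return math.ceil(n)

# A +/- 5-percentage-point margin at 95% confidence needs ~385 sampled patents;
# tightening to +/- 3 points pushes that to ~1,068.
print(required_sample_size(0.05))  # 385
print(required_sample_size(0.03))  # 1068
```

Even on these generous assumptions, a portfolio-level estimate within a few percentage points requires several hundred sampled patents per standard, which gives some sense of the workload the evaluators would face.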
Geopolitical Considerations
The Commission’s proposal also sends all the wrong signals internationally. In turn, this may undermine the geopolitical interests of both the EU and the United States.
By signaling its willingness to more closely interfere with the royalty rates agreed between inventors and implementers—even for patents registered outside the EU—the EU is effectively inviting other jurisdictions to do the same (or legitimizing ongoing efforts to do so).
This is far from ideal. For instance, Chinese government officials and courts have increasingly sought to influence and rule on global FRAND disputes, generally in ways that favor Chinese firms, which are largely on the implementer side of these disputes. The EU’s proposal sends a strong signal that it is fair game for government agencies to more directly influence global FRAND royalty rates, as well as to seek to override the decisions of foreign courts.
In short, the EU’s draft regulation will embolden foreign jurisdictions to respond in kind and seek further authority over the royalty rates agreed upon by private parties. Ultimately, this will infuse the SEP-licensing space with politicized oversight and vindicate China’s moves to depress the value of the West’s intellectual property, thus giving its state-backed rivals a leg up. At a time when geopolitical tensions between China and the West are historically high, such a move seems particularly ill-advised.
Conclusion
In sum, rather than strike a balance between patent owners’ and implementers’ interests, the EU proposal is one-sided. It only introduces burdens on SEP holders and disregards the significant risks of holdout strategies. Such a framework for SEP licensing would be at odds with the framework crafted by the CJEU in Huawei.
Further, it would also undermine the value of many SEPs in ways that would be particularly appreciated by Chinese policymakers. The consequences of such an approach would be disruptive for entire SEP-reliant industries, and for the EU’s economic interests.
The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.
The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.
In this context, it is worth reconsidering why Europeans’ best interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.
Protectionism Under the Guise of Cybersecurity
Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of Georgia Institute of Technology, in that it would prohibit both data transfers to other countries and the involvement of foreign capital in the processing of even data that is not transferred.
The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected to become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because it saw them as insufficiently grounded in genuine cybersecurity needs.
Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”
The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.
Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.
Do Security and Privacy Require Protectionism?
As others have discussed at length (in addition to Swire and Kennedy-Mayo, see also Theodore Christakis), the evidence supporting cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly related to national borders.
In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of a French cloud provider, OVH, famously brought down millions of websites that were hosted only there. (Notably, OVH is among the most vocal European proponents of hard data localization.)
Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.
Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States doesn’t have sufficient legal safeguards when its officials process the data of foreigners.
The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts, or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to treat seriously the claims that Europeans’ data is much better safeguarded in their home countries than if it flows in the networks of the EU’s democratic allies, like the United States.
Digital Sovereignty as Industrial Policy
Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as illustrated by the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”
She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.
Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.
Effect on startups
Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.
In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”
More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantage of both U.S. and EU markets to redound to each other’s benefit. That is to say, of course not all U.S. investment expertise will apply in the EU, but certainly some will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.
Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.
For example, national government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which its beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”
Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:
Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.
More broadly, those exit opportunities also factor importantly into funders’ appetite for taking on the risk of failure in the ventures they back. Where the upside is sufficiently large, an investor might be willing to experiment with riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.
The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.
Direct investment
Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland, and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles that today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.
The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy.
According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding amount of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.
It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.
Conclusion
Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.
National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but they will serve to wall the region off from the kind of investment that can make it thrive.
[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]
In its Advance Notice of Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question as to which I recognize there is great uncertainty, heightened further by the fact that Congress is concurrently considering legislation in the same policy area).
In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.
The Hippocratic Oath of Regulatory Intervention
There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary to this proposition, it is also widely agreed that implementing any intervention to correct a market failure would only be warranted to the extent that such intervention would be reasonably expected to correct any such failure at a net social gain.
This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”
This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.
Assessing Market Failure in the Use of Consumer Data
To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.
The ANPR’s Oversights
In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.
First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.
Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.
Third, there is no mention in the ANPR that online providers widely provide users with notices concerning certain uses of consumer data and often require users to select among different levels of data collection.
Fourth, the ANPR relies unusually heavily on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than the empirical social-science research on which the commission has historically based its policy determinations.
This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As was recognized in a 2014 White House report, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services that rely on data concerning a user’s geographic location (as well as other users’ geographic locations); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and search services, which rely on user data to offer searches at no charge while offering targeted advertisements to paying advertisers.
There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:
Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.
Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.
Data Protection v. Data-Reliant Functionality
Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”
In implementing this principle, it is important to keep in mind that a market failure could only arise if the costs attributable to any particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.
Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.
As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored the informed-consent approach that provides users with the option to bar or limit certain uses of their data. This approach minimizes error costs since it avoids overestimating consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (my emphasis added).
Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.
This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to strike a balance between safeguarding consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.
Private-Ordering Approaches to Consumer-Data Regulation
Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.
Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.
In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”
This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology & Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulations would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).
Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.
In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice and consent requirements to facilitate informed user choice, and on the most interventionist side, by prohibitions that specifically bar certain uses of consumer data.
This is well-traveled ground within the academic and policy literature and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.
Bottom-Up v. Top-Down Regulation
Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.
This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.
Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), which are in turn established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission; and intermediaries in other markets, such as consumer credit, business credit, insurance carriers, bond issuers, and content ratings in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.
A collective solution is often necessary to resolve information asymmetries efficiently because establishing an industrywide standard of product or service quality, together with a trusted mechanism for demonstrating compliance with that standard, generates gains that cannot be fully internalized by any single provider.
Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.
Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.
These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.
Hybrid Policy Approaches
Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes play two supplemental functions in this context.
First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have developed track records of efficiently establishing those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should “require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data” and further asks whether those standards should be set by “the Commission, a third-party organization, or some other entity.”
Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.
Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.
These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.
Conclusion
Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.
As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.
Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.
Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.
As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.
These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes.
The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement.
First, SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility.
Second, strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized.
Finally, strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty.
As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions.
To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract.
Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration
Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.
As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating), or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson added a key finding to this insight. He found that among the most important transaction costs that firms encounter are those that stem from incomplete contracts and the scope for opportunistic behavior they entail.
This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.
IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:
Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.
More specifically, it is worth noting that the development and commercialization of inventions can lead to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymaker attention, empirical and anecdotal evidence suggest that holdout is the more salient problem.
Policies that reduce these costs—especially patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors also see more intense and extensive incentives to produce those technologies in the first place.
The Importance of Intellectual Property Rights for Startup Activity
Strong patent rights are essential to monetize innovation, thus enabling new firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is even more true for startup companies. There are three main reasons for this:
Patent rights protected by injunctions prevent established companies from simply copying innovative startups, with the expectation that they will be able to afford court-set royalties;
Patent rights can be the basis for securitization, facilitating access to startup funding; and
Patent rights drive venture capital (VC) investment.
While point (1) is widely acknowledged, many fail to recognize it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (these are essentially the strategies firms employ to prevent rivals from copying their inventions). The literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here).
The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities. In contrast, startup companies—almost by definition—must outsource production.
Second, property rights could drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with little or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:
SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.
Finally, there is reason to believe intellectual-property protection is an important driver of venture capital activity. Beyond simply enabling firms to earn returns on their investments, patents might signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:
[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.
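As a rough back-of-envelope reading of the quoted figures (illustrative arithmetic only, not an additional claim about the study’s sample), the 28 percent effect and the $16.8 million adjustment together imply an average start-up valuation on the order of $60 million:

```latex
% Implied average valuation from the quoted figures (illustrative only)
\[
  \text{average valuation} \approx \frac{\$16.8\ \text{million}}{0.28} \approx \$60\ \text{million}
\]
```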
In short, intellectual property can stimulate startup activity through various mechanisms. There is thus a sense that, at the margin, weakening patent protection will make it harder for entrepreneurs to embark on new business ventures.
The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions
The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages.
IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. By making it harder for patent holders to obtain injunctions, licensees and implementers gain the advantage in the short term, because they are able to use patented technology without having to engage in negotiations to pay the full market price.
In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.
At the same time, China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.
This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:
Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.
Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.
The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, surmises in his comments that:
The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.
Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms' expense:
What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.
With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation.
Conclusion
In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.
For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely.
The signatories to the comments are:
Alden F. Abbott
Senior Research Fellow, Mercatus Center, George Mason University; Former General Counsel, U.S. Federal Trade Commission
Jonathan Barnett
Torrey H. Webb Professor of Law, University of Southern California
Ronald A. Cass
Dean Emeritus, School of Law, Boston University; Former Commissioner and Vice-Chairman, U.S. International Trade Commission
Giuseppe Colangelo
Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)
Richard A. Epstein
Laurence A. Tisch Professor of Law, New York University
Bowman Heiden
Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley
Justin (Gus) Hurwitz
Professor of Law, University of Nebraska
Thomas A. Lambert
Wall Chair in Corporate Law and Governance, University of Missouri
Stan J. Liebowitz
Ashbel Smith Professor of Economics, University of Texas at Dallas
John E. Lopatka
A. Robert Noll Distinguished Professor of Law, Penn State University
Keith Mallinson
Founder and Managing Partner, WiseHarbor
Geoffrey A. Manne
President and Founder, International Center for Law & Economics
Adam Mossoff
Professor of Law, George Mason University
Kristen Osenga
Austin E. Owen Research Scholar and Professor of Law, University of Richmond
Vernon L. Smith
George L. Argyros Endowed Chair in Finance and Economics, Chapman University; Nobel Laureate in Economics (2002)
Daniel F. Spulber
Elinor Hobbs Distinguished Professor of International Business, Northwestern University
David J. Teece
Thomas W. Tusher Professor in Global Business, University of California, Berkeley
Joshua D. Wright
University Professor of Law, George Mason University; Former Commissioner, U.S. Federal Trade Commission
John M. Yun
Associate Professor of Law, George Mason University; Former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission
On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.
What is meant by the term "interoperability" varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations to require virtually any vertical integration be unwound. (Should a Tesla's engine be "interoperable" with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability, in which one company's product must work with another company's.
Why Isn’t Everything Interoperable?
The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.
And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.
But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.
The reason not everything is interoperable like this is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.
Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.
Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.
There thus may be particular costs that prevent interoperability from being worth the tradeoff, such as that:
It might be too costly to implement and/or maintain.
It might prescribe a certain product design and prevent experimentation and innovation.
It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
It might increase the risk of something not working, or of security breaches.
It might prevent certain pricing models that increase output.
It might compromise some element of the product or service that benefits specifically from not being interoperable.
In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.
In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.
Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).
But the analysis cannot stop here: the fact that a market might not be functioning well and does not currently provide some form of interoperability does not mean that it would provide interoperability if it were functioning well.
Interoperability for Digital Platforms
Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.
It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.
A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.
Interoperability and Contact-Tracing Apps
A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple's defense is that it only allowed COVID-19 apps from "recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions." In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues Apple should be punished for doing so.
No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.
In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.
It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.
A ‘Super Tool’ for Digital Market Intervention?
The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.
The only nod to the costs of this requirement is a pair of provisions: one requiring platforms to set "reasonably necessary" security standards, and another allowing the removal of third-party apps that don't "reasonably secure" user data. No other costs of mandatory interoperability are acknowledged at all.
The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.
Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.
Ditto with the objection that mandatory interoperability might limit differentiation among competitors – like, for example, how imposing the old micro-USB standard on Apple might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”
But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.
Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.
Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It's hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the "Super Tool" report really try to find out.
In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?
None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the "Super Tool" report as an example of what it wants for other markets—still isn't really finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which is also cited by the "Super Tool" report, is another example of how hard even simple interventions can be to get right.
The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.
This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.
Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs).
However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.
These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of standard essential patents ("SEPs") are charging their commercial partners too much for the rights to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls ("patent-assertion entities" or "PAEs") are deterring innovation by small startups by employing "extortionate" litigation tactics.
Unfortunately, this movement has led to a deterioration of the remedies available to patent holders in patent disputes.
The many benefits of patent protection
While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.
By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.
This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.
The shift away from injunctive relief
Of the many recent reforms to patent law, the most significant has arguably been the curtailment of patent holders' ability to obtain permanent injunctions. This is particularly true in the case of so-called standard essential patents (SEPs).
However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.
The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.
Unfortunately, the Supreme Court's 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent suits.
But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far.
Other reforms
And injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much of the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.
Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably SDOs – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.
For instance, one of the most noteworthy developments has been the IEEE's sweeping 2015 revision of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called "willing licensees". They also mandated that royalties pertaining to SEPs should be based upon the value of the smallest saleable component that practices the patented technology. Both of these measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.
Concluding remarks
The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.
While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.
This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).
Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.
One baleful aspect of U.S. antitrust enforcers’ current (and misguided) focus on the unilateral exercise of patent rights is an attack on the ability of standard essential patent (SEP) holders to obtain a return that incentivizes them to participate in collective standard setting. (This philosophy is manifested, for example, in a relatively recent U.S. Justice Department “business review letter” that lends support to the devaluation of SEPs.) Enforcers accept the view that FRAND royalty rates should compensate licensees only for the value of the incremental difference between the first- and second-best technologies in a hypothetical ex ante competition among patent holders to have their patented technologies included in a proposed standard – a methodology that yields relatively low royalty rates (tending toward zero when the first- and second-best technologies are very close substitutes). Tied to this perspective is enforcers’ concern with higher royalty rates as reflecting unearned “hold-up value” due to the “lock in” effects of a standard (the premium implementers are willing to pay patent holders whose technologies are needed to practice an established standard). As a result, strategies by which SEP holders unilaterally seek to maximize returns to their SEP-germane intellectual property, such as threatening lawsuits seeking injunctions for patent infringement, are viewed askance.
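To make the mechanics concrete, here is a minimal, hypothetical sketch of how the ex ante "incremental value" benchmark described above behaves. It is my own illustration under simplifying assumptions (the function name and all numbers are invented for the example, not drawn from any agency guidance or case): the royalty is capped at the value gap between the chosen technology and the runner-up, so the cap collapses toward zero as the two become close substitutes.

```python
# Stylized illustration of the ex ante "incremental value" benchmark.
# All values and the function name are hypothetical assumptions for this sketch.

def incremental_value_royalty_cap(value_best: float, value_runner_up: float) -> float:
    """Cap the royalty at the winning technology's value edge over the runner-up."""
    return max(0.0, value_best - value_runner_up)

# Suppose the chosen technology is worth 10 (arbitrary units) to implementers,
# and we vary how good the second-best alternative was at the time of selection.
for runner_up_value in (2.0, 8.0, 9.9):
    cap = incremental_value_royalty_cap(10.0, runner_up_value)
    print(f"runner-up worth {runner_up_value:>4}: royalty cap = {cap:.1f}")

# Caps of 8.0, 2.0, and 0.1: with close substitutes, the permitted royalty
# approaches zero regardless of what the winning technology cost to develop.
```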
The ex ante "incremental value" approach, far from being economically optimal, is inherently flawed. It is at odds with elementary economic logic, which indicates that "ratcheting down" returns to SEPs in line with an "ex ante competition among technologies" model will lower incentives to invest in patented technologies offered up for consideration by SSOs in a standard-setting exercise. That disincentive effect will in turn diminish the quality of patents that end up as SEPs – thereby reducing the magnitude of the welfare benefits stemming from standards. In fact, the notion that FRAND principles should be applied in a manner that guarantees minimal returns to patent holders is inherently at odds with the justification for establishing a patent system in the first place. That is because the patent system is designed to generously reward large-scale dynamic gains that stem from innovation, while the niggardly "incremental value" yardstick is a narrow static welfare measure that ignores incentive effects (much as the "marginal cost pricing" ideal of neoclassical price theory is inconsistent with Austrian and other dynamic perspectives on marketplace interactions).
Recently, lawyer-economist Greg Sidak outlined an approach to SEP FRAND-based pricing that is far more in line with economic reality – one based on golf tournament prizes. In a paper to be delivered at the November 5 2015 “Patents in Telecoms” Conference at George Washington University, Sidak explains that collective standard-setting through a standard-setting organization (SSO) is analogous to establishing and running a professional golf tournament. Like golf tournament organizers, SSOs may be expected to award a substantial prize to the winner that reflects a significant spread between the winner and the runner-up, in order to maximize the benefits flowing from their enterprise. Relevant excerpts from Sidak’s draft paper (with footnotes omitted and hyperlink added) follow:
“If an inventor could receive only a pittance for his investment in developing his technology and in contributing it to a standard, he would cease contributing proprietary technologies to collective standards and instead pursue more profitable outside options. That reasoning is even more compelling if the inventor is a publicly traded firm, answerable to its shareholders. Therefore, modeling standard setting as a static Bertrand pricing game [reflected in the incremental value approach] without any differentiation among the competing technologies and without any outside option for the inventors would predict that every inventor loses—that is, no inventor could possibly recoup his investment in innovation and therefore would quickly exit the market. Standard setting would be a sucker’s game for inventors. . . .
[J]ust as the organizer of a golf tournament seeks to ensure that all contestants exert maximum effort to win the tournament, so as to ensure a competitive and entertaining tournament, the SSO must give each participant the incentive to offer the SSO its best technologies. . . .
The rivalrous process—the tournament—by which an SSO identifies and then adopts a particular technology for the standard incidentally produces something else of profound value, something which the economists who invoke static Bertrand competition to model a FRAND royalty manage to obscure. The high level of inventor participation that a standard-setting tournament is able to elicit by virtue of its payoff structure reveals valuable information about both the inventors and the technologies that might make subsequent rounds of innovation far more socially productive (for example, by identifying dead ends that future inventors need not invest time and money in exploring). In contrast, the alternative portrayal of standard setting as static Bertrand competition among technologies leads . . . to the dismal prediction that standard setting is essentially a lottery. The alternative technologies are assumed to be unlimited in number and undifferentiated in quality. All are equally mediocre. If the standard were instead a motion picture and the competing inventions were instead actors, there would be no movie stars—only extras from central casting, all equally suitable to play the leading role. In short, a model of competition for adoption of a technology into the standard that, in practical effect, randomly selects its winner and therefore does not aggregate and reveal information is a model that ignores what Nobel laureate Friedrich Hayek long ago argued is the quintessential virtue of a market mechanism.
The economic literature finds that a tournament is efficient when the cost of measuring the absolute output of each participant sufficiently exceeds the cost of measuring the relative output of each participant compared with the other participants. That condition obtains in the context of SEPs and SSOs. Measuring the actual output or value of each competing technology for a standard is notoriously difficult. However, it is much easier to ascertain the relative value of each technology. SEP holders and implementers routinely make these ordinal comparisons in FRAND royalty disputes. Given the similarities between tournaments and collective standard setting, and the fact that it is far easier to measure the relative value of an SEP than its absolute value, it is productive to analyze the standard-setting process as if it were a tournament. . . .
[I]n addition to guaranteeing participation, the prize structure must provide a sufficient incentive to encourage participants to exert a high level of effort. In a standard setting context, a “high level of effort” means investing significant capital and other resources to develop new technologies that have commercial value. The economic literature . . . suggests that the level of effort that a participant exerts depends on the spread, or difference, between the prize for winning the tournament and the next-best prize. Furthermore, . . . ‘as the spread increases, the incentive to devote additional resources to improving one’s probability of winning increases.’ That result implies that the first-place prize must exceed the second-place prize and that, the greater the disparity between those two prizes, the greater the incentive that participants have to invest in developing new and innovative technologies.”
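The prize-spread logic in the excerpt above can be illustrated with a toy tournament model. The sketch below is a deliberately simplified illustration of my own, not Sidak's model: a single contestant chooses effort against an assumed win-probability function and cost function, and the effort-maximizing choice rises as the gap between the first- and second-place prizes widens.

```python
# Toy tournament sketch: a contestant picks effort e to maximize
#   payoff(e) = p(e) * first_prize + (1 - p(e)) * second_prize - cost(e),
# with assumed win probability p(e) = e / (e + 1) and cost(e) = 0.5 * e**2.
# The functional forms are illustrative assumptions, not taken from the paper.

import numpy as np

def optimal_effort(first_prize: float, second_prize: float) -> float:
    """Grid-search the effort level that maximizes the expected payoff."""
    efforts = np.linspace(0.0, 5.0, 5001)
    win_prob = efforts / (efforts + 1.0)     # increasing, concave in effort
    payoff = win_prob * first_prize + (1.0 - win_prob) * second_prize - 0.5 * efforts**2
    return float(efforts[np.argmax(payoff)])

second = 1.0                                  # runner-up prize held fixed
for spread in (0.5, 2.0, 8.0):
    effort = optimal_effort(second + spread, second)
    print(f"prize spread {spread:>4}: optimal effort ≈ {effort:.2f}")

# Effort rises with the spread (≈0.30, ≈0.70, ≈1.39 here), mirroring the
# tournament literature's point that a larger winner's premium elicits more investment.
```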
Sidak’s latest insights are in line with the former bipartisan U.S. antitrust consensus (expressed in the 1995 U.S. Justice Department – Federal Trade Commission IP-Antitrust Guidelines) that antitrust enforcers should focus on targeting schemes that reduce competition among patented technologies, and not challenge unilateral efforts by patentees to maximize returns to their legally-protected property right. U.S. antitrust enforcers (and their foreign counterparts) would be well-advised to readopt that consensus and abandon efforts to limit returns to SEPs – an approach that is inimical to innovation and to welfare-enhancing dynamic competition in technology markets.
But are the leaders at the U.S. Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission (FTC) paying any attention? The most recent public reports are not encouraging.
In a very recent filing with the U.S. International Trade Commission (ITC), FTC Chairwoman Edith Ramirez stated that “the danger that bargaining conducted in the shadow of an [ITC] exclusion order will lead to patent hold-up is real.” (Comparable to injunctions, ITC exclusion orders preclude the importation of items that infringe U.S. patents. They are the only effective remedy the ITC can give for patent infringement, since the ITC cannot assess damages or royalties.) She thus argued that, before issuing an exclusion order, the ITC should require an SEP holder to show that the infringer is unwilling or unable to enter into a patent license on “fair, reasonable, and non-discriminatory” (FRAND) terms – a new and major burden on the vindication of patent rights. In justifying this burden, Chairwoman Ramirez pointed to Motorola’s allegedly excessive SEP royalty demands from Microsoft – $6-$8 per gaming console, as opposed to a federal district court finding that pennies per console was the appropriate amount. She also cited LSI Semiconductor’s demand for royalties that exceeded the selling price of Realtek’s standard-compliant product, whereas a federal district court found the appropriate royalty to be only .19% of the product’s selling price. But these two examples do not support Chairwoman Ramirez’s point – quite the contrary. The fact that high initial royalty requests subsequently are slashed by patent courts shows that the patent litigation system is working, not that antitrust enforcement is needed, or that a special burden of proof must be placed on SEP holders. Moreover, differences in bargaining positions are to be expected as part of the normal back-and-forth of bargaining. Indeed, if anything, the extremely modest judicial royalty assessments in these cases raise the concern that SEP holders are being undercompensated, not overcompensated.
A recent speech by DOJ Assistant Attorney General for Antitrust (AAG) William J. Baer, delivered at the International Bar Association's Competition Conference, suffers from the same sort of misunderstanding as Chairwoman Ramirez's ITC filing. Stating that "[h]old up concerns are real", AAG Baer cited the two examples described by Chairwoman Ramirez. He also mentioned the fact that Innovatio requested a royalty rate of over $16 per smart tablet for its SEP portfolio, but was awarded a rate of less than 10 cents per unit by the court. While admitting that the implementers "proved victorious in court" in those cases, he asserted that "not every implementer has the wherewithal to litigate", that "[s]ometimes implementers accede to licensors' demands, fearing exclusion and costly litigation", that "consumers can be harmed and innovation incentives are distorted", and that therefore "[a] future of exciting new products built atop existing technology may be . . . deferred". These theoretical concerns are belied by the lack of empirical support for hold-up, and are contradicted by the recent finding, previously noted, that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy. (In addition, the implementers of patented technology tend to be large corporations; AAG Baer's assertion that some may not have "the wherewithal to litigate" is a bare proposition unsupported by empirical evidence or more nuanced analysis.) In short, DOJ, like FTC, is advancing an argument that undermines, rather than bolsters, the case for applying antitrust to SEP holders' efforts to defend their patent rights.
Ideally the FTC and DOJ should reevaluate their recent obsession with allegedly abusive unilateral SEP behavior and refocus their attention on truly serious competitive problems. (Chairwoman Ramirez and AAG Baer are both outstanding and highly experienced lawyers who are well-versed in policy analysis; one would hope that they would be open to reconsidering current FTC and DOJ policy toward SEPs, in light of hard evidence.) Doing so would benefit consumer welfare and innovation – which are, after all, the goals that those important agencies are committed to promoting.