Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.

Arguing that her proposal is “akin to a pollution tax,” Ms. Kim writes:

A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air.  But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.

It is true that negative externalities—costs imposed on people who are “external” to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.   

With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let’s consider each.

To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim’s view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?

Microsoft’s desire to employ high-skilled workers, and those employees’ desire to live near their work, conflict with incumbent renters’ desire for low rent and lower-paid homebuyers’ desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower-paid homebuyers would be worse off.

But incumbent renters’ and lower-paid homebuyers’ insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower-paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?

As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.

A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.

When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory), it compares the cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called “marginal cost” from the additional activity is less than or equal to the “marginal benefit” it will receive, it will engage in the activity; otherwise, it won’t.

When the business bears all the costs and benefits of its actions, this outcome is efficient. The costs of the inputs used in production are determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflecting consumers’ subjective valuation of the business’s additional product) in excess of its added cost (reflecting the value its inputs could create if deployed toward their next-best use), then producing more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is “allocatively efficient,” meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.

Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others.  Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.

Pigou’s idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
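To make the arithmetic of the example explicit (a worked restatement of the widget numbers above, nothing new assumed):

$$\text{private MC} = \$4.50 < \$5.00 = \text{MB} \;\Rightarrow\; \text{the producer makes the widget}$$

$$\text{social MC} = \$4.50 + \$1.00 = \$5.50 > \$5.00 \;\Rightarrow\; \text{production is inefficient whenever the buyer values the widget at less than } \$5.50$$

With a Pigouvian tax of $1.00 (the external cost), the producer’s private marginal cost rises to $5.50, he declines to produce the marginal widget, and the potential allocative inefficiency never materializes.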

When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies’ hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a “pecuniary” externality, a special sort of externality that does not generate inefficiency.

A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that’s the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There’s no inefficiency from the mere fact that buyers pay more.  Their loss is perfectly offset by a gain to sellers, and—here’s the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That’s allocatively efficient.
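The welfare accounting behind that claim is simple (a stylized sketch, holding the quantity of housing fixed for illustration): if rents rise by $\Delta p$ on $Q$ rented units, then

$$\Delta W = \underbrace{-\Delta p \cdot Q}_{\text{renters' loss}} + \underbrace{\Delta p \cdot Q}_{\text{landlords' gain}} = 0.$$

The price change is a transfer, not a deadweight loss, so there is nothing for a Pigouvian tax to correct.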

Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.

In the end, Ms. Kim’s pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.

For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.

Drug makers recently announced their 2019 price increases on over 250 prescription drugs. As examples, AbbVie Inc. increased the price of the world’s top-selling drug Humira by 6.2 percent, and Hikma Pharmaceuticals increased the price of blood-pressure medication Enalaprilat by more than 30 percent. Allergan reported an average increase across its portfolio of drugs of 3.5 percent; although the drug maker is keeping most of its prices the same, it raised the prices on 27 drugs by 9.5 percent and on another 24 drugs by 4.9 percent. Other large drug makers, such as Novartis and Pfizer, will announce increases later this month.

So far, the number of price increases is significantly lower than last year, when drug makers increased prices on more than 400 drugs. Moreover, for the drugs whose prices did increase, the average increase of 6.3 percent is only about half of the average increase in 2018. Nevertheless, some commentators have expressed indignation, and President Trump this week summoned advisors to the White House to discuss the increases. However, commentators and the administration should keep in mind what the price increases actually mean and the numerous players that are responsible for increasing drug prices.

First, it is critical to emphasize the difference between drug list prices and net prices.  The drug makers recently announced increases in the list, or “sticker” prices, for many drugs.  However, the list price is usually very different from the net price that most consumers and/or their health plans actually pay, which depends on negotiated discounts and rebates.  For example, whereas drug list prices increased by an average of 6.9 percent in 2017, net drug prices after discounts and rebates increased by only 1.9 percent. The differential between the growth in list prices and net prices has persisted for years.  In 2016 list prices increased by 9 percent but net prices increased by 3.2 percent; in 2015 list prices increased by 11.9 percent but net prices increased by 2.4 percent, and in 2014 list price increases peaked at 13.5 percent but net prices increased by only 4.3 percent.
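A back-of-the-envelope sketch makes the list/net relationship concrete. The 6.9 percent and 1.9 percent figures are the 2017 averages cited above; the 30 percent starting rebate share is a hypothetical assumption for illustration, not a reported number:

```python
# Sketch: how a 6.9% list-price increase can yield only a 1.9% net-price
# increase when the rebate/discount share of list price grows.
# Assumed: net = list * (1 - rebate_share); r0 = 30% is hypothetical.

list_growth = 0.069   # 2017 average list-price increase (from the post)
net_growth = 0.019    # 2017 average net-price increase (from the post)
r0 = 0.30             # assumed initial rebate share of list price

# (1 + net_growth) = (1 + list_growth) * (1 - r1) / (1 - r0); solve for r1:
r1 = 1 - (1 - r0) * (1 + net_growth) / (1 + list_growth)

print(f"rebate share: {r0:.1%} -> {r1:.1%}")   # roughly 30.0% -> 33.3%
```

Under these assumptions, a roughly three-percentage-point rise in the rebate share is enough to absorb almost all of the list price increase.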

For 2019, the list price increases for many drugs will actually translate into very small increases in the net prices that consumers actually pay.  In fact, drug maker Allergan has indicated that, despite its increase in list prices, the net prices that patients actually pay will remain about the same as last year.

One might wonder why drug makers would bother to increase list prices if there’s little to no change in net prices.  First, at least 40 percent of the American prescription drug market is subject to some form of federal price control.  As I’ve previously explained, because these federal price controls generally require percentage rebates off of average drug prices, drug makers have the incentive to set list prices higher in order to offset the mandated discounts that determine what patients pay.

Further, as I discuss in a recent Article, the rebate arrangements between drug makers and pharmacy benefit managers (PBMs) under many commercial health plans create strong incentives for drug makers to increase list prices. PBMs negotiate rebates from drug manufacturers in exchange for giving the manufacturers’ drugs preferred status on a health plan’s formulary.  However, because the rebates paid to PBMs are typically a percentage of a drug’s list price, drug makers are compelled to increase list prices in order to satisfy PBMs’ demands for higher rebates. Drug makers assert that they are pressured to increase drug list prices out of fear that, if they do not, PBMs will retaliate by dropping their drugs from the formularies. The value of rebates paid to PBMs has doubled since 2012, with drug makers now paying $150 billion annually.  These rebates have grown so large that, today, the drug makers that actually invest in drug innovation and bear the risk of drug failures receive only 39 percent of the total spending on drugs, while 42 percent of the spending goes to these pharmaceutical middlemen.
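A minimal sketch of that incentive, with hypothetical numbers (the $300 net-price target and the rebate shares are illustrative assumptions, not figures from the article):

```python
# If the PBM's rebate is a percentage of the list price, a drug maker that
# wants to hold its net price steady while paying a larger rebate must
# raise the list price: list = net / (1 - rebate_share).

net_target = 300.0   # hypothetical net price per prescription ($)
for rebate_share in (0.20, 0.30, 0.40):
    list_price = net_target / (1 - rebate_share)
    print(f"rebate {rebate_share:.0%}: list ${list_price:,.2f}, "
          f"rebate to PBM ${list_price * rebate_share:,.2f}, "
          f"net ${net_target:,.2f}")

# rebate 20%: list $375.00 ... rebate 40%: list $500.00. The net price never
# moves, but the list price (the basis for many patients' cost-sharing) rises.
```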

Although a portion of the increasing rebate dollars may eventually find its way to patients in the form of lower co-pays, many patients still suffer from the list price increases. The 29 million Americans without drug plan coverage pay more for their medications when list prices increase. Even patients with insurance typically have cost-sharing obligations that require them to pay 30 to 40 percent of list prices. Moreover, insured patients within the deductible phase of their drug plan pay the entire higher list price until they meet their deductible. Higher list prices jeopardize patients’ health as well as their finances; as out-of-pocket costs for drugs increase, patients are less likely to adhere to their medication routine and more likely to abandon their drug regimen altogether.

Policymakers must realize that the current system of government price controls and distortive rebates creates perverse incentives for drug makers to continue increasing drug list prices. Pointing the finger at drug companies alone misrepresents the problem at hand.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and Congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps in my original treatment. The first relates to expert bias and the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than they would have had they just randomly chosen an outcome. In the technical parlance, this means expert opinions were not calibrated; there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events that experts deemed impossible occurred with some regularity. In a number of fields, these supposedly impossible events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to have overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,   

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and more risky political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there is a diminishing marginal predictive return to knowledge. Rather than an injection of expertise, better methods of judgement should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. Coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believe the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and on videos online. Over time, external communication has risen to a prominent role in Congressional political offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard and fast conclusions, it could help explain why boosting tech expertise hasn’t been a winning legislative issue. The demand just isn’t there. And based on the priorities offices do display a preference for, more expertise might not yield any benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Last week, Senator Orrin Hatch, Senator Thom Tillis, and Representative Bill Flores introduced the Hatch-Waxman Integrity Act of 2018 (HWIA) in both the Senate and the House of Representatives.  If enacted, the HWIA would help to ensure that the unbalanced inter partes review (IPR) process does not stifle innovation in the drug industry and jeopardize patients’ access to life-improving drugs.

Created under the America Invents Act of 2011, IPR is a new administrative pathway for challenging patents. It was, in large part, created to fix the problem of patent trolls in the IT industry; the trolls allegedly used questionable or “low quality” patents to extort profits from innovating companies. IPR created an expedited pathway to challenge patents of dubious quality, thus making it easier for IT companies to invalidate low quality patents.

However, IPR is available for patents in any industry, not just the IT industry.  In the market for drugs, IPR offers an alternative to the litigation pathway that Congress created over three decades ago in the Hatch-Waxman Act. Although IPR seemingly fixed a problem that threatened innovation in the IT industry, it created a new problem that directly threatened innovation in the drug industry. I’ve previously published an article explaining why IPR jeopardizes drug innovation and consumers’ access to life-improving drugs. With Hatch-Waxman, Congress sought to achieve a delicate balance between stimulating innovation from brand drug companies, who hold patents, and facilitating market entry from generic drug companies, who challenge the patents.  However, IPR disrupts this balance as critical differences between IPR proceedings and Hatch-Waxman litigation clearly tilt the balance in the patent challengers’ favor. In fact, IPR has produced noticeably anti-patent results; patents are twice as likely to be found invalid in IPR challenges as they are in Hatch-Waxman litigation.

The Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts in Hatch-Waxman proceedings. In federal court, patents are presumed valid and challengers must prove each patent claim invalid by “clear and convincing evidence.” In IPR proceedings, no such presumption of validity applies and challengers must only prove patent claims invalid by the “preponderance of the evidence.”

Moreover, whereas patent challengers in district court must establish sufficient Article III standing, IPR proceedings do not have a standing requirement.  This has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific pre-filing settlement demands.  The lack of a standing requirement has also led to the  exploitation of the IPR process by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet.

Finally, patent owners are often forced into duplicative litigation in both IPR proceedings and federal court litigation, leading to persistent uncertainty about the validity of their patents.  Many patent challengers that are unsuccessful in invalidating a patent in district court may pursue subsequent IPR proceedings challenging the same patent, essentially giving patent challengers “two bites at the apple.”  And if the challenger prevails in the IPR proceedings (which is easier to do given the lower standard of proof), the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

The pro-challenger bias in IPR creates significant uncertainty for patent rights in the drug industry.  As an example, just last week patent claims for drugs generating $6.5 billion for drug company Sanofi were invalidated in an IPR proceeding.  Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And, if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

The HWIA, which applies only to the drug industry, is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It eliminates challengers’ ability to file duplicative claims in both federal court and through the IPR process. Instead, they must choose between Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR (which is faster and provides certain pro-challenger provisions). In addition to eliminating generic challengers’ “second bite of the apple,” the HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.

Thus, if enacted, the HWIA would create incentives that reestablish Hatch-Waxman litigation as the standard pathway for generic challenges to brand patents.  Yet, it would preserve IPR proceedings as an option when speed of resolution is a primary concern.  Ultimately, it will restore balance to the drug industry to safeguard competition, innovation, and patients’ access to life-improving drugs.

“Our City has become a cesspool,” according to Portland police union president Daryl Turner. He was describing efforts to address the city’s large and growing homelessness crisis.

Portland Mayor Ted Wheeler defended the city’s approach, noting that every major city, “all the way up and down the west coast, in the Midwest, on the East Coast, and frankly, in virtually every large city in the world” has a problem with homelessness. Nevertheless, according to the Seattle Times, Portland is ranked among the 10 worst major cities in the U.S. for homelessness. Wheeler acknowledged, “the problem is getting worse.”

This week, the city’s Budget Office released a “performance report” for some of the city’s bureaus. One of the more eye-popping statistics is the number of homeless camps the city has cleaned up over the years.

[Figure: Portland homeless camp cleanups by fiscal year, from the Budget Office performance report]

Keep in mind, Multnomah County reports there are 4,177 homeless residents in the entire county. But the city reports clearing more than 3,100 camps in one year. Clearly, the number of homeless in the city is much larger than reflected in the annual homeless counts.

The report makes a special note that, “As the number of clean‐ups has increased and program operations have stabilized, the total cost per clean‐up has decreased substantially as well.” Sounds like economies of scale.

It turns out the Budget Office’s simple graphic gives enough information to estimate the economies of scale in homeless camp cleanups. Yes, it’s kinda crappy data. (Could it really be the case that in two years in a row, the city cleaned up exactly the same number of camps at exactly the same cost?) Anyway, data is data.

First, we plot the total annual costs for cleanups. Of course it’s an awesome fit (R-squared of 0.97), but that’s what happens when you have three observations and two independent variables.

[Figure: total annual homeless camp cleanup costs with fitted total cost curve]

Now that we have an estimate of the total cost function, we can plot the marginal cost curve (blue) and average cost curve (orange).

[Figure: implied marginal cost (blue) and average cost (orange) curves for camp cleanups]

That looks like a textbook example of economies of scale: decreasing average cost. It also looks like a textbook example of natural monopoly: marginal cost lower than average cost over the relevant range of output.

What strikes me as curious is how low the implied marginal cost of a homeless camp cleanup is, as shown in the table below.

FY        Camps cleaned    Total cost    Average cost    Marginal cost
2014-15           139        $171,109          $1,231           $3,178
2015-16           139        $171,109          $1,231           $3,178
2016-17           571        $578,994          $1,014             $774
2017-18         3,122      $1,576,610            $505             $142
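The table can be roughly reconstructed from the Budget Office figures. The post does not state the functional form that was fit; a semi-log total cost function, TC = a + b·ln(camps), is one specification that reproduces both the reported R-squared of 0.97 and the table’s marginal costs, so that is what this sketch assumes:

```python
# Sketch: fit TC = a + b*ln(Q) to the reported data (assumed specification)
# and recover the table's average and marginal costs.
import numpy as np

camps = np.array([139, 139, 571, 3122], dtype=float)
tc = np.array([171_109, 171_109, 578_994, 1_576_610], dtype=float)

b, a = np.polyfit(np.log(camps), tc, 1)     # TC = a + b*ln(Q)
tc_hat = a + b * np.log(camps)
r2 = 1 - np.sum((tc - tc_hat) ** 2) / np.sum((tc - tc.mean()) ** 2)

print(f"R-squared = {r2:.2f}")              # ~0.97
for q, c in zip(camps, tc):
    # AC comes from the raw data; MC = dTC/dQ = b/Q from the fitted curve
    print(f"Q = {q:5.0f}   AC = ${c / q:5.0f}   MC = ${b / q:5.0f}")
# MC falls from ~$3,178 at 139 camps to ~$142 at 3,122 camps, matching the table.
```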

It is somewhat shocking that the marginal cost of an additional camp cleanup is only $142. The hourly wages for the cleanup crew alone would be way more than $142. Something seems fishy with the numbers the city is reporting.

My guess: The city is shifting some of the cleanup costs to other agencies, such as Multnomah County and/or the Oregon Department of Transportation. I also suspect the city is not fully accounting for the costs of the cleanups. And I am almost certain the city is significantly underreporting how many homeless are living on Portland streets.

This post was co-authored with Chelsea Boyd

The Food and Drug Administration has spoken, and its words have, once again, ruffled many feathers. Coinciding with the deadline for companies to lay out their plans to prevent youth access to e-cigarettes, the agency has announced new regulatory strategies that are sure to not only make it more difficult for young people to access e-cigarettes, but for adults who benefit from vaping to access them as well.

More surprising than the FDA’s paradoxical strategy of preventing teen smoking by banning not combustible cigarettes, but their distant cousins, e-cigarettes, is that the biggest support for establishing barriers to accessing e-cigarettes seems to come from the tobacco industry itself.

Going above and beyond the FDA’s proposals, both Altria and JUUL are self-restricting flavor sales, creating more — not fewer — barriers to purchasing their products. And both companies now publicly support a 21-to-purchase mandate. Unfortunately, these barriers extend beyond restricting underage access and will no doubt affect adult smokers seeking access to reduced-risk products.

To say there are no benefits to self-regulation by e-cigarette companies would be misguided. Perhaps the biggest benefit is to increase the credibility of these companies in an industry where it has historically been lacking. Proposals to decrease underage use of their product show that these companies are committed to improving the lives of smokers. Going above and beyond the FDA’s regulations also allows them to demonstrate that they take underage use seriously.

Yet regulation, whether imposed by the government or as part of a business plan, comes at a price. This is particularly true in the field of public health. In other health areas, the FDA is beginning to recognize that it needs to balance regulatory prudence with the risks of delaying innovation. For example, by decreasing red tape in medical product development, the FDA aims to help people access novel treatments for conditions that are notoriously difficult to treat. Unfortunately, this mindset has not expanded to smoking.

Good policy, whether imposed by government or voluntarily adopted by private actors, should not help one group while harming another. Perhaps the question that should be asked, then, is not whether these new FDA regulations and self-imposed restrictions will decrease underage use of e-cigarettes, but whether they decrease underage use enough to offset the harm caused by creating barriers to access for adult smokers.

The FDA’s new point-of-sale policy restricts sales of flavored products (not including tobacco flavors or menthol/mint flavors) to either specialty, age-restricted, in-person locations or to online retailers with heightened age-verification systems. JUUL, Reynolds and Altria have also included parts of this strategy in their proposed self-regulations, sometimes going even further by limiting sales of flavored products to their company websites.

To many people, these measures may not seem like a significant barrier to purchasing e-cigarettes, but in fact, online retail is a luxury that many cannot access. Heightened online age-verification processes are likely to require most of the following: a credit or debit card, a Social Security number, a government-issued ID, a cellphone to complete two-factor authorization, and a physical address that matches the user’s billing address. According to a 2017 Federal Deposit Insurance Corp. survey, one in four U.S. households are unbanked or underbanked, which is an indicator of not having a debit or credit card. That factor alone excludes a quarter of the population, including many adults, from purchasing online. It’s also important to note that the demographic characteristics of people who lack the items required to make online purchases are also the characteristics most associated with smoking.

Additionally, it’s likely that these new point-of-sale restrictions won’t have much of an effect at all on the target demographic — those who are underage. According to a 2017 Centers for Disease Control and Prevention study, of the 9 percent of high school students who currently use electronic nicotine delivery systems (ENDS), only 13 percent reported purchasing the device for themselves from a store. This suggests that 87 percent of underage users won’t be deterred by prohibitive measures to move sales to specialty stores or online. Moreover, Reynolds estimates that only 20 percent of its VUSE sales happen online, indicating that more than three-quarters of users — consisting mainly of adults — purchase products in brick-and-mortar retail locations.
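The arithmetic behind that inference, using the study’s own figures:

$$9\% \times 13\% \approx 1.2\% \text{ of high school students bought their own device in a store,}$$

while the remaining $9\% \times 87\% \approx 7.8\%$ are users who obtained their devices through channels that point-of-sale restrictions do not reach.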

Existing enforcement techniques, if properly applied at the point of sale, could have a bigger impact on youth access. Interestingly, a recent analysis by Baker White of FDA inspection reports suggests that the agency’s existing approaches to prevent youth access may be lacking—meaning that there is much room for improvement. Overall, selling to minors is extremely low-risk for stores. A store can expect a fine for violating the minimum age of sale about once every 36.7 years of operation, the financial risk is about 2 cents per day, and the risk of receiving a no-sales order (the most severe consequence) is once every 2,825 years of operation. Furthermore, for every $279 the FDA receives in fines, it spends over $11,800. With odds like those, it’s no wonder some stores are willing to sell to minors: Their risk is minimal.
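Those figures are mutually consistent on an expected-value basis, if the $279 is read as the approximate average fine per violation (an inference from the numbers above, not a figure stated in the analysis):

$$\frac{\$279}{36.7 \text{ years} \times 365 \text{ days}} \approx \$0.02 \text{ per day of operation.}$$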

Eliminating access to flavored products is the other arm of the FDA’s restrictions. Many people have suggested that flavors are designed to appeal to youth, yet fewer talk about the proportion of adults who use flavored e-cigarettes. In reality, flavors are an important factor for adults who switch from combustible cigarettes to e-cigarettes. A 2018 survey of 20,676 US adults who frequently use e-cigarettes showed that “since 2013, fruit-flavored e-liquids have replaced tobacco-flavored e-liquids as the most popular flavors with which participants had initiated e-cigarette use.” By relegating flavored products to specialty retailers and online sales, the FDA has forced adult smokers, who may switch from combustible cigarettes to e-cigarettes, to go out of their way to initiate use.

It remains to be seen if new regulations, either self- or FDA-imposed, will decrease underage use. However, we already know who is most at risk for negative outcomes from these new regulations: people who are geographically disadvantaged (for instance, people who live far away from adult-only retailers), people who might not have credit to go through an online retailer, and people who rely on new flavors as an incentive to stay away from combustible cigarettes. It’s not surprising or ironic that these are also the people who are most at risk for using combustible cigarettes in the first place.

Given the likelihood that the new way of doing business will have minimal positive effects on youth use but negative effects on adult access, we must question what the benefits of these policies are. Fortunately, we know the answer already: The FDA gets political capital and regulatory clout; industry gets credibility; governments get more excise tax revenue from cigarette sales. And smokers get left behind.

A recent NBER working paper by Gutiérrez & Philippon has attracted attention from observers who see oligopoly everywhere and activists who want governments to more actively “manage” competition. The analysis in the paper is fundamentally flawed and should not be relied upon by policymakers, regulators, or anyone else.

As noted in my earlier post, Gutiérrez & Philippon attempt to craft a causal linkage from differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. Their paper’s abstract leads with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

This post focuses on Gutiérrez & Philippon’s claim that EU markets have lower “excess profits.” This is perhaps the most outrageous claim in the paper. Anyone who bothers to read the full paper will see that the claim that EU firms have lower excess profits is simply not supported by the paper itself. Aside from a passing mention of someone else’s work in a footnote, the only mention of “excess profits” is in the paper’s headline-grabbing abstract.

What’s even more outrageous is that the authors don’t define (or even describe) what they mean by excess profits.

These two factors alone should be enough to toss aside the paper’s assertion about “excess” profits. But, there’s more.

Gutiérrez & Philippon define profit to be gross operating surplus and mixed income (known as “GOPS” in the OECD’s STAN Industrial Analysis dataset). GOPS is not the same thing as gross margin or gross profit as used in business and finance (for example GOPS subtracts wages, but gross margin does not). The EU defines GOPS as (emphasis added):

Operating surplus is the surplus (or deficit) on production activities before account has been taken of the interest, rents or charges paid or received for the use of assets. Mixed income is the remuneration for the work carried out by the owner (or by members of his family) of an unincorporated enterprise. This is referred to as ‘mixed income’ since it cannot be distinguished from the entrepreneurial profit of the owner.

Here’s Figure 1 from Gutiérrez & Philippon plotting GOPS as a share of gross output.

[Figure 1 from Gutiérrez & Philippon: gross operating surplus as a share of gross output, U.S. and EU]

Look at the huge jump in gross operating surplus for U.S. firms!

Now, look at the scale of the y-axis. Not such a big jump after all.

Over 23 years, from 1992 to 2015, the gross operating surplus rate for U.S. firms grew by 2.5 percentage points. In the EU, the rate increased by about one percentage point.

Using the STAN dataset, I plotted the gross operating surplus rate for each EU country (blue dots) and the U.S. (red dots), along with a time trend. Three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a gross operating surplus rate of about 19.5 percent;
  2. There’s a huge variation in gross operating surplus rates across EU countries; and
  3. Yes, gross operating surplus is trending slightly upward in the U.S. and slightly downward for the EU average, but there doesn’t appear to be a huge difference in the slope of the trendlines. In fact, the slopes of the trendlines are not statistically significantly different from zero and are not statistically significantly different from each other.

[Figure: gross operating surplus rates for EU countries (blue dots) and the U.S. (red dots), with time trends]

The use of gross profits raises some serious questions. For example, the Stigler Center’s James Traina finds that, after accounting for selling, general, and administrative expenses (SG&A), mark-ups for publicly traded firms in the U.S. have not meaningfully increased since 1980.

The figure below plots net operating surplus (NOPS equals GOPS minus consumption of fixed capital)—which is not the same thing as net income for a business.

Same three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a net operating surplus rate of a little more than seven percent;
  2. There’s a huge variation in net operating surplus rates across EU countries; and
  3. The slopes of the trendlines for net operating surplus in the U.S. and EU are not statistically significantly different from zero and are not statistically significantly different from each other.

[Figure: net operating surplus rates for EU countries (blue dots) and the U.S. (red dots), with time trends]

It’s very possible that U.S. firms are achieving higher and growing “excess” profits relative to EU firms. It’s also very possible they’re not. Despite the bold assertions of Gutiérrez & Philippon, the information presented in their paper provides no useful information one way or the other.


A recent NBER working paper by Gutiérrez & Philippon attempts to link differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. The paper’s abstract begins with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

The authors are not clear about what they mean by lower; however, it seems they mean lower today relative to the 1990s.

This blog post focuses on the first claim: “Today, European markets have lower concentration …”

At the risk of being pedantic, Gutiérrez & Philippon’s measures of market concentration for which both U.S. and EU data are reported cover the period from 1999 to 2012. Thus, “the 1990s” refers to 1999, and “today” refers to 2012, or six years ago.

The table below is based on Figure 26 in Gutiérrez & Philippon. In 2012, there appears to be no significant difference in market concentration between the U.S. and the EU, using either the 8-firm concentration ratio or HHI. Based on this information, it cannot be concluded broadly that EU sectors have lower concentration than the U.S.

2012 (change since 1999)    U.S.          EU
CR8                         26% (+5%)     27% (-7%)
HHI                         640 (+150)    600 (-190)

Gutiérrez & Philippon focus on the change in market concentration to draw their conclusions. However, the levels of market concentration measures are strikingly low. In all but one of the industries (telecommunications) in Figure 27 of their paper, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent. Similarly, the HHI measures reported in the paper are at levels that most observers would presume to be competitive. In addition, in 7 of the 12 sectors surveyed, the U.S. 8-firm concentration ratio is lower than in the EU.
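A standard way to read those levels is the HHI’s “numbers equivalent”: on the 0–10,000 scale, an HHI of $H$ corresponds to a market shared by $10{,}000/H$ equally sized firms. Applied to the 2012 figures above:

$$N_{US} = \frac{10{,}000}{640} \approx 15.6, \qquad N_{EU} = \frac{10{,}000}{600} \approx 16.7.$$

Both correspond to roughly 16 equal-sized firms per sector, well below the HHI of 1,500 at which the U.S. horizontal merger guidelines begin to treat a market as even moderately concentrated.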

The numbers in parentheses in the table above show the change in the measures of concentration since 1999. The changes suggest that U.S. markets have become more concentrated and EU markets have become less concentrated. But how significant are the changes in concentration?

A simple regression of the relationship between CR8 and a time trend finds that in the EU, CR8 has decreased an average of 0.5 percentage point a year, while the U.S. CR8 increased by less than 0.4 percentage point a year from 1999 to 2012. Tucked in an appendix to Gutiérrez & Philippon, Figure 30 shows that CR8 in the U.S. had decreased by about 2.5 percentage points from 2012 to 2014.
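For readers who want to check this kind of claim, here is a minimal sketch of the slope-comparison test. The series below are hypothetical placeholders constructed to match the levels and trends reported above, not the paper’s data:

```python
# Pooled regression with a region dummy and interaction term, used to test
# whether two time trends differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(1999, 2013) - 1999            # 1999-2012, as in the paper

# Hypothetical CR8 series consistent with the table above (21% -> 26% U.S.,
# 34% -> 27% EU), plus noise; NOT the paper's underlying data.
cr8_us = 21 + 0.4 * t + rng.normal(0, 1.0, t.size)
cr8_eu = 34 - 0.5 * t + rng.normal(0, 1.0, t.size)

trend = np.r_[t, t]
us = np.r_[np.ones(t.size), np.zeros(t.size)]    # region dummy: 1 = U.S.
y = np.r_[cr8_us, cr8_eu]

# y = b0 + b1*trend + b2*us + b3*(trend*us); b3 is the gap between slopes,
# and its t-test asks whether the two trends are statistically different.
X = sm.add_constant(np.column_stack([trend, us, trend * us]))
print(sm.OLS(y, X).fit().summary())
```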

A closer examination of Gutiérrez & Philippon’s 8-firm concentration ratio for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in CR8 for the EU is not statistically significantly different from zero.

A regression of the relationship between HHI and a time trend finds that in the EU, HHI has decreased an average of 12.5 points a year, while the U.S. HHI increased by less than 16.4 points a year from 1999 to 2012.

As with CR8, a closer examination of Gutiérrez & Philippon’s HHI for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in HHI for the EU is not statistically significantly different from zero.

Readers should be cautious in relying on Gutiérrez & Philippon’s data to conclude that the U.S. is “drifting” toward greater market concentration while the EU is “drifting” toward lower market concentration. Indeed, the limited data presented in the paper point toward a convergence in market concentration between the two regions.


An important but unheralded announcement was made on October 10, 2018: The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) released a draft CEN-CENELEC Workshop Agreement (CWA) on the licensing of Standard Essential Patents (SEPs) for 5G/Internet of Things (IoT) applications. The final agreement, due to be published in early 2019, is likely to have significant implications for the development and roll-out of both 5G and IoT applications.

CEN and CENELEC, which along with the European Telecommunications Standards Institute (ETSI) are the officially recognized standard-setting bodies in Europe, are private international non-profit organizations with a widespread network of technical experts from industry, public administrations, associations, academia and societal organizations. This first Workshop brought together representatives of the 5G/Internet of Things (IoT) technology user and provider communities to discuss licensing best practices and recommendations for a code of conduct for licensing of SEPs. The aim was to produce a CWA that reflects and balances the needs of both communities.

The final consensus outcome of the Workshop will be published as a CEN-CENELEC Workshop Agreement (CWA). The draft, which is available for public comment, comprises principles and guidelines that lay a foundation for future licensing of standard essential patents for fifth generation (5G) technologies. The draft also contains a Q&A section to aid new implementers and patent holders.

The IoT ecosystem is likely to have over 20 billion interconnected devices by 2020 and represent a market of $17 trillion (about the same as the current GDP of the U.S.). The data collected by one device, such as a smart thermostat that learns what time the consumer is likely to be at home, can be used to improve the performance of another connected device, such as a smart fridge. Cellular technologies are a core component of the IoT ecosystem, alongside applications, devices, software, etc., as they provide connectivity within the IoT system. 5G technology, in particular, is expected to play a key role in complex IoT deployments, extending cellular networks beyond smartphones to smart home appliances, autonomous vehicles, health care facilities and more, in what has been aptly described as the fourth industrial revolution.

Indeed, the role of 5G to IoT is so significant that the proposed $117 billion takeover bid for U.S. tech giant Qualcomm by Singapore-based Broadcom was blocked by President Trump, citing national security concerns. (A letter sent by the Committee on Foreign Investment in the US suggested that Broadcom might starve Qualcomm of investment, preventing it from competing effectively against foreign competitors–implicitly those in China.)

While commercial roll-out of 5G technology has not yet fully begun, several efforts are being made by innovator companies, standard setting bodies and governments to maximize the benefits from such deployment.

The draft CWA guidelines (hereinafter “the guidelines”) are consistent with some of the recent jurisprudence on SEPs on various issues. While they offer relatively little guidance specific to 5G SEPs, they provide clarification on several aspects of SEP licensing, particularly the negotiating process and the conduct expected of both parties.

The guidelines contain six principles, followed by a set of questions and answers pertaining to SEP licensing. The principles deal with:

  1. The obligation of SEP holders to license the SEPs on Fair, Reasonable and Non-Discriminatory (FRAND) terms;
  2. The obligation on both parties to conduct negotiations in good faith;
  3. The obligation of both parties to provide necessary information (subject to confidentiality) to facilitate timely conclusion of the licensing negotiation;
  4. Compensation that is “fair and reasonable” and achieves the right balance between incentives to contribute technology and the cost of accessing that technology;
  5. A non-discrimination obligation on the SEP holder toward similarly situated licensees, even though their licenses need not be identical; and
  6. Recourse to a third party FRAND determination either by court or arbitration if the negotiations fail to conclude in a timely manner.

There are 22 questions and answers as well, which define basic terms and touch on issues such as what counts as good-faith conduct by negotiating parties, global portfolio licensing, FRAND royalty rates, patent pooling, dispute resolution, injunctions, and other issues relevant to FRAND licensing policy in general.

Below are some significant contributions that the draft report makes on issues such as the supply chain level at which licensing is best done, treatment of small and medium enterprises (SMEs), non disclosure agreements, good faith negotiations and alternative dispute resolution.

Typically in the IoT ecosystem, many technologies will be adopted, of which several will be standardized. The guidelines offer help to product and service developers in this regard and suggest that one may need to obtain licenses from SEP owners for products or services incorporating communications technology like 3G UMTS, 4G LTE, Wi-Fi, NB-IoT, Cat-M or video codecs such as H.264. The guidelines, however, clarify that with the deployment of IoT, licenses for several other standards may be needed, and developers should be mindful of these complexities when starting out in order to avoid potential infringement.

Notably, the guidelines suggest that in order to simplify licensing, reduce costs for all parties and maintain a level playing field between licensees, SEP holders should license at one level of the supply chain. While this may vary between different industries, for communications technology the licensing point is often at the end-user equipment level. There has been a fair bit of debate on this issue, and the recent order by Judge Koh granting the FTC’s motion for partial summary judgment deals with some of it.

In the judgment, delivered on November 6, Judge Koh relied primarily on the Ninth Circuit decisions in Microsoft v. Motorola (2012 and 2015) to rule on the core issue of the scope of the FRAND commitments–specifically, on the question of whether licensing extends to all levels of the supply chain or is confined to the end-device level. The court interpreted the pro-competitive principles behind the non-discrimination requirement to mean that such commitments are “sweeping” and, essentially, that an SEP holder has to license to anyone willing to offer a FRAND rate globally. It also cited Ericsson v. D-Link, where the Federal Circuit held that “compliant devices necessarily infringe certain claims in patents that cover technology incorporated into the standard and so practice of the standard is impossible without licenses to all incorporated SEP technology.”

The guidelines speak to the importance of non-disclosure agreements (NDAs) in such licensing negotiations, given that some of the information exchanged between parties during negotiation, such as claim charts, may be sensitive and confidential. Accordingly, an undue delay in agreeing to an NDA, without well-founded reasons, might be taken as evidence of a lack of good faith in negotiations, rendering such a licensee unwilling.

They also provide quite a boost for small and medium enterprises (SMEs) in licensing negotiations by addressing the duty of SEP owners to be mindful of SMEs that may be less experienced and therefore lack the information from which to draw assurance that proposed terms are FRAND. The guidelines provide that SEP owners should share whatever information they can under NDA to help the negotiation process. Equally, the same obligation applies to a more experienced licensee dealing with an SEP owner that is an SME.

There is some clarity on time frames for negotiations: the guidelines set out maximum times that parties should take to respond to offers and counter-offers, which could extend to several months in complex cases involving hundreds of patents. The guidelines also prescribe how potential licensees should conduct themselves on receiving an offer and how to make counter-offers in a timely manner.

Furthermore, the guidelines lay down the various ways in which royalty rates may be structured and clarify that there is no one fixed way in which this may be done. Similarly, they offer myriad ways in which potential licensees may be able to determine for themselves if the rates offered to them are fair and reasonable, such as third party patent landscape reports, public announcements, expert advice etc.

Finally, should a negotiation reach an impasse, the guidelines endorse alternative dispute resolution mechanisms such as mediation or arbitration for the parties to resolve the issue. Bodies such as the International Chamber of Commerce and the World Intellectual Property Organization may provide useful platforms in this regard.

Almost 20 years have passed since technology pioneer Kevin Ashton coined the phrase “Internet of Things.” While companies are gearing up to participate in the IoT market, regulation and policy in the IoT world remain far from a predictable framework. There are a lot of guesses about how rules and standards are likely to shape up, with little or no guidance for companies on how to prepare for what faces them very soon. Concrete efforts such as these are therefore welcome. The draft guidelines do attempt to offer some much-needed clarity and are now open for public comments, due by December 13. It will be good to see what the final CWA report on licensing of SEPs for 5G and IoT looks like.


Last week, the UK Court of Appeal upheld the findings of the High Court in an important case regarding standard essential patents (SEPs). Of particular significance, the Court of Appeal upheld the finding that the defendant, an implementer of SEPs, could have the sale of its products enjoined in the UK unless it enters into a global licensing deal on terms deemed by the court to be fair, reasonable and non-discriminatory (FRAND). The case is noteworthy not least because the threat of an injunction of this sort has become increasingly rare in other jurisdictions, arguably resulting in an imbalance in bargaining power between patent holders and implementers.

The case concerned patents held by Unwired Planet (most of which had been purchased from Ericsson) that it had declared to be essential to the operation of various telecommunications standards. Chinese telecom giant Huawei had incorporated these patented technologies in its products but disputed the legitimacy of Unwired Planet’s (UP) patents and refused to license them on the terms that were offered.

By way of background, in March 2014, UP sued Huawei, Samsung and Google, claiming an injunction when it found it hard to secure licenses. After the commencement of proceedings, UP made licence offers to the defendants, in April and July 2014 and again during the proceedings, including a worldwide SEP portfolio licence, a UK SEP portfolio licence and per-patent licences for any of the SEPs in suit. The defendants argued that the offers were not FRAND. Huawei and Samsung also contended that the offers were in breach of European competition law. UP settled with Google. Three technical trials of the patents began, and UP was able to show that at least two of the patents sued upon were valid and essential and had been infringed. Subsequently, Samsung secured a settlement (at a rate below the market rate) and the FRAND trial went ahead with just Huawei.

Judge Birss delivered the High Court order on April 5, 2017. He held that UP’s patents were valid and infringed and that UP did not abuse its dominant position by requesting an injunction. He ordered a FRAND injunction, stayed pending appeal, against the two patents that had been infringed. The injunction was subject to a number of conditions, applied because the case dealt with patents subject to a FRAND undertaking, and will cease to have effect if Huawei enters into the FRAND licence determined by the Court. He also observed that the parties can return for further determination when such licence expires. Furthermore, he held that there was one set of FRAND terms and that the scope of those terms was worldwide.

The UK Court of Appeal (Lord Justice Kitchin, Lord Justice Floyd and Lady Justice Asplin), handing down a 291-paragraph, 66-page judgment on Huawei’s appeal, upheld Birss’ findings. Huawei’s appeal centered on the global nature of the FRAND license and the non-discrimination limb of UP’s FRAND commitments. Some significant findings of the Court of Appeal are summarized below.

In upholding Birss’ decision, the Court of Appeal noted that it was unfair to say that UP was using the threat of an injunction to leverage Huawei into taking a global license, since Huawei had the option of taking the global license or submitting to an injunction in the UK. Drawing attention to the potential complexities in a FRAND negotiation, the Court observed:

…The owner of a SEP may still use the threat of an injunction to try to secure the payment of excessive licence fees and so engage in hold-up activities. Conversely, the infringer may refuse to engage constructively or behave unreasonably in the negotiation process and so avoid paying the licence fees to which the SEP owner is properly entitled, a process known as “hold-out”.

Furthermore, Huawei argued that the imposition of a global license on terms set by a national court, based on a national finding of infringement, was wrong in principle. It also pointed out that patent litigation was ongoing in both Germany and China and that there were some countries where UP held “no relevant” patents at all.

In response to these contentions, the Court of Appeal held that it may be highly impractical for a SEP owner to seek to negotiate a license of its patent rights in each country, and rejected Huawei’s submission that the approach adopted by Birss was out of line with the territorial nature of patent litigation. It clarified that Birss had not adjudicated on issues of infringement or validity concerning foreign SEPs and had not usurped the rights of foreign courts. It further observed that Birss’ approach is consistent with the Communication from the Commission to the European Parliament, the Council and the European Economic and Social Committee dated 29 November 2017 (COM (2017) 712 final) (“the November 2017 EU Communication”), which notes in section 2.4:

For products with a global circulation, SEP licences granted on a worldwide basis may contribute to a more efficient approach and therefore can be compatible with FRAND.

The Court of Appeal, however, disagreed with Birss on his conclusion that there was only one set of FRAND terms. This view of the bench comes as a relief, since it appropriately reflects the practical realities of a FRAND negotiation. The Court held:

Patent licences are complex and, having regard to the commercial priorities of the participating undertakings and the experience and preferences of the individuals involved, may be structured in different ways in terms of, for example, the particular contracting parties, the rights to be included in the licence, the geographical scope of the licence, the products to be licensed, royalty rates and how they are to be assessed, and payment terms. Further, concepts such as fairness and reasonableness do not sit easily with such a rigid approach.

Similarly, on the non-discrimination prong of FRAND, the Court of Appeal agreed with Birss that it was not “hard-edged”: the test is whether a difference in rates distorts competition between the licensees. It also noted that the “hard-edged” interpretation would be “akin to the re-insertion of a “most favoured licensee” clause in the FRAND undertaking”, which does not seem to be what the standards body, the European Telecommunications Standards Institute (ETSI), had in mind when it formulated its policies. The Court also held:

We consider that a non-discrimination rule has the potential to harm the technological development of standards if it has the effect of compelling the SEP owner to accept a level of compensation for the use of its invention which does not reflect the value of the licensed technology.

Finally, the Court of Appeal held that UP had not abused its dominant position merely because it failed to strictly comply with the safe harbor framework laid down by the Court of Justice of the European Union in Huawei v. ZTE. The only requirement that must be satisfied before the SEP holder commences proceedings is that it give sufficient notice to, or consult with, the implementer.

The Court of Appeal’s decision offers significant guidance for the emerging policy debate on FRAND. As mentioned at the beginning of this post, the decision is notable particularly because UP is one of only two cases in the last two years in which injunctive relief has been granted in a dispute involving standard essential patents; such relief has rarely been granted at all in recent years. The second instance is Huawei v. Samsung, in which the Shenzhen Court in China held earlier this year that Huawei had met its FRAND obligation while Samsung had not (negotiations had dragged on for six years). An injunction was granted against Samsung for infringing two of Huawei’s Chinese patents, which are counterparts of two patents asserted in the U.S. (Judge Orrick of the U.S. District Court for the Northern District of California, however, enjoined Huawei from enforcing the injunction).

The current jurisprudence on injunctive relief with respect to FRAND-encumbered SEPs is that there is no per se ban on such relief, yet courts have been very reluctant to actually grant it. Although injunctions are statutory remedies, historically granted in most cases in which a patent is found to be infringed, administrative agencies and courts have taken the position that FRAND commitments qualify that premise.

Following the eBay decision in the U.S., defendants in infringement claims involving SEPs have argued that permanent injunctions should not be available for FRAND-encumbered SEPs, an argument that prevailed in cases such as Apple v. Motorola in 2014 (where Judge Randall Rader, in his dissenting opinion, also made a sound case that there was evidence of hold-out by Apple). However, in the institutional bargaining framework of FRAND, which rests on a mutuality of considerations, such a rule is misplaced and likely to disturb that balance. The narrative on FRAND that currently dominates policymaking and jurisprudence is incomplete in its one-sided focus on avoiding the possible problem of patent hold-up, even in the absence of concrete evidence of its likelihood. In Ericsson v. D-Link, the US Court of Appeals for the Federal Circuit underscored this point when it observed that “if an accused infringer wants an instruction on patent hold-up and royalty stacking [to be given to the jury], it must provide evidence on the record of patent hold-up and royalty stacking.”

Remedies emanating from a one-sided perspective tilt the bargaining dynamic in favour of implementers: if the worst penalty a SEP infringer has to pay is the FRAND royalty it would otherwise have paid up front, then hold-out or reverse hold-up becomes a very profitable strategy for implementers. Remedies for patent infringement cannot be ignored, because they are core to the framework for licensing negotiations and to ensuring compliance by licensees. A disproportionate reliance on liability rules over property rights is likely to exacerbate the countervailing problem of hold-out and detrimentally impact incentives to innovate, ultimately undermining the welfare goals that such enforcement seeks to achieve.

The Court of Appeal therefore gave valuable guidance in its decision when it noted:

Just as implementers need protection, so too do the SEP owners. They are entitled to an appropriate reward for carrying out their research and development activities and for engaging with the standardization process, and they must be able to prevent technology users from free-riding on their innovations. It is therefore important that implementers engage constructively in any FRAND negotiation and, where necessary, agree to submit to the outcome of an appropriate FRAND determination.

Hopefully this decision brings some balance to FRAND negotiations, as well as a shift in the perspective of courts in adjudicating these disputes. It underscores an oft-forgotten principle that is core to the FRAND framework: that FRAND is a two-way street, as was observed in the celebrated case of Huawei v. ZTE in 2015.