[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The first post, by Luke Froeb, Michael Doane & Mikhael Shor, is here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
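To make the mechanics concrete, here is a minimal numerical sketch of the symmetric Nash bargaining solution with outside options. The function and the figures are purely illustrative assumptions of ours, not inputs from any expert report in these cases; they simply show how a better outside option translates into a larger share of the surplus.

```python
def nash_split(total_surplus, outside_a, outside_b):
    """Symmetric Nash bargaining: each party receives its outside option plus
    half of the gains from trade (the surplus net of both outside options)."""
    gains_from_trade = total_surplus - outside_a - outside_b
    if gains_from_trade <= 0:
        return None  # no deal: each side prefers its outside option
    return (outside_a + 0.5 * gains_from_trade,
            outside_b + 0.5 * gains_from_trade)

# Illustrative numbers only: improving A's outside option from 10 to 40
# raises A's payoff from a 100-unit surplus from 45 to 60.
print(nash_split(100, 10, 20))  # (45.0, 55.0)
print(nash_split(100, 40, 20))  # (60.0, 40.0)
```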

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
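As a rough illustration of how those three inputs interact, the hypothetical sketch below simply multiplies them to estimate the profit the merged firm would recapture from a rival’s blackout, which is the quantity that drives the increase in bargaining leverage in a model of this kind. The function name and the numbers are ours, not figures from the trial record.

```python
def blackout_recapture(subs_lost, share_switching_to_atandt, margin_per_sub):
    """Profit the merged firm would recapture if a rival suffered a long-term
    blackout of Turner content: the rival's lost subscribers, times the share
    who switch to AT&T, times AT&T's margin on each gained subscriber."""
    return subs_lost * share_switching_to_atandt * margin_per_sub

# Hypothetical inputs (not trial figures): 1,000,000 lost subscribers,
# 12% switching to AT&T, and a $40 monthly margin per gained subscriber.
print(blackout_recapture(1_000_000, 0.12, 40))  # 4,800,000 dollars per month
```

Because the three inputs enter multiplicatively, errors in any one of them compound, which is why Judge Leon’s input-by-input scrutiny mattered.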

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
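A stylized version of that comparison, using invented numbers rather than anything from the record, is sketched below: on the FTC’s theory, the alleged surcharge operates like a tax on handsets built with a rival’s chip, compressing the OEM’s gains from trade and, in turn, the return a rival chipmaker can earn.

```python
def oem_gains_from_trade(device_value, chip_price, royalty_to_qualcomm):
    """OEM surplus per handset: value created minus the chip price and the
    royalty paid to Qualcomm to license its SEPs."""
    return device_value - chip_price - royalty_to_qualcomm

# Invented figures (not from the record). On the FTC's theory, the FRAND royalty
# applies when the OEM buys Qualcomm's chip, while a higher "royalty surcharge"
# applies when it buys a rival's chip.
with_qualcomm_chip = oem_gains_from_trade(device_value=300, chip_price=30, royalty_to_qualcomm=10)
with_rival_chip = oem_gains_from_trade(device_value=300, chip_price=25, royalty_to_qualcomm=10 + 8)
print(with_qualcomm_chip, with_rival_chip)  # 260 vs. 257: the surcharge offsets the rival's lower chip price
```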

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but that he had “reason to believe that the royalty surcharge was substantial” and that it had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L.J. 693 (2000).

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases. Judge Leon closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder. As complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers with a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

Near the end of her new proposal to break up Facebook, Google, Amazon, and Apple, Senator Warren asks, “So what would the Internet look like after all these reforms?”

It’s a good question, because, as she herself notes, “Twenty-five years ago, Facebook, Google, and Amazon didn’t exist. Now they are among the most valuable and well-known companies in the world.”

To Warren, our most dynamic and innovative companies constitute a problem that needs solving.

She described the details of that solution in a blog post:

First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

* * *
Second, my administration would appoint regulators committed to reversing illegal and anti-competitive tech mergers….
I will appoint regulators who are committed to… unwind[ing] anti-competitive mergers, including:

– Amazon: Whole Foods; Zappos;
– Facebook: WhatsApp; Instagram;
– Google: Waze; Nest; DoubleClick

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.  

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren’s “Platform Utility” policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would also similarly stagnate. Enforcing platform “neutrality” necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can’t sell its own apps, it also can’t pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else’s apps? That’s discriminatory, too. Maybe it will be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It’s hard to see how that benefits consumers — or even app developers.


Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011 at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that forced non-discrimination likely means Google offering only the antiquated “ten blue links” search results page it started with in 1998 instead of the far more useful “rich” results it offers today; logically it would also mean Google somehow offering the set of links produced by any and all other search engines’ algorithms in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.


And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world entails not only the end of an entire, promising avenue of consumer-benefiting innovation, it also entails the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren’s policy “solutions” are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let’s address just a few (Warren’s claims in quotation marks):

  • “Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
  • “Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). (Antitrust laws protect consumers, not inefficient competitors). And no, this was not a case of predatory pricing. After many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

  • “In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

    Furthermore, if the Microsoft case is responsible for “clearing a path” for Google, is it not also responsible for clearing a path for Google’s alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that the same more-aggressive enforcement standard wouldn’t have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google’s rise. But Microsoft doesn’t exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top in spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much or more on capex as the world’s largest oil companies:

[Chart: capital expenditures of the largest technology and oil companies. Source: WSJ]

Warren also faults Big Tech for a decline in startups, saying,

The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.

But this trend predates the existence of the companies she criticizes, as data from Quartz show.

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has led to an increase in average firm age and left fewer workers to start their own businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help.”

Arguably, it is also simply getting harder to innovate. As economists Nick Bloom, Chad Jones, John Van Reenen and Michael Webb argue,

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

Longtime TOTM blogger Paul Rubin has a new book now available for preorder on Amazon.

The book’s description reads:

In spite of its numerous obvious failures, many presidential candidates and voters are in favor of a socialist system for the United States. Socialism is consistent with our primitive evolved preferences, but not with a modern complex economy. One reason for the desire for socialism is the misinterpretation of capitalism.   

The standard definition of free market capitalism is that it’s a system based on unbridled competition. But this oversimplification is incredibly misleading—capitalism exists because human beings have organically developed an elaborate system based on trust and collaboration that allows consumers, producers, distributors, financiers, and the rest of the players in the capitalist system to thrive.

Paul Rubin, the world’s leading expert on cooperative capitalism, explains simply and powerfully how we should think about markets, economics, and business—making this book an indispensable tool for understanding and communicating the vast benefits the free market bestows upon societies and individuals. 

On March 14, the Federal Circuit will hear oral arguments in the case of BTG International v. Amneal Pharmaceuticals that could dramatically influence the future of duplicative patent litigation in the pharmaceutical industry.  The court will determine whether the America Invents Act (AIA) bars patent challengers that succeed in invalidating patents in inter partes review (IPR) proceedings from repeating their winning arguments in district court.  Courts and litigants had previously assumed that the AIA’s estoppel provision only prevented unsuccessful challengers from reusing failed arguments.  However, in an amicus brief filed in the case last month, the U.S. Patent and Trademark Office (USPTO) argued that, although it seems counterintuitive, under the AIA, even parties that succeed in getting patents invalidated in IPR cannot reuse their arguments.

If the Federal Circuit agrees with the USPTO, patent challengers could be strongly deterred from bringing IPR proceedings because it would mean they couldn’t reuse any arguments in district court.  This deterrent effect would be especially strong for generic drug makers, who must prevail in district court in order to get approval for their Abbreviated New Drug Application from the FDA. 

Critics of the USPTO’s position assert that it will frustrate the AIA’s purpose of facilitating generic competition.  However, if the Federal Circuit adopts the position, it would also reduce the amount of duplicative litigation that plagues the pharmaceutical industry and threatens new drug innovation.  According to a 2017 analysis of over 6,500 IPR challenges filed between 2012 and 2017, approximately 80% of IPR challenges were filed during an ongoing district court case challenging the patent.   This duplicative litigation can increase costs for both challengers and patent holders; the median cost for an IPR proceeding that results in a final decision is $500,000 and the median cost for just filing an IPR petition is $100,000.  Moreover, because of duplicative litigation, pharmaceutical patent holders face persistent uncertainty about the validity of their patents. Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

In addition, deterring IPR challenges would help to reestablish balance between drug patent owners and patent challengers.  As I’ve previously discussed here and here, the pro-challenger bias in IPR proceedings has led to a significant divergence in patent invalidation rates between the two pathways; compared to district court challenges, patents are twice as likely to be found invalid in IPR challenges. The challenger is more likely to prevail in IPR proceedings because the Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity than do federal courts. Furthermore, if the challenger prevails in the IPR proceedings, the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision in favor of the patent holder.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

However, the USPTO acknowledges that its position is counterintuitive because it means that a court could not consider invalidity arguments that the PTAB found persuasive.  It is unclear whether the Federal Circuit will refuse to adopt this counterintuitive position or whether Congress will amend the AIA to limit estoppel to failed invalidity claims.  As a result, a better and more permanent way to eliminate duplicative litigation would be for Congress to enact the Hatch-Waxman Integrity Act of 2019 (HWIA).  The HWIA was introduced by Senator Thom Tillis in the Senate and Congressman Bill Flores in the House, and was proposed in the last Congress by Senator Orrin Hatch.  The HWIA eliminates the ability of drug patent challengers to file duplicative claims in both federal court and IPR proceedings.  Instead, they must choose either district court litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or IPR proceedings (which are faster and provide certain pro-challenger provisions).

Thus, the HWIA would reduce the duplicative litigation that increases costs and uncertainty for drug patent owners.  This would ensure that patent owners achieve clarity on the validity of their patents, which would spur new drug innovation and ensure that consumers continue to have access to life-improving drugs.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]

[Froeb, Doane & Shor: This post does not attempt to answer the question of what the court should decide in FTC v. Qualcomm because we do not have access to the information that would allow us to make such a determination. Rather, we focus on economic issues confronting the court by drawing heavily from our writings in this area: Gregory Werden & Luke Froeb, Why Patent Hold-Up Does Not Violate Antitrust Law; Luke Froeb & Mikhael Shor, Innovators, Implementors and Two-sided Hold-up; Bernard Ganglmair, Luke Froeb & Gregory Werden, Patent Hold Up and Antitrust: How a Well-Intentioned Rule Could Retard Innovation.]

Not everything is “hold-up”

It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.

First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold-out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent hold-up—even where it might exist—need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.

At the heart of this case (and many recent cases) is (1) an indictment of Qualcomm’s desire to charge royalties to the makers of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well-vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.

The smallest component “principle”

In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).

So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much like retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations negates any benefit of doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but without the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”

What the FTC and proponents of the smallest component principle miss is that some of the value of every component of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were and often continue to be much worse than dedicated cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from features other than communication, many features on a smartphone derive much of their value from the phone’s communication powers.

In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:

For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primary to the commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
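A back-of-the-envelope version of that example, with invented numbers, shows the trade-off. Under a single component-based royalty the innovator must either price to the low-value use and leave value on the table in the high-value use, or price to the high-value use and lose the low-value market; either way it captures less than the total value its patent creates.

```python
def revenue_from_uniform_royalty(royalty, value_by_use):
    """Revenue under one per-component royalty: a use adopts the component
    only if the royalty does not exceed the value the patent adds in that use."""
    return sum(royalty for value in value_by_use.values() if royalty <= value)

# Invented values of the same patented component in two uses:
values = {"commercial aircraft": 100.0, "toy aircraft": 5.0}

print(revenue_from_uniform_royalty(5.0, values))    # 10.0: both adopt, but the high-value use is underpriced
print(revenue_from_uniform_royalty(100.0, values))  # 100.0: the toy market is priced out
# Use-based royalties would capture 105.0, the full value the innovation creates.
```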

The role of economics

Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb, et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by itself, has clear business justifications, the FTC claims that combining them leads to an anticompetitive outcome.

Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.

Concluding policy thoughts

What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty for that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why its complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.

In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. output per worker to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.

Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.


In my fifteen years as a law professor, I’ve become convinced that there’s a hole in the law school curriculum.  When it comes to regulation, we focus intently on the process of regulating and the interpretation of rules (see, e.g., typical administrative law and “leg/reg” courses), but we rarely teach students what, as a matter of substance, distinguishes a good regulation from a bad one.  That’s unfortunate, because lawyers often take the lead in crafting regulatory approaches. 

In the fall of 2017, I published a book seeking to fill this hole.  That book, How to Regulate: A Guide for Policymakers, is the inspiration for a symposium that will occur this Friday (Feb. 8) at the University of Missouri Law School.

The symposium, entitled Protecting the Public While Fostering Innovation and Entrepreneurship: First Principles for Optimal Regulation, will bring together policymakers and regulatory scholars who will go back to basics. Participants will consider two primary questions:

(1) How, as a substantive matter, should regulation be structured in particular areas? (Specifically, what regulatory approaches would be most likely to forbid the bad while chilling as little of the good as possible and while keeping administrative costs in check? In other words, what rules would minimize the sum of error and decision costs?), and

(2) What procedures would be most likely to generate such optimal rules?


The symposium webpage includes the schedule for the day (along with a button to Livestream the event), but here’s a quick overview.

I’ll set the stage by discussing the challenge policymakers face in trying to accomplish three goals simultaneously: ban bad instances of behavior, refrain from chilling good ones, and keep rules simple enough to be administrable.

We’ll then hear from a panel of experts about the principles that would best balance those competing concerns in their areas of expertise. Specifically:

  • Jerry Ellig (George Washington University; former chief economist of the FCC) will discuss telecommunications policy;
  • TOTM’s own Gus Hurwitz (Nebraska Law) will consider regulation of Internet platforms; and
  • Erika Lietzan (Mizzou Law) will examine the regulation of therapeutic drugs and medical devices.

Hopefully, we can identify some common threads among the substantive principles that should guide effective regulation in these disparate areas.

Before we turn to consider regulatory procedures, we will hear from our keynote speaker, Commissioner Hester Peirce of the SEC. As The Economist recently reported, Commissioner Peirce has been making waves with her speeches, many of which have gone back to basics and asked why the government is intervening and whether it’s doing so in an optimal fashion.

Following Commissioner Peirce’s address, we will hear from the following panelists about how regulatory procedures should be structured in order to generate substantively optimal rules:

  • Bridget Dooling (George Washington University; former official in the White House Office of Information and Regulatory Affairs);
  • Ken Davis (former Deputy Attorney General of Virginia and member of the Federalist Society’s Regulatory Transparency Project);
  • James Broughel (Senior Fellow at the Mercatus Center; expert on state-level regulatory review procedures); and
  • Justin Smith (former counsel to Missouri governor; led the effort to streamline the Missouri regulatory code).

As you can see, this Friday is going to be a great day at Mizzou Law. If you’re close enough to join us in person, please come. Otherwise, please join us via Livestream.

In the opening seconds of what was surely one of the worst oral arguments in a high-profile case that I have ever heard, Pantelis Michalopoulos, arguing for petitioners against the FCC’s 2018 Restoring Internet Freedom Order (RIFO), expertly captured both why the side he was representing should lose and the overall absurdity of the entire net neutrality debate: “This order is a stab in the heart of the Communications Act. It would literally write ‘telecommunications’ out of the law. It would end the communications agency’s oversight over the main communications service of our time.”

The main communications service of our time is the Internet. The Communications and Telecommunications Acts were written before the advent of the modern Internet, for an era when the telephone was the main communications service of our time. The reality is that technological evolution has written “telecommunications” out of these Acts – the “telecommunications services” they were written to regulate are no longer the important communications services of the day.

The basic question of the net neutrality debate is whether we expect Congress to weigh in on how regulators should respond when an industry undergoes fundamental change, or whether we should instead allow those regulators to redefine the scope of their own authority. In the RIFO case, petitioners (and, more generally, net neutrality proponents) argue that agencies should get to define their own authority. Those on the other side of the issue (including me) argue that it is up to Congress to provide agencies with guidance in response to changing circumstances – and worry that allowing independent and executive branch agencies broad authority to act without Congressional direction is a recipe for unfettered, unchecked, and fundamentally abusive concentrations of power in the hands of the executive branch.

These arguments were central to the DC Circuit’s evaluation of the prior FCC net neutrality order – the Open Internet Order. But rather than consider the core issue of the case, the four hours of oral arguments this past Friday were instead a relitigation of long-ago addressed ephemeral distinctions, padded out with irrelevance and esoterica, and argued with a passion available only to those who believe in faerie tales and monsters under their bed. Perhaps some reveled in hearing counsel for both sides clumsily fumble through strained explanations of the difference between standalone telecommunications services and information services that are by definition integrated with them, or awkward discussions about how ISPs may implement hypothetical prioritization technologies that have not even been developed. These well-worn arguments successfully demonstrated, once again, how many angels can dance upon the head of a single pin – only never before have so many angels been so irrelevant.

This time around, petitioners challenging the order were able to scare up some intervenors to make novel arguments on their behalf. Most notably, they were able to scare up a group of public safety officials to argue that the FCC had failed to consider arguments that the RIFO would jeopardize public safety services that rely on communications networks. I keep using the word “scare” because these arguments are based upon incoherent fears peddled by net neutrality advocates in order to find unsophisticated parties to sign on to their policy adventures. The public safety fears are about as legitimate as concerns that the Easter Bunny might one day win the Preakness – and merited as much response from the FCC as a petition from the Racehorse Association of America demanding the FCC regulate rabbits.

In the end, I have no idea how the DC Circuit is going to come down in this case. Public safety concerns – like declarations of national emergencies – are often given undue and unwise weight. And there is a legitimately puzzling, if fundamentally academic, argument about a provision of the Communications Act (47 USC 257(c)) that Congress repealed after the Order was adopted, and that was a noteworthy part of the notice the FCC gave when the Order was proposed, which could lead the Court to remand the Order to the Commission.

In the end, however, this case is unlikely to address the fundamental question of whether the FCC has any business regulating Internet access services. If the FCC loses, we’ll be back here in another year or two; if the FCC wins, we’ll be back here the next time a Democrat is in the White House. And the real tragedy is that every minute the FCC spends on the interminable net neutrality non-debate is a minute not spent on issues like closing the rural digital divide or promoting competitive entry into markets by next generation services.

So much wasted time. So many billable hours. So many angels dancing on the head of a pin. If only they were the better angels of our nature.


Postscript: If I sound angry about the endless fights over net neutrality, it’s because I am. I live in one of the highest-cost, lowest-connectivity states in the country. A state where much of the territory is covered by small rural carriers for whom the cost of just following these debates can mean delaying the replacement of an old switch, upgrading a circuit to fiber, or wiring a street. A state in which if prioritization were to be deployed it would be so that emergency services would be able to work over older infrastructure or so that someone in a rural community could remotely attend classes at the University or consult with a primary care physician (because forget high speed Internet – we have counties without doctors in them). A state in which if paid prioritization were to be developed it would be to help raise capital to build out service to communities that have never had high-speed Internet access.

So yes: the fact that we might be in for another year of rule making followed by more litigation because some firefighters signed up for the wrong wireless service plan and then were duped into believing a technological, economic, and political absurdity about net neutrality ensuring they get free Internet access does make me angry. Worse, unlike the hypothetical harms net neutrality advocates are worried about, the endless discussion of net neutrality causes real, actual, concrete harm to the people net neutrality advocates like to pat themselves on the back as advocating for. We should all be angry about this, and demanding that Congress put this debate out of our misery.

The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well, however, to be wary of examples from other jurisdictions that are rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals’ rights to free expression in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value these companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which, if implemented in the US, would contravene Americans’ First Amendment guarantees of free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

The ACCC thus recognizes that businesses may work to beat out their rivals and thereby gain market share. However, this is immediately followed by the caveat that the state may prevent such activity when such market gains are merely “at risk” of coming at the expense of consumers or business rivals. Thus, the ACCC does not need to show that harm has been done, merely that it might take place — even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesized concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. Yet this would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality, and utility, of these services to users.

A second, but more troubling prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

A recent working paper by Hashmat Khan and Matthew Strathearn attempts to empirically link anticompetitive collusion to the boom and bust cycles of the economy.

The level of collusion is higher during a boom relative to a recession as collusion occurs more frequently when demand is increasing (entering into a collusive arrangement is more profitable and deviating from an existing cartel is less profitable). The model predicts that the number of discovered cartels and hence antitrust filings should be procyclical because the level of collusion is procyclical.

The first sentence—a hypothesis that collusion is more likely during a “boom” than in recession—seems reasonable. At the same time, a case can be made that collusion would be more likely during recession. For example, a reduced risk of entry from competitors would reduce the cost of collusion.

The second sentence, however, seems a stretch, mainly because it doesn’t recognize the time delay between the collusive activity, the date the collusion is discovered by authorities, and the date the case is filed.

Perhaps more importantly, it doesn’t acknowledge that many collusive arrangements span months, if not years. That span of time could include times of “boom” and times of recession. Thus, it can be argued that the date of the filing has little (or nothing) to do with the span over which the collusive activity occurred.

I did a very lazy man’s test of my criticisms. I looked at six of the filings cited by Khan and Strathearn for the year 2011, a “boom” year with a high number of horizontal price fixing cases filed.


My first suspicion was correct. In these six cases, an average of more than three years passed between the date of the last collusive activity and the date the case was filed. Thus, whether the economy is in a boom or a bust when the case is filed provides no useful information regarding the state of the economy when the collusion occurred.

Nevertheless, my lazy man’s small sample test provides some interesting—and I hope useful—information regarding Khan and Strathearn’s conclusions.

  1. From July 2001 through September 2009, 24 of the 99 months were in recession. In other words, during this period, there was a 24 percent chance the economy was in recession in any given month.
  2. Five of the six collusive arrangements began when the economy was in recovery. Only one began during a recession. This may seem to support their conclusion that collusive activity is more likely during a recovery. However, even if the arrangements began randomly, there would be a 55 percent chance that five or more began during a recovery (see the sketch after this list). So, you can’t read too much into the observation that most of the collusive agreements began during a “boom.”
  3. In two of the cases, the collusive activity occurred during a span of time that had no recession. The chances of this happening randomly are less than 1 in 20,000, supporting their conclusion regarding collusive activity and the business cycle.
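Item 2 can be checked with a short calculation. Under the simplifying, purely illustrative assumption that each of the six arrangements began in a month drawn independently and uniformly from the 99-month window, the chance that five or more began in one of the 75 non-recession months is roughly 55 percent, matching the figure above. The probability in item 3 depends on the length of each collusive span, which is not reproduced here.

```python
from math import comb

MONTHS = 99            # July 2001 through September 2009
RECESSION_MONTHS = 24  # months in recession over that window (per the post)
p_recovery = (MONTHS - RECESSION_MONTHS) / MONTHS  # about 0.758

def prob_at_least(k, n, p):
    """P(at least k successes in n independent Bernoulli(p) trials)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Chance that 5 or more of the 6 arrangements began in a recovery month,
# assuming random, independent start months.
print(round(prob_at_least(5, 6, p_recovery), 2))  # ~0.55
```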

Khan and Strathearn fall short in linking collusive activity to the business cycle but do a good job of linking antitrust enforcement activities to the business cycle. The information they use from the DOJ website is sufficient to determine when the collusive activity occurred—but it’ll take more vigorous “scrubbing” (their word) of the site to get the relevant data.

The bigger question, however, is the relevance of this research. Naturally, one could argue this line of research indicates that competition authorities should be extra vigilant during a booming economy. Yet, Adam Smith famously noted, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” This suggests that collusive activity—or the temptation to engage in such activity—is always and everywhere present, regardless of the business cycle.