Archives For technology

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The first post, by Luke Froeb, Michael Doane & Mikhael Shor, is here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agencies’ attempts to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
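The framework’s key comparative static, that a party with a better outside option captures a larger share of the surplus, can be illustrated with a minimal numerical sketch (all values below are hypothetical, chosen only for illustration):

```python
def nash_split(surplus, d1, d2, alpha=0.5):
    """Symmetric Nash bargaining: each party receives its outside option
    (disagreement payoff) plus a share of the net surplus from agreement."""
    net = surplus - d1 - d2
    assert net >= 0, "no gains from trade; bargaining fails"
    return d1 + alpha * net, d2 + alpha * net

# Total surplus of 100; both parties have outside options worth 10.
print(nash_split(100, 10, 10))  # -> (50.0, 50.0): equal leverage, equal split

# If party 1's outside option improves to 40, it captures more of the pie.
print(nash_split(100, 40, 10))  # -> (65.0, 35.0)
```

The second call shows the dynamic the experts in these cases dispute: nothing about the other party changed, yet its share fell because its counterparty’s walk-away position improved.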

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
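The arithmetic at the core of a model built on those three inputs reduces to a single product. A minimal sketch, using invented numbers purely for illustration (these are not figures from the case):

```python
def blackout_gain(lost_subs, diversion_rate, monthly_margin):
    """Hypothetical sketch of the model's core arithmetic: the merged firm's
    gain from a rival's long-term blackout of Turner content is the rival's
    lost subscribers, times the share who switch to AT&T, times AT&T's
    per-subscriber profit margin."""
    return lost_subs * diversion_rate * monthly_margin

# Illustrative (invented) inputs: a rival loses 1M subscribers, 12% of whom
# switch to AT&T, each worth $40/month in margin to AT&T.
gain = blackout_gain(1_000_000, 0.12, 40)
print(f"${gain:,.0f} per month")  # -> $4,800,000 per month
```

Because the output is a simple product, an error in any one input propagates proportionally into the predicted leverage, which is why Judge Leon’s input-by-input scrutiny was decisive.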

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
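The alleged mechanism can be sketched in stylized form. Every number below is hypothetical and illustrative only; these are not Dr. Shapiro’s actual figures:

```python
# Stylized sketch of the alleged "royalty surcharge" mechanism. The OEM
# compares its gains from trade under each chip supplier; a surcharge owed
# only on rival-chip devices shrinks the price a rival can charge and
# still win the sale.
def oem_gain(device_value, chip_price, royalty):
    """OEM's gain from trade on one device."""
    return device_value - chip_price - royalty

frand_royalty = 5.0
surcharge = 7.0  # alleged supra-FRAND premium on rival-chip devices

# Qualcomm chip at $30 with a FRAND royalty:
g_qualcomm = oem_gain(100.0, 30.0, frand_royalty)
# Rival chip: the OEM still owes the SEP royalty plus the surcharge, so the
# rival must price at $23 rather than $30 to leave the OEM equally well off.
g_rival = oem_gain(100.0, 23.0, frand_royalty + surcharge)
print(g_qualcomm, g_rival)  # -> 65.0 65.0
```

In this stylization the rival’s attainable margin is squeezed dollar-for-dollar by the surcharge, which is the effect the FTC alleges deters rival investment. Whether any such surcharge exists in fact is, of course, precisely what the empirical evidence discussed below contests.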

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L.J. 693 (2000).

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases. Judge Leon closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder. As complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

Near the end of her new proposal to break up Facebook, Google, Amazon, and Apple, Senator Warren asks, “So what would the Internet look like after all these reforms?”

It’s a good question, because, as she herself notes, “Twenty-five years ago, Facebook, Google, and Amazon didn’t exist. Now they are among the most valuable and well-known companies in the world.”

To Warren, our most dynamic and innovative companies constitute a problem that needs solving.

She described the details of that solution in a blog post:

First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

* * *
Second, my administration would appoint regulators committed to reversing illegal and anti-competitive tech mergers….
I will appoint regulators who are committed to… unwind[ing] anti-competitive mergers, including:

– Amazon: Whole Foods; Zappos;
– Facebook: WhatsApp; Instagram;
– Google: Waze; Nest; DoubleClick

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.  

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren’s “Platform Utility” policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would also similarly stagnate. Enforcing platform “neutrality” necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can’t sell its own apps, it also can’t pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else’s apps? That’s discriminatory, too. Maybe it will be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It’s hard to see how that benefits consumers — or even app developers.

Source: Free Software Magazine

Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011 at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that Google with forced non-discrimination likely means Google offering only the antiquated “ten blue links” search results page it started with in 1998 instead of the far more useful “rich” results it offers today; logically it would also mean Google somehow offering the set of links produced by any and all other search engines’ algorithms, in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.

Source: Web Design Museum  

And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world entails not only the end of an entire, promising avenue of consumer-benefiting innovation, it also entails the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren’s policy “solutions” are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let’s address just a few (Warren in bold):

  • “Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
  • “Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). (Antitrust laws protect consumers, not inefficient competitors). And no, this was not a case of predatory pricing. After many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

  • “In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

    Furthermore, if the Microsoft case is responsible for “clearing a path” for Google, is it not also responsible for clearing a path for Google’s alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that that same more-aggressive enforcement standard wouldn’t have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google’s rise. But Microsoft doesn’t exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top in spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much or more on capex as the world’s largest oil companies:

Source: WSJ

Warren also faults Big Tech for a decline in startups, saying,

The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.

But this trend predates the existence of the companies she criticizes, as a chart from Quartz shows.

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has led to an increase in average firm age and left fewer would-be founders to launch new businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help.”

Arguably, it is also simply getting harder to innovate. As economists Nick Bloom, Chad Jones, John Van Reenen and Michael Webb argue,

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]

[Froeb, Doane & Shor: This post does not attempt to answer the question of what the court should decide in FTC v. Qualcomm because we do not have access to the information that would allow us to make such a determination. Rather, we focus on economic issues confronting the court by drawing heavily from our writings in this area: Gregory Werden & Luke Froeb, Why Patent Hold-Up Does Not Violate Antitrust Law; Luke Froeb & Mikhael Shor, Innovators, Implementors and Two-sided Hold-up; Bernard Ganglmair, Luke Froeb & Gregory Werden, Patent Hold Up and Antitrust: How a Well-Intentioned Rule Could Retard Innovation.]

Not everything is “hold-up”

It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.

First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent hold-up—even where it might exist—need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.

At the heart of this case (and many recent cases) is (1) an indictment of Qualcomm’s desire to charge royalties to makers of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well-vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.

The smallest component “principle”

In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).

So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much like retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations would negate any benefit of doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but lacked the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”

What the FTC and proponents of the smallest component principle miss is that some of the value of every component of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were, and often continue to be, much worse than dedicated cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from non-communication features, many features on a smartphone derive much of their value from the communication powers of the phone.

In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:

For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primarily at commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
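The airplane illustration can be made concrete with a minimal numeric sketch. All figures here are hypothetical, chosen only to show the mechanism: a royalty pegged to the component’s price collects the same amount per unit in every end market, so it either leaves high-value commercial uses largely uncaptured or prices low-value toy uses out entirely.

```python
# Stylized sketch (all numbers hypothetical): one patented component sold
# into two end markets where the patent creates very different value.
component_price = 10.0                               # same chip in both markets
value_added = {"commercial": 1000.0, "toy": 1.0}     # patent's value per unit
units = {"commercial": 100, "toy": 100_000}

def innovator_revenue(royalty_per_unit):
    # The innovator collects the same per-unit royalty in every market, but
    # never more than the value the patent adds there (implementers will not
    # pay more than the technology is worth to them).
    return sum(min(royalty_per_unit, value_added[m]) * units[m] for m in units)

# Royalty capped as a fraction of the component price (say, 50% of a $10 chip):
chip_based = innovator_revenue(0.5 * component_price)    # 100,500

# Market-by-market royalties tied to value capture the full surplus:
value_based = sum(value_added[m] * units[m] for m in units)  # 200,000
```

Under these numbers the component-price royalty recovers about half the value the patent creates, and almost none of it from the high-value commercial market, which is the underinvestment problem described above.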

The role of economics

Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb, et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by itself, has a clear business justification, the FTC claims that combining them leads to an anticompetitive outcome.

Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.
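The kind of bargaining model referred to above can be sketched in a few lines. This is not the model from our prior work or from the FTC’s case, just a generic Nash-bargaining split with hypothetical numbers, to show how such a model makes explicit “what matters”: the joint surplus, each side’s outside option, and relative bargaining power.

```python
# Generic Nash-bargaining sketch (illustrative numbers only): the negotiated
# payment gives each party its outside option plus a share of the net gains
# from trade, with the share set by bargaining power.
def nash_royalty(surplus, patentee_outside, licensee_outside, patentee_weight=0.5):
    # Net gains from reaching agreement, over and above the outside options.
    gains = surplus - patentee_outside - licensee_outside
    return patentee_outside + patentee_weight * gains

# Joint surplus of 100, outside options of 10 (patentee) and 20 (licensee),
# equal bargaining weights: the patentee receives 10 + 0.5 * 70 = 45.
payment = nash_royalty(surplus=100.0, patentee_outside=10.0, licensee_outside=20.0)
```

Anything that shifts an outside option or a bargaining weight — say, the threat of later antitrust review of the contract — moves the negotiated payment in a direction the model predicts, which is precisely what makes the theory of harm falsifiable.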

Concluding policy thoughts

What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty of that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why its complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.

In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. GDP to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.

Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest:

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps I didn’t originally address: the first relates to expert bias, and the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than it would have by randomly choosing outcomes. In technical parlance, this means the experts’ opinions were not calibrated: there was no correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events experts deemed impossible occurred with some regularity. In a number of fields, these supposedly unlikely events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
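Calibration, as used above, has a concrete meaning that a small sketch can make plain. The forecasts below are invented for illustration, not Tetlock’s data: group predictions by their stated probability, then compare each group’s stated probability with how often the predicted event actually happened.

```python
# Minimal sketch (hypothetical data) of what "calibration" means: bucket
# forecasts by stated probability and compare each bucket's stated
# probability with the observed frequency of the predicted event.
forecasts = [  # (stated probability, did the event occur?)
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),   # "90%" calls
    (0.1, False), (0.1, True), (0.1, True), (0.1, False),   # "10%" calls
]

def calibration_table(forecasts):
    table = {}
    for p, occurred in forecasts:
        hits, total = table.get(p, (0, 0))
        table[p] = (hits + occurred, total + 1)
    # A well-calibrated forecaster: observed frequency ~ stated probability.
    return {p: hits / total for p, (hits, total) in table.items()}

result = calibration_table(forecasts)
# Here events called at "90%" happened only half the time, and events called
# at "10%" also happened half the time -- the uncalibrated, overprecise
# pattern Tetlock documented.
```

A calibrated forecaster would show roughly 0.9 and 0.1 in those buckets; the gap between stated probability and observed frequency is the overprecision the text describes.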

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to have overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,   

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and more risky political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, knowledge yields diminishing marginal predictive returns. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. The coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event:

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believes the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many:

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and in videos online. Over time, external communication has risen to a prominent role in congressional political offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard and fast conclusions, it could help explain why bolstering tech expertise hasn’t been a winning legislative issue: the demand just isn’t there. And given the priorities offices actually display, more expertise might not yield any benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Last week, the DOJ cleared the merger of CVS Health and Aetna (conditional on Aetna’s divesting its Medicare Part D business), a merger that, as I previously noted at a House Judiciary hearing, “presents a creative effort by two of the most well-informed and successful industry participants to try something new to reform a troubled system.” (My full testimony is available here).

Of course it’s always possible that the experiment will fail — that the merger won’t “revolutioniz[e] the consumer health care experience” in the way that CVS and Aetna are hoping. But it’s a low (antitrust) risk effort to address some of the challenges confronting the healthcare industry — and apparently the DOJ agrees.

I discuss the weakness of the antitrust arguments against the merger at length in my testimony. What I particularly want to draw attention to here is how this merger — like many vertical mergers — represents business model innovation by incumbents.

The CVS/Aetna merger is just one part of a growing private-sector movement in the healthcare industry to adopt new (mostly) vertical arrangements that seek to move beyond some of the structural inefficiencies that have plagued healthcare in the United States since World War II. Indeed, ambitious and interesting as it is, the merger arises amidst a veritable wave of innovative, vertical healthcare mergers and other efforts to integrate the healthcare services supply chain in novel ways.

These sorts of efforts (and the current DOJ’s apparent support for them) should be applauded and encouraged. I need not rehash the economic literature on vertical restraints here (see, e.g., Lafontaine & Slade, etc.). But especially where government interventions have already impaired the efficient workings of a market (as they surely have, in spades, in healthcare), it is important not to compound the error by trying to micromanage private efforts to restructure around those constraints.   

Current trends in private-sector-driven healthcare reform

In the past, the most significant healthcare industry mergers have largely been horizontal (i.e., between two insurance providers, or two hospitals) or “traditional” business model mergers for the industry (i.e., vertical mergers aimed at building out managed care organizations). This pattern suggests a sort of fealty to the status quo, with insurers interested primarily in expanding their insurance business or providers interested in expanding their capacity to provide medical services.

Today’s health industry mergers and ventures seem more frequently to be different in character, and they portend an industry-wide experiment in the provision of vertically integrated healthcare that we should enthusiastically welcome.

Drug pricing and distribution innovations

To begin with, the CVS/Aetna deal, along with the also recently approved Cigna-Express Scripts deal, solidifies the vertical integration of pharmacy benefit managers (PBMs) with insurers.

But a number of other recent arrangements and business models center around relationships among drug manufacturers, pharmacies, and PBMs, and these tend to minimize the role of insurers. While not a “vertical” arrangement, per se, Walmart’s generic drug program, for example, offers $4 prescriptions to customers regardless of insurance (the typical generic drug copay for patients covered by employer-provided health insurance is $11), and Walmart does not seek or receive reimbursement from health plans for these drugs. It’s been offering this program since 2006, but in 2016 it entered into a joint buying arrangement with McKesson, a pharmaceutical wholesaler (itself vertically integrated with Rexall pharmacies), to negotiate lower prices. The idea, presumably, is that Walmart will entice consumers to its stores with the lure of low-priced generic prescriptions in the hope that they will buy other items while they’re there. That prospect presumably makes it worthwhile to route around insurers and PBMs, and their reimbursements.

Meanwhile, both Express Scripts and CVS Health (two of the country’s largest PBMs) have made moves toward direct-to-consumer sales themselves, establishing pricing for a small number of drugs independently of health plans and often in partnership with drug makers directly.   

Also apparently focused on disrupting traditional drug distribution arrangements, Amazon has recently purchased online pharmacy PillPack (out from under Walmart, as it happens), and with it received pharmacy licenses in 49 states. The move introduces a significant new integrated distributor/retailer, and puts competitive pressure on other retailers and distributors and potentially insurers and PBMs, as well.

Whatever its role in driving the CVS/Aetna merger (and I believe it is smaller than many reports like to suggest), Amazon’s moves in this area demonstrate the fluid nature of the market, and the opportunities for a wide range of firms to create efficiencies in the market and to lower prices.

At the same time, the differences between Amazon and CVS/Aetna highlight the scope of product and service differentiation that should contribute to the ongoing competitiveness of these markets following mergers like this one.

While Amazon inarguably excels at logistics and the routinizing of “back office” functions, it seems unlikely for the foreseeable future to be able to offer (or to be interested in offering) a patient interface that can rival the service offerings of a brick-and-mortar CVS pharmacy combined with an outpatient clinic and its staff and bolstered by the capabilities of an insurer like Aetna. To be sure, online sales and fulfillment may put price pressure on important, largely mechanical functions, but, like much technology, it is first and foremost a complement to services offered by humans, rather than a substitute. (In this regard it is worth noting that McKesson has long been offering Amazon-like logistics support for both online and brick-and-mortar pharmacies. “‘To some extent, we were Amazon before it was cool to be Amazon,’ McKesson CEO John Hammergren said” on a recent earnings call).

Treatment innovations

Other efforts focus on integrating insurance and treatment functions or on bringing together other, disparate pieces of the healthcare industry in interesting ways — all seemingly aimed at finding innovative, private solutions to solve some of the costly complexities that plague the healthcare market.

Walmart, for example, announced a deal with Quest Diagnostics last year to experiment with offering diagnostic testing services and potentially other basic healthcare services inside of some Walmart stores. While such an arrangement may simply be a means of making doctor-prescribed diagnostic tests more convenient, it may also suggest an effort to expand the availability of direct-to-consumer (patient-initiated) testing (currently offered by Quest in Missouri and Colorado) in states that allow it. A partnership with Walmart to market and oversee such services has the potential to dramatically expand their use.

Capping off (for now) a buying frenzy in recent years that included the purchase of the PBM CatamaranRx, UnitedHealth is seeking approval from the FTC for the proposed merger of its Optum unit with the DaVita Medical Group — a move that would significantly expand UnitedHealth’s ability to offer medical services (including urgent care, outpatient surgeries, and health clinic services), give it a significant group of doctors’ clinics throughout the U.S., and turn UnitedHealth into the largest employer of doctors in the country. But of course this isn’t a traditional managed care merger — it represents a significant bet on the decentralized, ambulatory care model that has been slowly replacing significant parts of the traditional, hospital-centric care model for some time now.

And, perhaps most interestingly, some recent moves are bringing together drug manufacturers and diagnostic and care providers in innovative ways. Swiss pharmaceutical company, Roche, announced recently that “it would buy the rest of U.S. cancer data company Flatiron Health for $1.9 billion to speed development of cancer medicines and support its efforts to price them based on how well they work.” Not only is the deal intended to improve Roche’s drug development process by integrating patient data, it is also aimed at accommodating efforts to shift the pricing of drugs, like the pricing of medical services generally, toward an outcome-based model.

Similarly interesting, and in a related vein, early this year a group of hospital systems including Intermountain Health, Ascension, and Trinity Health announced plans to begin manufacturing generic prescription drugs. This development further reflects the perceived benefits of vertical integration in healthcare markets, and the move toward creative solutions to the unique complexity of coordinating the many interrelated layers of healthcare provision. In this case,

[t]he nascent venture proposes a private solution to ensure contestability in the generic drug market and consequently overcome the failures of contracting [in the supply and distribution of generics]…. The nascent venture, however it solves these challenges and resolves other choices, will have important implications for the prices and availability of generic drugs in the US.

More enforcement decisions like CVS/Aetna and Bayer/Monsanto; fewer like AT&T/Time Warner

In the face of all this disruption, it’s difficult to credit anticompetitive fears like those expressed by the AMA in opposing the CVS-Aetna merger and a recent CEA report on pharmaceutical pricing, both of which are premised on the assumption that drug distribution is unavoidably dominated by a few PBMs in a well-defined, highly concentrated market. Creative arrangements like the CVS-Aetna merger and the initiatives described above (among a host of others) indicate an ease of entry, the fluidity of traditional markets, and a degree of business model innovation that suggest a great deal more competitiveness than static PBM market numbers would suggest.

This kind of incumbent innovation through vertical restructuring is an increasingly important theme in antitrust, and efforts to tar such transactions with purported evidence of static market dominance are simply misguided.

While the current DOJ’s misguided (and, remarkably, continuing) attempt to stop the AT&T/Time Warner merger is an aberrant step in the wrong direction, the leadership at the Antitrust Division generally seems to get it. Indeed, in spite of strident calls for stepped-up enforcement in the always-controversial ag-biotech industry, the DOJ recently approved three vertical ag-biotech mergers in fairly rapid succession.

As I noted in a discussion of those ag-biotech mergers, but equally applicable here, regulatory humility should continue to carry the day when it comes to structural innovation by incumbent firms:

But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call any law as ineffective as these “good”. And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-about issue at both the FCC and FTC. Today, there are well over 4 billion robocalls made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1992, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. And we were still in a world of analog telephones, and Caller ID was still a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare; most of these calls came to landline phones during dinner, where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny to it, on the basis that it was regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for trying to do things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vain and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, it is their principal duty to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.

 

Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.“ Thus, he recommends a data portability mandate, which would theoretically serve to benefit startups by providing them with the data that large firms possess. The necessary implication here is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their business to the processes of another firm, which generates an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to make, but it is a risk. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).  

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms will have behaved anticompetitively, but merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning that has occurred in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by the gut feeling of regulators that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP aversion to concentration in itself, prevented the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.

 

Senator Mark Warner has proposed 20 policy prescriptions for bringing “big tech” to heel. The proposals — which run the gamut from policing foreign advertising on social networks to regulating feared competitive harms — provide much interesting material for Congress to consider.

On the positive side, Senator Warner introduces the idea that online platforms may be able to function as least-cost avoiders with respect to certain tortious behavior of their users. He advocates for platforms to implement technology that would help control the spread of content that courts have found violated certain rights of third-parties.

Yet, on other accounts — specifically the imposition of an “interoperability” mandate on platforms — his proposals risk doing more harm than good.

The interoperability mandate was included by Senator Warner in order to “blunt [tech platforms’] ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.” According to Senator Warner, such a measure would enable startups to offset the advantages that arise from network effects on large tech platforms by building their services more easily on the backs of successful incumbents.

Whatever you think of the moats created by network effects, the example of “successful” previous regulation on this issue that Senator Warner relies upon is perplexing:

A prominent template for [imposing interoperability requirements] was in the AOL/Time Warner merger, where the FCC identified instant messaging as the ‘killer app’ – the app so popular and dominant that it would drive consumers to continue to pay for AOL service despite the existence of more innovative and efficient email and internet connectivity services. To address this, the FCC required AOL to make its instant messaging service (AIM, which also included a social graph) interoperable with at least one rival immediately and with two other rivals within 6 months.

But the AOL/Time Warner merger and the FCC’s conditions provide an example that demonstrates the exact opposite of what Senator Warner suggests. The much-feared 2001 megamerger prompted, as the Senator notes, fears that the new company would be able to leverage its dominance in the nascent instant messaging market to extend its influence into adjacent product markets.

Except, by 2003, despite it being unclear that AOL had developed interoperable systems, two large competitors had arisen that did not run interoperable IM networks (Yahoo! and Microsoft). In that same period, AOL’s previously 100% IM market share had declined by about half. By 2009, after eight years of heavy losses, Time Warner shed AOL, and by last year AIM was completely dead.

Not only was it unclear that AOL was able to make AIM interoperable, but AIM was also never able to catch up once better, rival services launched. What the conditions did do, however, was prevent AOL from launching competitive video chat services as it flailed about in the wake of the deal, thus forcing it to miss out on a market opportunity available to unencumbered competitors like Microsoft and Yahoo!

And all of this of course ignores the practical impossibility of interfering in highly integrated technology platforms.

The AOL/Time Warner merger conditions are no template for successful tech regulation. Congress would be ill-advised to rely upon such templates for crafting policy around tech and innovation.

What to make of Wednesday’s decision by the European Commission alleging that Google has engaged in anticompetitive behavior? In this post, I contrast the European Commission’s (EC) approach to competition policy with US antitrust, briefly explore the history of smartphones and then discuss the ruling.

Asked about the EC’s decision the day it was announced, FTC Chairman Joseph Simons noted that, while the market is concentrated, Apple and Google “compete pretty heavily against each other” with their mobile operating systems, in stark contrast to the way the EC defined the market. Simons also stressed that for the FTC what matters is not the structure of the market per se but whether or not there is harm to the consumer. This again contrasts with the European Commission’s approach, which does not require harm to consumers. As Simons put it:

Once they [the European Commission] find that a company is dominant… that imposes upon the company kind of like a fairness obligation irrespective of what the effect is on the consumer. Our regulatory… our antitrust regime requires that there be a harm to consumer welfare — so the consumer has to be injured — so the two tests are a little bit different.

Indeed, and as the history below shows, the popularity of Apple’s iOS and Google’s Android operating systems arose because they were superior products — not because of anticompetitive conduct on the part of either Apple or Google. On the face of it, the conduct of both Apple and Google has led to consumer benefits, not harms. So, from the perspective of U.S. antitrust authorities, there is no reason to take action.

Moreover, there is a danger that by taking action as the EU has done, competition and innovation will be undermined — which would be a perverse outcome indeed. These concerns were reflected in a statement by Senator Mike Lee (R-UT):

Today’s decision by the European Commission to fine Google over $5 billion and require significant changes to its business model to satisfy EC bureaucrats has the potential to undermine competition and innovation in the United States,” Sen. Lee said. “Moreover, the decision further demonstrates the different approaches to competition policy between U.S. and EC antitrust enforcers. As discussed at the hearing held last December before the Senate’s Subcommittee on Antitrust, Competition Policy & Consumer Rights, U.S. antitrust agencies analyze business practices based on the consumer welfare standard. This analytical framework seeks to protect consumers rather than competitors. A competitive marketplace requires strong antitrust enforcement. However, appropriate competition policy should serve the interests of consumers and not be used as a vehicle by competitors to punish their successful rivals.

Ironically, the fundamental basis for the Commission’s decision is an analytical framework developed by economists at Harvard in the 1950s, which presumes that the structure of a market determines the conduct of the participants, which in turn presumptively affects outcomes for consumers. This “structure-conduct-performance” paradigm has been challenged both theoretically and empirically (and by “challenged,” I mean “demolished”).

Maintaining, as EC Commissioner Vestager has, that “What would serve competition is to have more players,” is to adopt a presumption regarding competition rooted in the structure of the market, without sufficient attention to the facts on the ground. As French economist Jean Tirole noted in his Nobel Prize lecture:

Economists accordingly have advocated a case-by-case or “rule of reason” approach to antitrust, away from rigid “per se” rules (which mechanically either allow or prohibit certain behaviors, ranging from price-fixing agreements to resale price maintenance). The economists’ pragmatic message however comes with a double social responsibility. First, economists must offer a rigorous analysis of how markets work, taking into account both the specificities of particular industries and what regulators do and do not know….

Second, economists must participate in the policy debate…. But of course, the responsibility here goes both ways. Policymakers and the media must also be willing to listen to economists.

In good Tirolean fashion, we begin with an analysis of how the market for smartphones developed. What quickly emerges is that the structure of the market is a function of intense competition, not its absence. And, by extension, mandating a different structure will likely impede competition, or, at the very least, will not likely contribute to it.

A brief history of smartphone competition

In 2006, Nokia’s N70 became the first smartphone to sell more than a million units. It was a beautiful device, with a simple touch screen interface and real push buttons for numbers. The following year, Apple released its first iPhone. It sold 7 million units — about the same as Nokia’s N95 and slightly less than LG’s Shine. Not bad, but paltry compared to the sales of Nokia’s 1200 series phones, which had combined sales of over 250 million that year — about twice the total of all smartphone sales in 2007.

By 2017, smartphones had come to dominate the market, with total sales of over 1.5 billion. At the same time, the structure of the market had changed dramatically. In the first quarter of 2018, Apple’s iPhone X and iPhone 8 were the two best-selling smartphones in the world. In total, Apple shipped just over 52 million phones, accounting for 14.5% of the global market. Samsung, which has a wider range of devices, sold even more: 78 million phones, or 21.7% of the market. At third and fourth place were Huawei (11%) and Xiaomi (7.5%). Nokia and LG didn’t even make it into the top 10, with market shares of only 3% and 1% respectively.

Several factors have driven this highly dynamic market. Dramatic improvements in cellular data networks have played a role. But arguably of greater importance has been the development of software that offers consumers an intuitive and rewarding experience.

Apple’s iOS and Google’s Android operating systems have proven to be enormously popular among both users and app developers. This has generated synergies — or what economists call network externalities — as more apps have been developed, so more people are attracted to the ecosystem and vice versa, leading to a virtuous circle that benefits both users and app developers.

By contrast, Nokia’s early smartphones, including the N70 and N95, ran Symbian, the operating system developed for Psion’s handheld devices, which had a clunkier user interface and was more difficult to code — so it was less attractive to both users and developers. In addition, Symbian lacked an effective means of solving the problem of fragmentation of the operating system across different devices, which made it difficult for developers to create apps that ran across the ecosystem — something both Apple (through its closed system) and Google (through agreements with carriers) were able to address. Meanwhile, Java’s MIDP, used in LG’s Shine, and its successor J2ME imposed restrictions on developers (such as prohibiting access to files, hardware, and network connections) that seem to have made it less attractive than Android.

The relative superiority of their operating systems enabled Apple and the manufacturers of Android-based phones to steal a march on the early leaders in the smartphone revolution.

The fact that Google allows smartphone manufacturers to install Android for free, distributes Google Play and other apps in a free bundle, and pays such manufacturers for preferential treatment for Google Search, has also kept the cost of Android-based smartphones down. As a result, Android phones are the cheapest on the market, providing a powerful experience for as little as $50. It is reasonable to conclude from this that innovation, driven by fierce competition, has led to devices, operating systems, and apps that provide enormous benefits to consumers.

The Commission decision would harm device manufacturers, app developers and consumers

The EC’s decision seems to disregard the history of smartphone innovation and competition and their ongoing consequences. As Dirk Auer explains, the Open Handset Alliance (OHA) was created specifically to offer an effective alternative to Apple’s iPhone — and it worked. Indeed, it worked so spectacularly that Android is installed on about 80% of all new phones. This success was the result of several factors that the Commission now seeks to undermine:

First, in order to maintain order within the Android universe, and thereby ensure that apps developed for Android would function on the vast majority of Android devices, Google and the OHA sought to limit the extent to which Android “forks” could be created. (Apple didn’t face this problem because its source code is proprietary, so cannot be modified by third-party developers.) One way Google does this is by imposing restrictions on the licensing of its proprietary apps, such as the Google Play store (a repository of apps, similar to Apple’s App Store).

Device manufacturers that don’t conform to these restrictions may still build devices with their forked version of Android — but without those Google apps. Indeed, Amazon chose to develop a non-conforming version of Android and built its own app repository for its Fire devices (though it is still possible to add the Google Play Store). That strategy seems to be working for Amazon in the tablet market; in 2017 it rose past Samsung to become the second biggest manufacturer of tablets worldwide, after Apple.

Second, in order to be able to offer Android for free to smartphone manufacturers, Google sought to develop unique revenue streams (because, although the software is offered for free, it turns out that software developers generally don’t work for free). The main way Google did this was by requiring manufacturers that choose to install Google Play also to install its browser (Chrome) and search tools, which generate revenue from advertising. At the same time, Google kept its platform open by permitting preloads of rivals’ apps and creating a marketplace where rivals can also reach scale. Mozilla’s Firefox browser, for example, has been downloaded over 100 million times on Android.

The importance of these factors to the success of Android is acknowledged by the EC. But instead of treating them as legitimate business practices that enabled the development of high-quality, low-cost smartphones and a universe of apps that benefits billions of people, the Commission simply asserts that they are harmful, anticompetitive practices.

For example, the Commission asserts that

In order to be able to pre-install on their devices Google’s proprietary apps, including the Play Store and Google Search, manufacturers had to commit not to develop or sell even a single device running on an Android fork. The Commission found that this conduct was abusive as of 2011, which is the date Google became dominant in the market for app stores for the Android mobile operating system.

This is simply absurd, to say nothing of ahistorical. As noted, the restrictions on Android forks play an important role in maintaining the coherency of the Android ecosystem. If device manufacturers were able to freely install Google apps (and other apps via the Play Store) on devices running problematic Android forks that were unable to run the apps properly, consumers — and app developers — would be frustrated, Google’s brand would suffer, and the value of the ecosystem would be diminished. Extending this restriction to all devices produced by a specific manufacturer, regardless of whether they come with Google apps preinstalled, reinforces the importance of the prohibition to maintaining the coherency of the ecosystem.

It is ridiculous to say that something (efforts to rein in Android forking) that made perfect sense until 2011 and that was central to the eventual success of Android suddenly becomes “abusive” precisely because of that success — particularly when the pre-2011 efforts were often viewed as insufficient and unsuccessful (a January 2012 Guardian Technology Blog post, “How Google has lost control of Android,” sums it up nicely).

Meanwhile, if Google is unable to tie pre-installation of its search and browser apps to the installation of its app store, then it will have less financial incentive to continue to maintain the Android ecosystem. Or, more likely, it will have to find other ways to generate revenue from the sale of devices in the EU — such as charging device manufacturers for Android or Google Play. The result is that consumers will be harmed, either because the ecosystem will be degraded, or because smartphones will become more expensive.

The troubling absence of Apple from the Commission’s decision

In addition, the EC’s decision is troublesome in other ways. First, for its definition of the market. The ruling asserts that “Through its control over Android, Google is dominant in the worldwide market (excluding China) for licensable smart mobile operating systems, with a market share of more than 95%.” But “licensable smart mobile operating systems” is a very narrow definition, as it necessarily precludes operating systems that are not licensable — such as Apple’s iOS and RIM’s Blackberry OS. Since Apple has nearly 25% of the market share of smartphones in Europe, the European Commission has — through its definition of the market — presumed away the primary source of effective competition. As Pinar Akman has noted:

How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system only on devices that Apple itself manufactures?

The EU then invents a series of claims regarding the lack of competition with Apple:

  • end user purchasing decisions are influenced by a variety of factors (such as hardware features or device brand), which are independent from the mobile operating system;

It is not obvious that this is evidence of a lack of competition. A better explanation is that the EU’s narrow definition of the market is defective. In fact, one could easily draw the opposite conclusion of that drawn by the Commission: the fact that purchasing decisions are driven by various factors suggests that there is substantial competition, with phone manufacturers seeking to design phones that offer a range of features, on a number of dimensions, to best capture diverse consumer preferences. They are able to do this in large part precisely because consumers are able to rely upon a generally similar operating system and continued access to the apps that they have downloaded. As Tim Cook likes to remind his investors, Apple is quite successful at targeting “Android switchers” to switch to iOS.

 

  • Apple devices are typically priced higher than Android devices and may therefore not be accessible to a large part of the Android device user base;

 

And yet, in the first quarter of 2018, Apple phones accounted for five of the top ten selling smartphones worldwide. Meanwhile, several competing phones, including the fifth and sixth best-sellers, Samsung’s Galaxy S9 and S9+, sell for similar prices to the most expensive iPhones. And a refurbished iPhone 6 can be had for less than $150.

 

  • Android device users face switching costs when switching to Apple devices, such as losing their apps, data and contacts, and having to learn how to use a new operating system;

 

This is, of course, true for any system switch. And yet the growing market share of Apple phones suggests that some users are willing to part with those sunk costs. Moreover, the increasing predominance of cloud-based and cross-platform apps, as well as Apple’s own “Move to iOS” Android app (which facilitates the transfer of users’ data from Android to iOS), means that the costs of switching border on trivial. As mentioned above, Tim Cook certainly believes in “Android switchers.”

 

  • even if end users were to switch from Android to Apple devices, this would have limited impact on Google’s core business. That’s because Google Search is set as the default search engine on Apple devices and Apple users are therefore likely to continue using Google Search for their queries.

 

This is perhaps the most bizarre objection of them all. The fact that Apple chooses to install Google search as the default demonstrates that consumers prefer that system over others. Indeed, this highlights a fundamental problem with the Commission’s own rationale. As Akman notes:

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) cases of tying in the EU to date concerned tying where the dominant undertaking leveraged its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Conclusion

As the foregoing demonstrates, the EC’s decision is based on a fundamental misunderstanding of the nature and evolution of the market for smartphones and associated applications. The statement by Commissioner Vestager quoted above — that “What would serve competition is to have more players” — betrays this misunderstanding and highlights the erroneous assumptions underpinning the Commission’s analysis, which is wedded to a theory of market competition that was long ago thrown out by economists.

And, thankfully, it appears that the FTC Chairman is aware of at least some of the flaws in the EC’s conclusions.

Google will undoubtedly appeal the Commission’s decision. For the sake of the millions of European consumers who rely on Android-based phones and the millions of software developers who provide Android apps, let’s hope that they succeed.