Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Christine Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)

 

The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation or even the dissolution of some of the world’s most innovative companies essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firm growth as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could both become more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? All of a sudden the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might even find, again, that we end up with even more concentration because the exceptions could subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, especially on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived, endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. To the extent there are gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

Farewell

Alden Abbott — 29 March 2018

On Monday, April 2, I will leave the Heritage Foundation to enter federal government service.  Accordingly, today I am signing off as a regular contributor to Truth on the Market.  First and foremost, I owe a great debt of gratitude to Geoff Manne, who was kind enough to afford me access to TOTM.  Geoff’s outstanding leadership has made TOTM the leading blog site bringing to bear sound law and economics insights on antitrust and related regulatory topics.  I was also privileged to have the opportunity to work on an article with TOTM stalwart Thom Lambert, whose concise book How To Regulate is by far the best general resource on sound regulatory principles (it should sit on the desk of the head of every regulatory agency).  I have also greatly benefited from the always insightful analyses of fellow TOTM bloggers Allen Gibby, Eric Fruits, Joanna Shepherd, Kristian Stout, Mike Sykuta, and Neil Turkewitz.  Thanks to all!  I look forward to continuing to seek enlightenment at truthonthemarket.com.

If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.

Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.

In 2017, he published “The Power of Bias in Economics Research.” He recently talked to Russ Roberts on the EconTalk podcast about his research and what it means for economics.

He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post hopes to provide a simplified explanation of these issues and why bias and power matter.

What is bias?

We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.

In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
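
To make the coin example concrete, here is a minimal simulation sketch (my own illustration, not from Ioannidis): however many times the biased coin is tossed, the estimate converges to the coin’s true 75 percent rate rather than to the 50 percent we would expect from a fair coin.

```python
# Minimal sketch: estimating the probability of heads from a biased coin.
# The experimenter is honest, but the instrument (the coin) is biased, so the
# estimate converges to 0.75 rather than the fair-coin value of 0.50.
import random

def estimate_heads_probability(n_flips, p_heads=0.75, seed=42):
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} flips -> estimated P(heads) = {estimate_heads_probability(n):.3f}")
```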

Publication bias

Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.

Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.

The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).

“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.

On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in employment among low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.

Low power

A study with low statistical power has a reduced chance of detecting a true effect.

Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.

[Figure: Type I (false positive) vs. Type II (false negative) errors]

An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) are designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.

Failing to convict a known criminal is also a serious error, but it is generally agreed to be less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative,” and the probability of making a Type II error is beta.

By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be shown mathematically (though not here) that, all else equal, a reduction in the probability of a Type I error is associated with an increase in the probability of a Type II error.
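
A short numerical sketch (my own illustration, assuming SciPy is available and a made-up true effect of two standard errors in a one-sided z-test) shows the trade-off: as alpha is lowered to protect the innocent, beta, the chance of letting the guilty go free, rises.

```python
# Sketch: for a one-sided z-test with a fixed true effect, lowering alpha
# (fewer false positives) mechanically raises beta (more false negatives).
from scipy.stats import norm

true_effect = 2.0  # assumed true effect, measured in standard errors

for alpha in (0.10, 0.05, 0.01):
    critical_value = norm.ppf(1 - alpha)                 # rejection threshold
    power = 1 - norm.cdf(critical_value - true_effect)   # chance of detecting the effect
    beta = 1 - power                                      # Type II error probability
    print(f"alpha = {alpha:.2f} -> beta = {beta:.2f}, power = {power:.2f}")
```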

Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.

In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated coefficient on the elasticity of employment with respect to the minimum wage would be zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
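
As a concrete sketch of that testing logic (entirely hypothetical, using synthetic data and assuming the statsmodels package is installed), the snippet below builds a toy dataset with an assumed “true” elasticity of –0.05 and asks whether an OLS regression rejects the null of no relationship at alpha = 0.05. With these made-up parameters the test will usually fail to reject even though a true effect exists, a preview of the low-power problem discussed next.

```python
# Hypothetical sketch with synthetic data (not a real minimum wage study):
# regress log employment on the log minimum wage and test H0: elasticity = 0.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
log_min_wage = rng.normal(2.0, 0.2, size=200)
true_elasticity = -0.05                                    # assumed "true" effect
log_employment = 5.0 + true_elasticity * log_min_wage + rng.normal(0, 0.3, size=200)

X = sm.add_constant(log_min_wage)
fit = sm.OLS(log_employment, X).fit()

estimate, p_value = fit.params[1], fit.pvalues[1]
print(f"estimated elasticity = {estimate:.3f}, p-value = {p_value:.3f}")
print("reject H0 at alpha = 0.05?", p_value < 0.05)
```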

Now, we’re getting to power.

Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.

By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent, then out of 100 true effects, those studies will detect only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only 20 of them guilty.

Suppose we expect that 25 percent of those accused of a crime are truly guilty. Thus the odds of guilt are R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05, and conclude the accused is guilty if our test statistic yields p < 0.05. Using Ioannidis’ formula for positive predictive value, we find:

  • If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
  • If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.

In other words, a “guilty” verdict from a low power test is more likely to be a wrongful conviction than one from a high power test.
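
Those two bullet-point figures follow directly from Ioannidis’ formula for the positive predictive value (in its simplest form, ignoring the bias term): PPV = (power × R) / (power × R + alpha). A few lines of arithmetic (my own check, using the same assumptions as above) reproduce them:

```python
# Reproducing the two bullet points above with the PPV formula:
# PPV = (power * R) / (power * R + alpha), where R is the pre-study odds of guilt.
def positive_predictive_value(power, alpha, R):
    return (power * R) / (power * R + alpha)

R = 0.25 / 0.75   # odds that an accused person is truly guilty
alpha = 0.05

for power in (0.20, 0.80):
    ppv = positive_predictive_value(power, alpha, R)
    print(f"power = {power:.0%} -> P(truly guilty | found guilty) = {ppv:.0%}")
```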

In our minimum wage example, a statistically significant finding from a low power study is more likely to be spurious, indicating a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even if a relationship truly exists, a low power study would be more likely to find a bigger impact than a high power study. The figure below demonstrates this phenomenon.

[Figure: funnel graph of minimum wage research, plotting estimated employment elasticities against each study’s precision]

Across the 1,424 studies surveyed, the average elasticity of employment with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate that, among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.
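
The “adjustment” here is just a precision-weighted average. A toy sketch with made-up numbers (not the 1,424 studies in the figure) shows the mechanics: imprecise studies reporting large effects get little weight, so the weighted average is pulled toward the more precise, smaller estimates.

```python
# Toy illustration of a precision-weighted average (all numbers invented).
# Weighting each estimate by its precision (1 / standard error) shrinks the
# influence of noisy studies that happen to report large effects.
import numpy as np

elasticities = np.array([-0.60, -0.30, -0.10, -0.03, -0.01])   # hypothetical estimates
precisions   = np.array([  2.0,   5.0,  40.0, 120.0, 250.0])   # hypothetical 1/SE values

simple_average   = elasticities.mean()
weighted_average = np.average(elasticities, weights=precisions)
print(f"simple average: {simple_average:.3f}, precision-weighted: {weighted_average:.3f}")
```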

(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)

Is economics bogus?

It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:

Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.

For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:

[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.

Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields (social psychology is among the worst). However, these are longer-term solutions.

In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”

 

Introduction

Last week I attended the 17th Annual Conference of the International Competition Network (ICN) held in New Delhi, India from March 21-23.  The Delhi Conference highlighted the key role of the ICN in promoting global convergence toward “best practices” in substantive and procedural antitrust analysis by national antitrust (“competition”) agencies.  The ICN operates as a virtual network of competition agencies and expert “non-governmental advisers” (NGAs), not governments.  As such, the ICN promulgates “recommended practices,” provides online training and other assistance to competition agencies, and serves as a forum for the building of relationships among competition officials (an activity which facilitates cooperation on particular matters and the exchange of advice on questions of antitrust policy and administration).  There is a general consensus among competition agencies and NGAs (I am one) that the ICN has accomplished a great deal since its launch in 2001 – indeed, it has far surpassed expectations.  Although (not surprisingly) inter-jurisdictional differences in perspective on particular competition issues remain, the ICN has done an excellent job in helping ensure that national competition agencies understand each other as they carry out their responsibilities.  By “speaking a common antitrust language,” informed by economic reasoning, agencies are better able to cooperate on individual matters and evaluate the merits of potential changes in law and procedure.

Pre-ICN Program Hosted by Competition Policy International (CPI)

Special one-day programs immediately preceding the ICN have proliferated in recent years.  On March 20, I participated in the small group one-day program hosted by Competition Policy International (CPI), attended by senior competition agency officials, private practitioners, and scholars.  This program featured a morning roundtable covering problems of extraterritoriality and an afternoon roundtable focused on competition law challenges in the digital economy.

The extraterritoriality session reflected the growing number of competition law matters (particularly cartels and mergers) that have effects in multiple jurisdictions.  There appeared to be general support for the proposition that a competition authority should impose remedies that have extraterritorial application only to the extent necessary to remedy harm to competition within the enforcing jurisdiction.  There also was a general consensus that it is very difficult for a competition authority to cede enforcement jurisdiction to a foreign authority, when the first authority finds domestic harm attributable to extraterritorial conduct and has the ability to assert jurisdiction.  Thus, although efforts to promote comity in antitrust enforcement are worthwhile, it must be recognized that there are practical limitations to such initiatives.  As such, a focus on enhancing coordination and cooperation among multiple agencies investigating the same conduct will be of paramount importance.

The digital economy roundtable directed particular attention to enforcement challenges raised by Internet “digital platforms” (e.g., Google, Facebook, Amazon).  In particular, with respect to digital platforms, roundtable participants discussed whether new business models and disruptive innovations create challenges to existing competition law and practices; what recent technology changes portend for market definition, assessment of market power, and other antitrust enforcement concepts; whether new analytic tools are required; and what are good mechanisms to harmonize regulation and competition enforcement.  Although there was no overall consensus on these topics, there was robust discussion of multi-sided market analysis and differences in approach to digital platform oversight.

An ICN Conference Overview

As in recent years, the ICN Conference itself featured set-piece (no Q&A) plenary sessions involving colloquies among top agency officials regarding cartels, unilateral conduct, mergers, advocacy, and agency effectiveness – the areas covered during the year by the ICN’s specialized working groups.  Numerous break-out sessions allowed ICN delegates to discuss in detail particular developments in these areas, and to evaluate and hash out the relative merits of competing approaches to problems.  At least seven generalizations can be drawn from the Delhi Conference’s deliberations.

First, other international organizations that initially had kept their distance from the ICN, specifically the OECD, the World Bank, and UNCTAD, now engage actively with the ICN.  This is a very positive development indeed.  Research carried out by the OECD on competition policy – for example, on the economic evaluation of regulatory approaches (important for competition advocacy), digital platforms, and public tenders – has been injected as “policy inputs” to discrete ICN initiatives.  Annual Competition advocacy contests cosponsored by the ICN and the World Bank have enabled a large number of agencies (particularly in developing countries) to showcase their successes in helping improve the competitive climate within their jurisdictions.  UNCTAD initiatives on competition and economic development can be effectively presented to new competition agencies through ICN involvement.

Second, competition authorities are focusing more intensively on “vertical mergers” involving firms at different levels of the distribution chain.  The ICN can help agencies be attentive to the need to weigh procompetitive efficiencies as well as new theories of anticompetitive harm in investigating these mergers.

Third, the transformation of economies worldwide through the Internet and the “digital revolution” is posing new challenges – and opportunities – for enforcers.  Policy analysis, informed by economics, is evolving in this area.

Fourth, investigations of cartels and bid rigging (collusion in public tenders was the showcase “special project” at the Delhi Conference) remain as significant as ever.  Thinking on the administration of government leniency programs and “ex officio” investigations aimed at ferreting out cartels continues to be refined.

Fifth, the continuing growth in the number and scope of competition laws and the application of those laws to international commerce places a premium on enhanced coordination among competition agencies.  The ICN’s role in facilitating such cooperation thus assumes increased importance.

Sixth, issues of due process, or procedural fairness, commendably are generally recognized as important elements of effective agency administration.  Nevertheless, the precise contours of due process, and its specific application, are not uniform across agencies, and merit continued exploration by the ICN.

Seventh, the question of whether non-purely economic factors (such as fairness, corporate size, and the rights of workers) should be factored into competition analysis is gaining increased traction in a number of jurisdictions, and undoubtedly will be a subject of considerable debate in the years to come.

Conclusion

The ICN is by now a mature organization.  As a virtual network that relies on the power to persuade, not to dictate, it is dynamic, not static.  The ICN continues to respond flexibly to the changing needs of its many members and to global economic developments, within the context of the focused work carried out by its various substantive and process-related working groups.  The Delhi Conference provided a welcome opportunity for a timely review of its accomplishments and an assessment of its future direction.  In short, the ICN remains a highly useful vehicle for welfare-enhancing “soft convergence” among competition law regimes.

 

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about these terms of service and of these data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica; and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections or that when Cambridge Analytica started working for the Trump campaign they were tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchant, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms may delineate and allow users to allocate their privacy interests is, on the one hand, that competition among platforms would promote desirable outcomes and that, on the other hand, the relatively limited and monopolistic competition that we see among firms like Facebook is one of the reasons that consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook to either make users’ data exportable in a standardized form so that users could easily migrate it to other platforms or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.

If there is one lesson from the past decade more trenchant than the difficulty of delineating privacy rights, it is that data security is even harder.

These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only allows trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.

In January a Food and Drug Administration advisory panel, the Tobacco Products Scientific Advisory Committee (TPSAC), voted 8-1 that the weight of scientific evidence shows that switching from cigarettes to an innovative, non-combustible tobacco product such as Philip Morris International’s (PMI’s) IQOS system significantly reduces a user’s exposure to harmful or potentially harmful chemicals.

This finding should encourage the FDA to allow manufacturers to market smoke-free products as safer alternatives to cigarettes. But, perhaps predictably, the panel’s vote has incited a regulatory furor among certain politicians.

Last month, several United States senators, including Richard Blumenthal, Dick Durbin, and Elizabeth Warren, sent a letter to FDA Commissioner Scott Gottlieb urging the agency to

avoid rushing through new products, such as IQOS, … without requiring strong evidence that any such product will reduce the risk of disease, result in a large number of smokers quitting, and not increase youth tobacco use.

At the TPSAC meeting, nine members answered five multi-part questions about proposed marketing claims for the device. Taken as a whole, the panel’s votes indicate considerable agreement that non-combustible tobacco products like IQOS should, in fact, allay the senators’ concerns. And a closer look at the results reveals a much more nuanced outcome than either the letter or much of the media coverage has suggested.

“Reduce the risk of disease”: Despite the finding that IQOS reduces exposure to harmful chemicals, the panel nominally rejected a claim that it would reduce the risk of tobacco-related diseases. The panel’s objection, however, centered on the claim’s wording that IQOS “can reduce” risk, rather than “may reduce” risk. And, in the panel’s closest poll, it rejected by just a single vote the claim that “switching completely to IQOS presents less risk of harm than continuing to smoke cigarettes.”

“Result in large number of smokers quitting”: The panel unanimously concluded that PMI demonstrated a “low” likelihood that former smokers would re-initiate tobacco use with the IQOS system. The only options were “low,” “medium,” and “high.” This doesn’t mean it will necessarily help non-users quit in the first place, of course, but for smokers who do switch, it means the device helps them stay away from cigarettes.

“Not increase youth tobacco use”: A majority of the voting panel members agreed that PMI demonstrated a “low” likelihood that youth “never smokers” would become established IQOS users.

By definition, the long-term health benefits of innovative new products like IQOS are uncertain. But the cost of waiting for perfect information may be substantial.

It’s worth noting that the American Cancer Society recently shifted its position on electronic cigarettes, recommending that individuals who do not quit smoking

should be encouraged to switch to the least harmful form of tobacco product possible; switching to the exclusive use of e-cigarettes is preferable to continuing to smoke combustible products.

Dr. Nancy Rigotti agrees. A professor of medicine at Harvard and Director of the Tobacco Research and Treatment Center at Massachusetts General Hospital, Dr. Rigotti is a prominent tobacco-cessation researcher and the author of a February 2018 National Academies of Science, Engineering, and Medicine Report that examined over 800 peer-reviewed scientific studies on the health effects of e-cigarettes. As she has said:

The field of tobacco control recognizes cessation is the goal, but if the patient can’t quit then I think we should look at harm reduction.

About her recent research, Dr. Rigotti noted:

I think the major takeaway is that although there’s a lot we don’t know, and although they have some health risks, [e-cigarettes] are clearly better than cigarettes….

Unlike the senators pushing the FDA to prohibit sales of non-combustible tobacco products, experts recognize that there is enormous value in these products: the reduction of imminent harm relative to the alternative.

Such harm-reduction strategies are commonplace, even when the benefits aren’t perfectly quantifiable. Bike helmet use is encouraged (or mandated) to reduce the risk and harm associated with bicycling. Schools distribute condoms to reduce teen pregnancy and sexually transmitted diseases. Local jurisdictions offer needle exchange programs to reduce the spread of AIDS and other infectious diseases; some offer supervised injection facilities to reduce the risk of overdose. Methadone and Suboxone are less-addictive opioids used to treat opioid use disorder.

In each of these instances, it is understood that the underlying, harmful behaviors will continue. But it is also understood that the welfare benefits from reducing the harmful effects of such behavior outweigh any gain that might be had from futile prohibition efforts.

By the same token — and seemingly missed by the senators urging an FDA ban on non-combustible tobacco technologies — constraints placed on healthier alternatives induce people, on the margin, to stick with the less-healthy option. Thus, many countries that have adopted age restrictions on their needle exchange programs and supervised injection facilities have seen predictably higher rates of infection and overdose among substance-using youth.

Under the Food, Drug & Cosmetic Act, in order to market “safer” tobacco products, manufacturers must demonstrate that they would (1) significantly reduce harm and the risk of tobacco-related disease to individual tobacco users, and (2) benefit the health of the population as a whole. In addition, the Act limits the labeling and advertising claims that manufacturers can make on their products’ behalf.

These may be well-intentioned restraints, but overly strict interpretation of the rules can do far more harm than good.

In 2015, for example, the TPSAC expressed concerns about consumer confusion in an application to market “snus” (a smokeless tobacco product placed between the lip and gum) as a safer alternative to cigarettes. The manufacturer sought to replace the statement on snus packaging, “WARNING: This product is not a safe alternative to cigarettes,” with one reading, “WARNING: No tobacco product is safe, but this product presents substantially lower risks to health than cigarettes.”

The FDA denied the request, stating that the amended warning label “asserts a substantial reduction in risks, which may not accurately convey the risks of [snus] to consumers” — even though it agreed that snus “substantially reduce the risks of some, but not all, tobacco-related diseases.”

But under this line of reasoning, virtually no amount of net health benefits would merit approval of marketing language designed to encourage the use of less-harmful products as long as any risk remains. And yet consumers who refrain from using snus after reading the stronger warning might instead — and wrongly — view cigarettes as equally healthy (or healthier), precisely because of the warning. That can’t be sound policy if the aim is actually to reduce harm overall.

To be sure, there is a place for government to try to ensure accuracy in marketing based on health claims. But it is impossible for regulators to fine-tune marketing materials to convey the full range of truly relevant information for all consumers. And pressuring the FDA to limit the sale and marketing of smoke-free products as safer alternatives to cigarettes — in the face of scientific evidence that they would likely achieve significant harm-reduction goals — could do far more harm than good.

Excess is unflattering, no less when claiming that every evolution in legal doctrine is a slippery slope leading to damnation. In Friday’s New York Times, Lina Khan trots down this alarmist path while considering the implications for the pending Supreme Court case of Ohio v. American Express. One of the core issues in the case is the proper mode of antitrust analysis for credit card networks as two-sided markets. The Second Circuit Court of Appeals agreed with arguments, such as those that we have made, that it is important to consider the costs and benefits to both sides of a two-sided market when conducting an antitrust analysis. The Second Circuit’s opinion is under review in the American Express case.

Khan regards the Second Circuit approach of conducting a complete analysis of these markets as a mistake.

On her reading, the idea that an antitrust analysis of credit card networks should reflect their two-sided-ness would create “de facto antitrust immunity” for all platforms:

If affirmed, the Second Circuit decision would create de facto antitrust immunity for the most powerful companies in the economy. Since internet technologies have enabled the growth of platform companies that serve multiple groups of users, firms like Alphabet, Amazon, Apple, Facebook, and Uber are set to be prime beneficiaries of the Second Circuit’s warped analysis. Amazon, for example, could claim status as a two-sided platform because it connects buyers and sellers of goods; Google because it facilitates a market between advertisers and search users… Indeed, the reason that the tech giants are lining up behind the Second Circuit’s approach is that — if ratified — it would make it vastly more difficult to use antitrust laws against them.

This paragraph is breathtaking. First, its basic premise is wrong. Requiring a complete analysis of the complicated economic effects of conduct undertaken in two sided markets before imposing antitrust liability would not create “de facto antitrust immunity.” It would require that litigants present, and courts evaluate, credible evidence sufficient to establish a claim upon which an enforcement action can be taken — just like in any other judicial proceeding in any area of law. Novel market structures may require novel analytical models and novel evidence, but that is no different with two-sided markets than with any other complicated issue before a court.

Second, the paragraph’s prescribed response would be, in fact, de facto antitrust liability for any firm competing in a two-sided market — that is, as Khan notes, almost every major tech firm.

A two-sided platform competes with other platforms by facilitating interactions between the two sides of the market. This often requires a careful balancing of the market: in most of these markets too many or too few participants on one side of the market reduces participation on the other side. So these markets play the role of matchmaker, charging one side of the market a premium in order to cross-subsidize a desirable level of participation on the other. This will be discussed more below, but the takeaway for now is that most of these platforms operate by charging one side of the market (or some participants on one side of the market) an above-cost price in order to charge the other side of the market a below-cost price. A platform’s strategy on either side of the market makes no sense without the other, and it does not adopt practices on one side without carefully calibrating them with the other. If one does not consider both sides of these markets, therefore, the simplistic approach that Khan demands will systematically fail to capture both the intent and the effect of business practices in these markets. More importantly, such an approach could be used to find antitrust violations throughout these industries — no matter the state of competition, market share, or actual consumer effects.

What are two-sided markets?

Khan notes that there is some element of two-sidedness in many (if not most) markets:

Indeed, almost all markets can be understood as having two sides. Firms ranging from airlines to meatpackers could reasonably argue that they meet the definition of “two-sided,” thereby securing less stringent review.

This is true, as far as it goes, as any sale of goods likely involves the selling party acting as some form of intermediary between chains of production and consumption. But such a definition is unworkably broad from the point of view of economic or antitrust analysis. If two-sided markets exist as distinct from traditional markets there must be salient features that define those specialized markets.

Economists have been intensively studying two-sided markets (see, e.g., here, here, and here) for the past two decades (and had recognized many of their basic characteristics even before then). As Khan notes, multi-sided platforms have indeed existed for a long time in the economy. Newspapers, for example, provide a targeted outlet for advertisers and incentives for subscribers to view advertisements; shopping malls aggregate retailers in one physical location to lower search costs for customers, while also increasing the retailers’ sales volume. Relevant here, credit card networks are two-sided platforms, facilitating credit-based transactions between merchants and consumers.

One critical feature of multi-sided platforms is the interdependent demand of platform participants. Thus, these markets require a simultaneous critical mass of users on each side in order to ensure the viability of the platform. For instance, a credit card is unlikely to be attractive to consumers if few merchants accept it; and few merchants will accept a credit card that isn’t used by a sufficiently large group of consumers. To achieve critical mass, a multi-sided platform uses both pricing and design choices, and, without critical mass on all sides, the positive feedback effects that enable the platform’s unique matching abilities might not be achieved.

This highlights the key distinction between traditional markets and multi-sided markets. Most markets have two sides (e.g., buyers and sellers), but that alone doesn’t make them meaningfully multi-sided. In a multi-sided market a key function of the platform is to facilitate the relationship between the sides of the market in order to create and maintain an efficient relationship between them. The platform isn’t merely a reseller of a manufacturer’s goods, for instance, but is actively encouraging or discouraging participation by users on both sides of the platform in order to maximize the value of the platform itself — not the underlying transaction — for those users. Consumers, for instance, don’t really care how many pairs of jeans a clothier stocks; but a merchant does care how many cardholders an issuer has on its network. This is most often accomplished by using prices charged to each side (in the case of credit cards, so-called interchange fees) to keep each side an appropriate size.

Moreover, the pricing that occurs on a two-sided platform is secondary, to a varying extent, to the pricing of the subject of the transaction. In a two-sided market, the prices charged to either side of the market are an expression of the platform’s ability to control the terms on which the different sides meet to transact, and that pricing is relatively indifferent to the thing about which the parties are transacting.

The nature of two-sided markets highlights the role of these markets as more like facilitators of transactions and less like traditional retailers of goods (though this distinction is a matter of degree, and different two-sided markets can be more-or-less two-sided). Because the platform uses prices charged to each side of the market in order to optimize overall use of the platform (that is, output or volume of transactions), pricing in these markets operates differently than pricing in traditional markets. In short, the pricing on one side of the platform is often used to subsidize participation on the other side of the market, because the overall value to both sides is increased as a result. Or, conversely, pricing to one side of the market may appear to be higher than the equilibrium level when viewed for that side alone, because this funds a subsidy to increase participation on another side of the market that, in turn, creates valuable network effects for the side of the market facing the higher fees.
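
For readers who want to see those mechanics, here is a toy numerical sketch (linear demand and network-effect parameters invented for illustration, not a model of any actual card network). Participation on each side depends on the price that side faces and on participation on the other side, and a simple grid search over the two prices shows the profit-maximizing platform pricing one side below cost in order to subsidize the side whose presence is most valuable to the other.

```python
# Toy two-sided platform (all parameters made up for illustration).
# Side A (say, cardholders) cares little about side B's participation;
# side B (say, merchants) values side A's participation heavily.
import numpy as np

COST = 5.0                      # platform's per-user cost on each side
GAMMA_A, GAMMA_B = 0.1, 1.2     # how much each side values the other side

def participation(p_a, p_b, rounds=100):
    a = b = 0.0
    for _ in range(rounds):                         # iterate to a fixed point
        a = max(0.0, 10.0 - p_a + GAMMA_A * b)      # side-A demand
        b = max(0.0, 10.0 - p_b + GAMMA_B * a)      # side-B demand
    return a, b

best_profit, best_prices = -np.inf, None
for p_a in np.arange(0.0, 15.0, 0.25):
    for p_b in np.arange(0.0, 15.0, 0.25):
        a, b = participation(p_a, p_b)
        profit = (p_a - COST) * a + (p_b - COST) * b
        if profit > best_profit:
            best_profit, best_prices = profit, (p_a, p_b)

p_a, p_b = best_prices
print(f"profit-maximizing prices: side A = {p_a:.2f}, side B = {p_b:.2f} (cost = {COST})")
```

With these invented parameters the search settles on a side-A price below the 5.0 per-user cost and a side-B price well above it; viewed one side at a time, either price looks anomalous, but together they maximize participation and profit across the platform.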

The result of this dynamic is that it is more difficult to assess the price and output effects in multi-sided markets than in traditional markets. One cannot look at just one side of the platform — at the level of output and price charged to consumers of the underlying product, say — but must look at the combined pricing and output of both the underlying transaction as well as the platform’s service itself, across all sides of the platform.

Thus, as David Evans and Richard Schmalensee have observed, traditional antitrust reasoning is made more complicated in the presence of a multi-sided market:

[I]t is not possible to know whether standard economic models, often relied on for antitrust analysis, apply to multi-sided platforms without explicitly considering the existence of multiple customer groups with interdependent demand…. [A] number of results for single-sided firms, which are the focus of much of the applied antitrust economics literature, do not apply directly to multi-sided platforms.

The good news is that antitrust economists have been focusing significant attention on two- and multi-sided markets for a long while. Their work has included attention to modelling the dynamics and effects of competition in these markets, including how to think about traditional antitrust concepts such as market definition, market power and welfare analysis. What has been lacking, however, has been substantial incorporation of this analysis into judicial decisions. Indeed, this is one of the reasons that the Second Circuit’s opinion in this case was, and why the Supreme Court’s opinion will be, so important: this work has reached the point that courts are recognizing that these markets can and should be analyzed differently than traditional markets.

Getting the two-sided analysis wrong in American Express would harm consumers

Khan describes credit card networks as a “classic case of oligopoly,” and opines that American Express’s contractual anti-steering provision is, “[a]s one might expect, the credit card companies us[ing] their power to block competition.” The initial, inherent tension in this statement should be obvious: the assertion is simultaneously that this is a non-competitive, oligopolistic market and that American Express is using the anti-steering provision to harm its competitors. Indeed, rather than demonstrating a classic case of oligopoly, this demonstrates the competitive purpose that the anti-steering provision serves: facilitating competition between American Express and other card issuers.

The reality of American Express’s anti-steering provision, which prohibits merchants who choose to accept AmEx cards from “steering” their customers to pay for purchases with other cards, is that it is necessary in order for American Express to compete with other card issuers. Just like analysis of multi-sided markets needs to consider all sides of the market, platforms competing in these markets need to compete on all sides of the market.

But the use of complex pricing schemes to determine prices on each side of the market to maintain an appropriate volume of transactions in the overall market creates a unique opportunity for competitors to behave opportunistically. For instance, if one platform charges a high fee to one side of the market in order to subsidize another side of the market (say, by offering generous rewards), this creates an opportunity for a savvy competitor to undermine that balancing by charging the first side of the market a lower fee, thus attracting consumers from its competitor and, perhaps, making its pricing strategy unprofitable. This may appear to be mere price competition. But the effects of price competition on one side of a multi-sided market are more complicated to evaluate than those of traditional price competition.

Generally, price competition has the effect of lowering prices for goods, increasing output, decreasing deadweight losses, and benefiting consumers. But in a multi-sided market, the high prices charged to one side of the market can be used to benefit consumers on the other side of the market; and that consumer benefit can increase output on that side of the market in ways that create benefits for the first side of the market. When a competitor poaches a platform’s business on a single side of a multi-sided market, the effects can be negative for users on every side of that platform’s market.

This is most often seen in cases, like with credit cards, where platforms offer differentiated products. American Express credit cards are qualitatively different from Visa and Mastercard credit cards; they charge more (to both sides of the market) but offer consumers a more expansive rewards program (funded by the higher transaction fees charged to merchants) and offer merchants access to what are often higher-value customers (ensured by the higher fees charged to card holders).

If American Express did not require merchants to abide by its anti-steering rule, it wouldn’t be able to offer this form of differentiated product; it would instead be required to compete solely on price. There are cardholders who prefer higher-status cards with a higher tier of benefits, and there are merchants that prefer to attract a higher-value pool of customers.

But without the anti-steering provisions, the only competition available is on the price to merchants. The anti-steering rule is needed in order to prevent merchants from free-riding on American Express’s investment in attracting a unique group of card holders to its platform. American Express maintains that differentiation from other cards by providing its card holders with unique and valuable benefits — benefits that are subsidized in part by the fees charged to merchants. But merchants that attract customers by advertising that they accept American Express cards, and then steer those customers to other cards, erode the basis of American Express’s product differentiation. Because of the interdependence of both sides of the platform, this ends up undermining the value that consumers receive from the platform as American Express ultimately withdraws consumer-side benefits. In the end, the merchants who valued American Express in the first place are made worse off by virtue of being permitted to selectively free-ride on American Express’s network investment.

At this point it is important to note that many merchants continue to accept American Express cards despite both the cards’ higher merchant fees and these anti-steering provisions. Meanwhile, Visa and Mastercard have much larger market shares, and many merchants do not accept Amex. The fact that merchants who may be irritated by the anti-steering provision continue to accept Amex despite it being more costly, and the fact that they could readily drop Amex and rely on other, larger, and cheaper networks, suggests that American Express creates real value for these merchants. In other words, American Express, in fact, must offer merchants access to a group of consumers who are qualitatively different from those who use Visa or Mastercard cards — and access to this group of consumers must be valuable to those merchants.

An important irony in this case is that those who criticize American Express’s practices, who are arguing these practices limit price competition and that merchants should be able to steer customers to lower-fee cards, generally also argue that modern antitrust law focuses too myopically on prices and fails to account for competition over product quality. But that is precisely what American Express is trying to do: in exchange for a higher price it offers a higher quality card for consumers, and access to a different quality of consumers for merchants.

Anticompetitive conduct here, there, everywhere! Or nowhere.

The good news is that many on the court — and, for that matter, even Ohio’s own attorney — recognize that the effects of the anti-steering rule on the cardholder side of the market need to be considered alongside their effects on merchants:

JUSTICE KENNEDY: Does output include premiums or rewards to customers?
MR. MURPHY: Yeah. Output would include quality considerations as well.

The bad news is that several justices don’t seem to get it. Justice Kagan, for instance, suggested that “the effect of these anti-steering provisions means a market where we will only have high-cost/high-service products.” Justice Kagan’s assertion reveals the hubris of the would-be regulator, bringing to her evaluation of the market a preconception of what that market is supposed to look like. To wit: following her logic, one could just as easily say that without the anti-steering provisions we would have a market with only low-cost/low-service products. Without an evaluation of the relative effects — which is more complicated than simple intuition suggests, especially since one can always pay cash — there is no reason to say that either of these would be a better outcome.

The reality, however, is that it is possible for the market to support both high- and low-cost, and high- and low-service products. In fact, this is the market in which we live today. As Justice Gorsuch said, “American Express’s agreements don’t affect MasterCard or Visa’s opportunity to cut their fees … or to advertise that American Express’s are higher. There is room for all kinds of competition here.” Indeed, one doesn’t need to be particularly creative to come up with competitive strategies that other card issuers could adopt, from those that Justice Gorsuch suggests, to strategies where card issuers are, in fact, “forced” to accept higher fees, which they in turn use to attract more card holders to their networks, such as through sign-up bonuses or awards for American Express customers who use non-American Express cards at merchants who accept them.

A standard response to such proposals is “if that idea is so good, why isn’t the market already doing it?” An important part of the answer in this case is that MasterCard and Visa know that American Express relies on the anti-steering provision in order to maintain its product differentiation.

Visa and Mastercard were initially defendants in this case as well, since they used similar rules to differentiate some of their products. It’s telling that these larger market participants settled: to some extent, harming American Express is worth more to them than their own product differentiation. After all, consumers steered away from American Express will generally use Visa or Mastercard (and their own high-priced cards may be cannibalizing their own low-priced cards anyway, so reducing the value of those cards may not hurt much). It is therefore a better strategy for them to try to use the courts to undermine that provision so that they don’t actually need to compete with American Express.

Without the anti-steering provision, American Express loses its competitive advantage compared to MasterCard and Visa and would be forced to compete against those much larger platforms on their preferred terms. What’s more, this would give those platforms access to American Express’s vaunted high-value card holders without the need to invest resources in competing for them. In other words, outlawing anti-steering provisions could in fact have both anti-competitive intent and effect.

Of course, card networks aren’t necessarily innocent of anticompetitive conduct, one way or the other. Showing that they are on either side of the anti-steering rule requires a sufficiently comprehensive analysis of the industry and its participants’ behavior. But liability cannot be simply determined based on behavior on one side of a two-sided market. These companies can certainly commit anticompetitive mischief, and they need to be held accountable when that happens. But this case is not about letting American Express or tech companies off the hook for committing anticompetitive conduct. This case is about how we evaluate such allegations, weigh them against possible beneficial effects, and put in place the proper thorough analysis for this particular form of business.

Over the last two decades, scholars have studied the nature of multi-sided platforms, and have made a good deal of progress. We should rely on this learning, and make sure that antitrust analysis is sound, not expedient.

The U.S. Federal Trade Commission’s (FTC) well-recognized expertise in assessing unfair or deceptive acts or practices can play a vital role in policing abusive broadband practices.  Unfortunately, however, because Section 5(a)(2) of the FTC Act exempts common carriers from the FTC’s jurisdiction, serious questions have been raised about the FTC’s authority to deal with unfair or deceptive practices in cyberspace that are carried out by common carriers, but involve non-common-carrier activity (in contrast, common carrier services have highly regulated terms and must be made available to all potential customers).

Commendably, the Ninth Circuit held on February 26, in FTC v. AT&T Mobility, that harmful broadband data throttling practices by a common carrier were subject to the FTC’s unfair acts or practices jurisdiction, because the common carrier exception is “activity-based,” and the practices in question did not involve common carrier services.  Key excerpts from the summary of the Ninth Circuit’s opinion follow:

The en banc court affirmed the district court’s denial of AT&T Mobility’s motion to dismiss an action brought by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act, alleging that AT&T’s data-throttling plan was unfair and deceptive. AT&T Mobility’s data-throttling is a practice by which the company reduced customers’ broadband data speed without regard to actual network congestion. Section 5 of the FTC Act gives the agency enforcement authority over “unfair or deceptive acts or practices,” but exempts “common carriers subject to the Acts to regulate commerce.” 15 U.S.C. § 45(a)(1), (2). AT&T moved to dismiss the action, arguing that it was exempt from FTC regulation under Section 5. . . .

The en banc court held that the FTC Act’s common carrier exemption was activity-based, and therefore the phrase “common carriers subject to the Acts to regulate commerce” provided immunity from FTC regulation only to the extent that a common carrier was engaging in common carrier services. In reaching this conclusion, the en banc court looked to the FTC Act’s text, the meaning of “common carrier” according to the courts around the time the statute was passed in 1914, decades of judicial interpretation, the expertise of the FTC and Federal Communications Commission (“FCC”), and legislative history.

Addressing the FCC’s order, issued on March 12, 2015, reclassifying mobile data service from a non-common carriage service to a common carriage service, the en banc court held that the prospective reclassification order did not rob the FTC of its jurisdiction or authority over conduct occurring before the order. Accordingly, the en banc court affirmed the district court’s denial of AT&T’s motion to dismiss.

A key introductory paragraph in the Ninth Circuit’s opinion underscores the importance of the court’s holding for sound regulatory policy:

This statutory interpretation [that the common carrier exception is activity-based] also accords with common sense. The FTC is the leading federal consumer protection agency and, for many decades, has been the chief federal agency on privacy policy and enforcement. Permitting the FTC to oversee unfair and deceptive non-common-carriage practices of telecommunications companies has practical ramifications. New technologies have spawned new regulatory challenges. A phone company is no longer just a phone company. The transformation of information services and the ubiquity of digital technology mean that telecommunications operators have expanded into website operation, video distribution, news and entertainment production, interactive entertainment services and devices, home security and more. Reaffirming FTC jurisdiction over activities that fall outside of common-carrier services avoids regulatory gaps and provides consistency and predictability in regulatory enforcement.

But what can the FTC do about unfair or deceptive practices affecting broadband services, offered by common carriers, subsequent to the FCC’s 2015 reclassification of mobile data service as a common carriage service?  The FTC will be able to act, assuming that the Federal Communications Commission’s December 2017 rulemaking, reclassifying mobile broadband Internet access service as not involving a common carrier service, passes legal muster (as it should).  In order to avoid any legal uncertainty, however, Congress could take the simple step of eliminating the FTC Act’s common carrier exception – an outdated relic that threatens to generate disparate enforcement outcomes toward the same abusive broadband practice, based merely upon whether the parent company is deemed a “common carrier.”

The cause of basing regulation on evidence-based empirical science (rather than mere negative publicity) – and of preventing regulatory interference with First Amendment commercial speech rights – got a judicial boost on February 26.

Specifically, in National Association of Wheat Growers et al. v. Zeise (Monsanto Case), a California federal district court judge preliminarily enjoined application against Monsanto of a labeling requirement imposed by a California regulatory law, Proposition 65.  Proposition 65 mandates that the Governor of California publish a list of chemicals known to the State to cause cancer, and also prohibits any person in the course of doing business from knowingly and intentionally exposing anyone to the listed chemicals without a prior “clear and reasonable” warning.  In this case, California sought to make Monsanto place warning labels on its popular Roundup weed killer products, stating that glyphosate, a widely used herbicide and key Roundup ingredient, was known to cause cancer.  Monsanto, joined by various agribusiness entities, sued to enjoin California from taking that action.  Judge William Shubb concluded that there was insufficient evidence that the active ingredient in Roundup causes cancer, and that requiring Roundup products to carry such warning labels would violate Monsanto’s First Amendment rights by compelling it to engage in false and misleading speech.  Salient excerpts from Judge Shubb’s opinion are set forth below:

[When, as here, it compels commercial speech, in order to satisfy the First Amendment,] [t]he State has the burden of demonstrating that a disclosure requirement is purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest. . . .  The dispute in the present case is over whether the compelled disclosure is of purely factual and uncontroversial information. In this context, “uncontroversial” “refers to the factual accuracy of the compelled disclosure, not to its subjective impact on the audience.” [citation omitted]

 On the evidence before the court, the required warning for glyphosate does not appear to be factually accurate and uncontroversial because it conveys the message that glyphosate’s carcinogenicity is an undisputed fact, when almost all other regulators have concluded that there is insufficient evidence that glyphosate causes cancer. . . .

It is inherently misleading for a warning to state that a chemical is known to the state of California to cause cancer based on the finding of one organization [, the International Agency for Research on Cancer] (which as noted above, only found that substance is probably carcinogenic), when apparently all other regulatory and governmental bodies have found the opposite, including the EPA, which is one of the bodies California law expressly relies on in determining whether a chemical causes cancer. . . .  [H]ere, given the heavy weight of evidence in the record that glyphosate is not in fact known to cause cancer, the required warning is factually inaccurate and controversial. . . .

The court’s First Amendment inquiry here boils down to what the state of California can compel businesses to say. Whether Proposition 65’s statutory and regulatory scheme is good policy is not at issue. However, where California seeks to compel businesses to provide cancer warnings, the warnings must be factually accurate and not misleading. As applied to glyphosate, the required warnings are false and misleading. . . .

As plaintiffs have shown that they are likely to succeed on the merits of their First Amendment claim, are likely to suffer irreparable harm absent an injunction, and that the balance of equities and public interest favor an injunction, the court will grant plaintiffs’ request to enjoin Proposition 65’s warning requirement for glyphosate.

The Monsanto Case commendably highlights a little-appreciated threat of government overregulatory zeal.  Not only may excessive regulation fail a cost-benefit test and undermine private property rights, it may also violate the First Amendment speech rights of private actors when it compels inaccurate speech.  The negative economic consequences may be substantial when the government-mandated speech involves a claim about a technical topic that not only lacks empirical support (and thus may be characterized as “junk science”), but is deceptive and misleading (if not demonstrably false).  Deceptive and misleading speech in the commercial marketplace reduces marketplace efficiency and reduces social welfare (both consumer surplus and producer surplus).  In particular, it does this by deterring mutually beneficial transactions (for example, purchases of Roundup that would occur absent misleading labeling about cancer risks), generating suboptimal transactions (for example, purchases of inferior substitutes to Roundup due to misleading Roundup labeling), and distorting competition within the marketplace (the reallocation of market shares among Roundup and substitutes not subject to labeling).  The short-term static effects of such market distortions may be dwarfed by the dynamic effects, such as firms’ disincentives to invest in innovation in (or even to participate in) markets subject to inaccurate information concerning the firms’ products or services.
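As a stylized illustration of the first of those channels (deterred transactions), consider a handful of hypothetical buyers whose willingness to pay exceeds the product’s price. The figures below are invented solely to show how the lost surplus is tallied; they imply nothing about actual Roundup sales or prices.

```python
# Stylized illustration (all figures hypothetical) of the surplus lost when a
# misleading warning deters purchases that would otherwise benefit both sides.

price = 20.0                              # assumed market price of the product
buyer_values = [35, 30, 28, 25, 22, 21]   # each buyer's willingness to pay
deterred = {30, 22}                       # buyers scared off by the warning

surplus_before = sum(v - price for v in buyer_values)
surplus_after = sum(v - price for v in buyer_values if v not in deterred)

print(surplus_before, surplus_after, surplus_before - surplus_after)
# -> 41.0 29.0 12.0: $12 of consumer surplus is lost on the deterred sales,
#    before even counting the seller's forgone producer surplus on those sales.
```

The same accounting logic extends to the other two channels: substitution toward inferior products and the reallocation of market share away from the inaccurately labeled product.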

In short, the Monsanto Case highlights the fact that government regulation not only imposes an implicit tax on business – it affirmatively distorts the workings of individual markets if it causes the introduction of misleading or deceptive information that is material to marketplace decision-making.  The threat of such distortive regulation may be substantial, especially in areas where regulators interact with “public interest clients” that have an incentive to demonize disfavored activities by private commercial actors – one example being the health and safety regulation of agricultural chemicals.  In those areas, there may be a case for federal preemption of state regulation, and for particularly close supervision of federal agencies to avoid economically inappropriate commercial speech mandates.  Stay tuned for future discussion of such potential legal reforms.

Over the last two decades, the United States government has taken the lead in convincing jurisdictions around the world to outlaw “hard core” cartel conduct.  Such cartel activity reduces economic welfare by artificially fixing prices and reducing the output of affected goods and services.  At the same time, the United States has acted to promote international cooperation among government antitrust enforcers to detect, investigate, and punish cartels.

In 2017, however, the U.S. Court of Appeals for the Second Circuit (citing concerns of “international comity”) held that a Chinese export cartel that artificially raised the price of vitamin imports into the United States should be shielded from U.S. antitrust penalties—based merely on one brief from a Chinese government agency that said it approved of the conduct. The U.S. Supreme Court is set to review that decision later this year, in a case styled Animal Science Products, Inc. v. Hebei Welcome Pharmaceutical Co. Ltd.  By overturning the Second Circuit’s ruling (and disavowing the overly broad “comity doctrine” cited by that court), the Supreme Court would reaffirm the general duty of federal courts to apply federal law as written, consistent with the constitutional separation of powers.  It would also reaffirm the importance of the global fight against cartels, which has reflected consistent U.S. executive branch policy for decades (and has enjoyed strong support from the International Competition Network, the OECD, and the World Bank).

Finally, as a matter of economic policy, the Animal Science Products case highlights the very real harm that occurs when national governments tolerate export cartels that reduce economic welfare outside their jurisdictions, merely because domestic economic interests are not directly affected.  In order to address this problem, the U.S. government should negotiate agreements with other nations under which the signatory states would agree:  (1) not to legally defend domestic exporting entities that impose cartel harm in other jurisdictions; and (2) to cooperate more fully in rooting out harmful export-cartel activity, wherever it is found.

For a fuller discussion of the separation of powers, international relations, and economic policy issues raised by the Animal Science Products case, see my recent Heritage Foundation Legal Memorandum entitled The Supreme Court and Animal Science Products: Sovereignty and Export Cartels.