
At this point, only the most masochistic and cynical among DC’s policy elite actually desire for the net neutrality conflict to continue. And yet, despite claims that net neutrality principles are critical to protecting consumers, passage of the current Congressional Review Act (“CRA”) disapproval resolution in Congress would undermine consumer protection and promise only to drag out the fight even longer.

The CRA resolution is primarily intended to roll back the FCC’s re-re-classification of broadband as a Title I service under the Communications Act in the Restoring Internet Freedom Order (“RIFO”). The CRA allows Congress to vote to repeal rules recently adopted by federal agencies; upon a successful CRA vote, the rules are rescinded and the agency is prohibited from adopting substantially similar rules in the future.

But, as TechFreedom has noted, it’s not completely clear that a CRA resolution aimed at a regulatory classification decision will work quite the way Congress intends, and it could simply trigger more litigation cycles, largely because it is unclear which parts of the RIFO are actually “rules” subject to the CRA. Harold Feld has written a critique of TechFreedom’s position, arguing, in effect, that of course the RIFO is a rule; TechFreedom responded with a pretty devastating rejoinder.

But this exchange really demonstrates TechFreedom’s central argument: It is sufficiently unclear how or whether the CRA will apply to the various provisions of the RIFO that the only things the CRA is guaranteed to do are 1) strip consumers of certain important protections — it would take away the FCC’s transparency requirements for ISPs and imperil privacy protections currently ensured by the FTC — and 2) prolong the already interminable litigation and political back-and-forth over net neutrality.

The CRA is political theater

The CRA resolution effort is not about good Internet regulatory policy; rather, it’s pure political opportunism ahead of the midterms. Democrats have recognized net neutrality as a good wedge issue because of its low political opportunity cost. The highest-impact costs of over-regulating broadband through classification decisions are hard to see: Rather than bad things happening, the costs arrive in the form of good things not happening. Eventually those costs work their way to customers through higher access prices or less service — especially in rural areas most in need of it — but even these effects take time to show up and, when they do, are difficult to pin on any particular net neutrality decision, including the CRA resolution. Thus, measured in electoral time scales, prolonging net neutrality as a painful political issue — even though actual resolution of the process by legislation would be the sensible course — offers tremendous upside for political challengers and little cost.  

The truth is, there is widespread agreement that net neutrality issues need to be addressed by Congress: A constant back and forth between the FCC (and across its own administrations) and the courts runs counter to the interests of consumers, broadband companies, and edge providers alike. Virtually whatever that legislative solution ends up looking like, it would be an improvement over the unstable status quo.

There have been various proposals from Republicans and Democrats — many of which contain provisions that are likely bad ideas — but in the end, a bill passed with bipartisan input should have the virtue of capturing an open public debate on the issue. Legislation won’t be perfect, but it will be tremendously better than the advocacy playground that net neutrality has become.

What would the CRA accomplish?

Regardless of what one thinks of the substantive merits of TechFreedom’s arguments on the CRA and the arcana of legislative language distinguishing between agency “rules” and “orders,” if the CRA resolution is successful (a prospect that is a bit more likely following the Senate vote to pass it), what follows is pretty clear.

The only certain result of the CRA resolution becoming law would be to void the transparency provisions that the FCC introduced in the RIFO — the one part of the Order that is pretty clearly a “rule” subject to CRA review — and to bar the FCC from adopting a similar transparency rule in its place. Everything else is going to end up — surprise! — before the courts, which would serve only to keep the issues surrounding net neutrality unsettled for another several years. (A cynic might suggest that this is, in fact, the goal of net neutrality proponents, for whom net neutrality has had, and continues to have, important political valence.)

And if the CRA resolution withstands the inevitable legal challenge to its rescission of the rest of the RIFO, it would also (once again) remove broadband privacy from the FTC’s purview, placing it back in the lap of the FCC — an agency already prohibited from adopting privacy rules following last year’s successful CRA resolution undoing the Wheeler FCC’s broadband privacy regulations. The result is that we could be left without any broadband privacy regulator at all — presumably not the outcome strong net neutrality proponents want — but they persevere nonetheless.

Moreover, TechFreedom’s argument that the CRA may not apply to all parts of the RIFO could have a major effect on whether or not Congress is even accomplishing anything at all (other than scoring political points) with this vote. It could be the case that the CRA applies only to “rules” and not “orders,” or it could be the case that even if the CRA does apply to the RIFO, its passage would not force the FCC to revive the abrogated 2015 Open Internet Order, as proponents of the CRA vote hope.

Whatever one thinks of these arguments, however, they are based on a sound reading of the law and present substantial enough questions to sustain lengthy court challenges. Thus, far from a CRA vote actually putting to rest the net neutrality issue, it is likely to spawn litigation that will drag out the classification uncertainty question for at least another year (and probably more, with appeals).

Stop playing net neutrality games — they aren’t fun

Congress needs to stop trying to score easy political points on this issue while avoiding the hard and divisive work of reaching a compromise on actual net neutrality legislation. Despite how the CRA is presented in the popular media, a CRA vote is the furthest thing from a simple vote for net neutrality: It’s a political calculation to avoid accountability.

One of the hottest antitrust topics of late has been institutional investors’ “common ownership” of minority stakes in competing firms.  Writing in the Harvard Law Review, Einer Elhauge proclaimed that “[a]n economic blockbuster has recently been exposed”—namely, “[a] small group of institutions has acquired large shareholdings in horizontal competitors throughout our economy, causing them to compete less vigorously with each other.”  In the Antitrust Law Journal, Eric Posner, Fiona Scott Morton, and Glen Weyl contended that “the concentration of markets through large institutional investors is the major new antitrust challenge of our time.”  Those same authors took to the pages of the New York Times to argue that “[t]he great, but mostly unknown, antitrust story of our time is the astonishing rise of the institutional investor … and the challenge that it poses to market competition.”

Not surprisingly, these scholars have gone beyond just identifying a potential problem; they have also advocated policy solutions.  Elhauge has called for allowing government enforcers and private parties to use Section 7 of the Clayton Act, the provision primarily used to prevent anticompetitive mergers, to police institutional investors’ ownership of minority positions in competing firms.  Posner et al., concerned “that private litigation or unguided public litigation could cause problems because of the interactive nature of institutional holdings on competition,” have proposed that federal antitrust enforcers adopt an enforcement policy that would encourage institutional investors either to avoid common ownership of firms in concentrated industries or to limit their influence over such firms by refraining from voting their shares.

The position of these scholars is thus (1) that common ownership by institutional investors significantly diminishes competition in concentrated industries, and (2) that additional antitrust intervention—beyond generally applicable rules on, say, hub-and-spoke conspiracies and anticompetitive information exchanges—is appropriate to prevent competitive harm.

Mike Sykuta and I have recently posted a paper taking issue with this two-pronged view.  With respect to the first prong, we contend that there are serious problems with both the theory of competitive harm stemming from institutional investors’ common ownership and the empirical evidence that has been marshalled in support of that theory.  With respect to the second, we argue that even if competition were softened by institutional investors’ common ownership of small minority interests in competing firms, the unintended negative consequences of an antitrust fix would outweigh any benefits from such intervention.

Over the next few days, we plan to unpack some of the key arguments in our paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.  In the meantime, we encourage readers to download the paper and send us any comments.

The paper’s abstract is below the fold.

Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Cristina Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)


The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation or even the dissolution of some of the world’s most innovative companies essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firms’ ability to grow as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could both become more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? All of a sudden the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might even find, again, that we end up with even more concentration because the exceptions could subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, least of all on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived, endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. If there may be gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about these terms of service and about these data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica; and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections, or that when Cambridge Analytica started working for the Trump campaign it was tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchant, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms may delineate and allow users to allocate their privacy interests is, on the one hand, that competition among platforms would promote desirable outcomes and that, on the other hand, the relatively limited and monopolistic competition that we see among firms like Facebook is one of the reasons that consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook to either make users’ data exportable in a standardized form so that users could easily migrate it to other platforms or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.
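
To make the scale concern concrete, here is a minimal, purely hypothetical sketch (in TypeScript) of what a standardized “open graph” export surface might look like. Nothing here corresponds to an actual Facebook or third-party API; the endpoint, types, and function names are invented for illustration. The point is simply that once friends’ data rides along with each consenting user’s export, any authorized caller can assemble records on vast numbers of non-consenting users just by iterating the same call.

```typescript
// Hypothetical sketch only — these types, the endpoint, and the functions are
// invented for illustration and do not correspond to any real platform API.

interface ProfileRecord {
  userId: string;
  name: string;
  likes: string[];
}

interface GraphExport {
  owner: ProfileRecord;
  // Friends' data rides along with the consenting user's export.
  friends: ProfileRecord[];
}

// A stand-in for a standardized, platform-agnostic export endpoint.
async function exportUserGraph(accessToken: string, userId: string): Promise<GraphExport> {
  const res = await fetch(`https://graph.example.com/v1/users/${userId}/export`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`export failed: ${res.status}`);
  return (await res.json()) as GraphExport;
}

// With roughly 270,000 consenting users (as in the Cambridge Analytica episode),
// a third party can accumulate records on tens of millions of non-consenting
// friends simply by iterating the same authorized call.
async function harvest(tokens: Map<string, string>): Promise<Map<string, ProfileRecord>> {
  const collected = new Map<string, ProfileRecord>();
  for (const [userId, token] of tokens) {
    const graph = await exportUserGraph(token, userId);
    collected.set(graph.owner.userId, graph.owner);
    for (const friend of graph.friends) {
      collected.set(friend.userId, friend);
    }
  }
  return collected;
}
```

Under these assumptions, the design choice that matters is not the export format but the scope: an export that includes friends’ records converts each individual consent into bulk access, which is exactly the security problem discussed next.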

If there is one lesson from the past decade more trenchant than the lesson that delineating privacy rights is difficult, it is that data security is even harder.

These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only works to allow trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.

In January a Food and Drug Administration advisory panel, the Tobacco Products Scientific Advisory Committee (TPSAC), voted 8-1 that the weight of scientific evidence shows that switching from cigarettes to an innovative, non-combustible tobacco product such as Philip Morris International’s (PMI’s) IQOS system significantly reduces a user’s exposure to harmful or potentially harmful chemicals.

This finding should encourage the FDA to allow manufacturers to market smoke-free products as safer alternatives to cigarettes. But, perhaps predictably, the panel’s vote has incited a regulatory furor among certain politicians.

Last month, several United States senators, including Richard Blumenthal, Dick Durbin, and Elizabeth Warren, sent a letter to FDA Commissioner Scott Gottlieb urging the agency to

avoid rushing through new products, such as IQOS, … without requiring strong evidence that any such product will reduce the risk of disease, result in a large number of smokers quitting, and not increase youth tobacco use.

At the TPSAC meeting, nine members answered five multi-part questions about proposed marketing claims for the device. Taken as a whole, the panel’s votes indicate considerable agreement that non-combustible tobacco products like IQOS should, in fact, allay the senators’ concerns. And a closer look at the results reveals a much more nuanced outcome than either the letter or much of the media coverage has suggested.

“Reduce the risk of disease”: Despite the finding that IQOS reduces exposure to harmful chemicals, the panel nominally rejected a claim that it would reduce the risk of tobacco-related diseases. The panel’s objection, however, centered on the claim’s wording that IQOS “can reduce” risk, rather than “may reduce” risk. And, in the panel’s closest poll, it rejected by just a single vote the claim that “switching completely to IQOS presents less risk of harm than continuing to smoke cigarettes.”

“Result in large number of smokers quitting”: The panel unanimously concluded that PMI demonstrated a “low” likelihood that former smokers would re-initiate tobacco use with the IQOS system. The only options were “low,” “medium,” and “high.” This doesn’t mean it will necessarily help non-users quit in the first place, of course, but for smokers who do switch, it means the device helps them stay away from cigarettes.

“Not increase youth tobacco use”: A majority of the voting panel members agreed that PMI demonstrated a “low” likelihood that youth “never smokers” would become established IQOS users.

By definition, the long-term health benefits of innovative new products like IQOS are uncertain. But the cost of waiting for perfect information may be substantial.

It’s worth noting that the American Cancer Society recently shifted its position on electronic cigarettes, recommending that individuals who do not quit smoking

should be encouraged to switch to the least harmful form of tobacco product possible; switching to the exclusive use of e-cigarettes is preferable to continuing to smoke combustible products.

Dr. Nancy Rigotti agrees. A professor of medicine at Harvard and Director of the Tobacco Research and Treatment Center at Massachusetts General Hospital, Dr. Rigotti is a prominent tobacco-cessation researcher and an author of a February 2018 National Academies of Sciences, Engineering, and Medicine report that examined over 800 peer-reviewed scientific studies on the health effects of e-cigarettes. As she has said:

The field of tobacco control recognizes cessation is the goal, but if the patient can’t quit then I think we should look at harm reduction.

About her recent research, Dr. Rigotti noted:

I think the major takeaway is that although there’s a lot we don’t know, and although they have some health risks, [e-cigarettes] are clearly better than cigarettes….

Unlike the senators pushing the FDA to prohibit sales of non-combustible tobacco products, experts recognize that there is enormous value in these products: the reduction of imminent harm relative to the alternative.

Such harm-reduction strategies are commonplace, even when the benefits aren’t perfectly quantifiable. Bike helmet use is encouraged (or mandated) to reduce the risk and harm associated with bicycling. Schools distribute condoms to reduce teen pregnancy and sexually transmitted diseases. Local jurisdictions offer needle exchange programs to reduce the spread of AIDS and other infectious diseases; some offer supervised injection facilities to reduce the risk of overdose. Methadone and Suboxone are less-addictive opioids used to treat opioid use disorder.

In each of these instances, it is understood that the underlying, harmful behaviors will continue. But it is also understood that the welfare benefits from reducing the harmful effects of such behavior outweigh any gain that might be had from futile prohibition efforts.

By the same token — and seemingly missed by the senators urging an FDA ban on non-combustible tobacco technologies — constraints placed on healthier alternatives induce people, on the margin, to stick with the less-healthy option. Thus, many countries that have adopted age restrictions on their needle exchange programs and supervised injection facilities have seen predictably higher rates of infection and overdose among substance-using youth.

Under the Food, Drug & Cosmetic Act, in order to market “safer” tobacco products manufacturers must demonstrate that they would (1) significantly reduce harm and the risk of tobacco-related disease to individual tobacco users, and (2) benefit the health of the population as a whole. In addition, the Act limits the labeling and advertising claims that manufacturers can make on their products’ behalf.

These may be well-intentioned restraints, but overly strict interpretation of the rules can do far more harm than good.

In 2015, for example, the TPSAC expressed concerns about consumer confusion in an application to market “snus” (a smokeless tobacco product placed between the lip and gum) as a safer alternative to cigarettes. The manufacturer sought to replace the statement on snus packaging, “WARNING: This product is not a safe alternative to cigarettes,” with one reading, “WARNING: No tobacco product is safe, but this product presents substantially lower risks to health than cigarettes.”

The FDA denied the request, stating that the amended warning label “asserts a substantial reduction in risks, which may not accurately convey the risks of [snus] to consumers” — even though it agreed that snus “substantially reduce the risks of some, but not all, tobacco-related diseases.”

But under this line of reasoning, virtually no amount of net health benefits would merit approval of marketing language designed to encourage the use of less-harmful products as long as any risk remains. And yet consumers who refrain from using snus after reading the stronger warning might instead — and wrongly — view cigarettes as equally healthy (or healthier), precisely because of the warning. That can’t be sound policy if the aim is actually to reduce harm overall.

To be sure, there is a place for government to try to ensure accuracy in marketing based on health claims. But it is impossible for regulators to fine-tune marketing materials to convey the full range of truly relevant information for all consumers. And pressuring the FDA to limit the sale and marketing of smoke-free products as safer alternatives to cigarettes — in the face of scientific evidence that they would likely achieve significant harm-reduction goals — could do far more harm than good.

The cause of basing regulation on evidence-based empirical science (rather than mere negative publicity) – and of preventing regulatory interference with First Amendment commercial speech rights – got a judicial boost on February 26.

Specifically, in National Association of Wheat Growers et al. v. Zeise (Monsanto Case), a California federal district court judge preliminarily enjoined application against Monsanto of a labeling requirement imposed by a California regulatory law, Proposition 65.  Proposition 65 mandates that the Governor of California publish a list of chemicals known to the State to cause cancer, and also prohibits any person in the course of doing business from knowingly and intentionally exposing anyone to the listed chemicals without a prior “clear and reasonable” warning.  In this case, California sought to make Monsanto place warning labels on its popular Roundup weed killer products, stating that glyphosate, a widely-used herbicide and key Roundup ingredient, was known to cause cancer.  Monsanto, joined by various agribusiness entities, sued to enjoin California from taking that action.  Judge William Shubb concluded that there was insufficient evidence that the active ingredient in Roundup causes cancer, and that requiring Roundup to publish warning labels would violate Monsanto’s First Amendment rights by compelling it to engage in false and misleading speech.  Salient excerpts from Judge Shubb’s opinion are set forth below:

[When, as here, it compels commercial speech, in order to satisfy the First Amendment,] [t]he State has the burden of demonstrating that a disclosure requirement is purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest. . . .  The dispute in the present case is over whether the compelled disclosure is of purely factual and uncontroversial information. In this context, “uncontroversial” “refers to the factual accuracy of the compelled disclosure, not to its subjective impact on the audience.” [citation omitted]

 On the evidence before the court, the required warning for glyphosate does not appear to be factually accurate and uncontroversial because it conveys the message that glyphosate’s carcinogenicity is an undisputed fact, when almost all other regulators have concluded that there is insufficient evidence that glyphosate causes cancer. . . .

It is inherently misleading for a warning to state that a chemical is known to the state of California to cause cancer based on the finding of one organization [, the International Agency for Research on Cancer] (which as noted above, only found that substance is probably carcinogenic), when apparently all other regulatory and governmental bodies have found the opposite, including the EPA, which is one of the bodies California law expressly relies on in determining whether a chemical causes cancer. . . .  [H]ere, given the heavy weight of evidence in the record that glyphosate is not in fact known to cause cancer, the required warning is factually inaccurate and controversial. . . .

The court’s First Amendment inquiry here boils down to what the state of California can compel businesses to say. Whether Proposition 65’s statutory and regulatory scheme is good policy is not at issue. However, where California seeks to compel businesses to provide cancer warnings, the warnings must be factually accurate and not misleading. As applied to glyphosate, the required warnings are false and misleading. . . .

As plaintiffs have shown that they are likely to succeed on the merits of their First Amendment claim, are likely to suffer irreparable harm absent an injunction, and that the balance of equities and public interest favor an injunction, the court will grant plaintiffs’ request to enjoin Proposition 65’s warning requirement for glyphosate.

The Monsanto Case commendably highlights a little-appreciated threat of government overregulatory zeal.  Not only may excessive regulation fail a cost-benefit test and undermine private property rights, it may also violate the First Amendment speech rights of private actors when it compels inaccurate speech.  The negative economic consequences may be substantial when the government-mandated speech involves a claim about a technical topic that not only lacks empirical support (and thus may be characterized as “junk science”), but is deceptive and misleading (if not demonstrably false).  Deceptive and misleading speech in the commercial marketplace reduces marketplace efficiency and reduces social welfare (both consumer’s surplus and producer’s surplus).  In particular, it does this by deterring mutually beneficial transactions (for example, purchases of Roundup that would occur absent misleading labeling about cancer risks), generating suboptimal transactions (for example, purchases of inferior substitutes for Roundup due to misleading Roundup labeling), and distorting competition within the marketplace (the reallocation of market shares among Roundup and substitutes not subject to labeling).  The short-term static effects of such market distortions may be dwarfed by the dynamic effects, such as firms’ disincentives to invest in innovation in (or even participate in) markets subject to inaccurate information concerning the firms’ products or services.

In short, the Monsanto Case highlights the fact that government regulation not only imposes an implicit tax on business – it affirmatively distorts the workings of individual markets if it causes the introduction of misleading or deceptive information that is material to marketplace decision-making.  The threat of such distortive regulation may be substantial, especially in areas where regulators interact with “public interest clients” that have an incentive to demonize disfavored activities by private commercial actors – one example being the health and safety regulation of agricultural chemicals.  In those areas, there may be a case for federal preemption of state regulation, and for particularly close supervision of federal agencies to avoid economically inappropriate commercial speech mandates.  Stay tuned for future discussion of such potential legal reforms.

The terms of the United Kingdom’s (UK) exit from the European Union (EU) – “Brexit” – are of great significance not just to UK and EU citizens, but for those in the United States and around the world who value economic liberty (see my Heritage Foundation memorandum giving the reasons why, here).

If Brexit is to promote economic freedom and enhanced economic welfare, Brexit negotiations between the UK and the EU must not limit the ability of the United Kingdom to pursue (1) efficiency-enhancing regulatory reform and (2) trade liberalizing agreements with non-EU nations.  These points are expounded upon in a recent economic study (The Brexit Inflection Point) by the non-profit UK think tank the Legatum Institute, which has produced an impressive body of research on the benefits of Brexit, if implemented in a procompetitive, economically desirable fashion.  (As a matter of full disclosure, I am a member of Legatum’s “Special Trade Commission,” which “seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  Members of the Special Trade Commission are unpaid – they serve on a voluntary pro bono basis.)

Unfortunately, however, leading UK press commentators have urged the UK Government to accede to a full harmonization of UK domestic regulations and trade policy with the EU.  Such a deal would be disastrous.  It would prevent the UK from entering into mutually beneficial trade liberalization pacts with other nations or groups of nations (e.g., with the U.S. and with the members of the Transpacific Partnership (TPP) trade agreement), because such arrangements by necessity would lead to a divergence with EU trade strictures.  It would also preclude the UK from unilaterally reducing harmful regulatory burdens that are a byproduct of economically inefficient and excessive EU rules.  In short, it would be antithetical to economic freedom and economic welfare.

Notably, in a November 30 article (Six Impossible Notions About “Global Britain”), a well-known business journalist, Martin Wolf of the Financial Times, sharply criticized The Brexit Inflection Point’s recommendation that the UK should pursue trade and regulatory policies that would diverge from EU standards.  In particular, Wolf characterized as an “impossible thing” Legatum’s point that the UK should not “’allow itself to be bound by the EU’s negotiating mandate.’  We all now know this is infeasible.  The EU holds the cards and it knows it holds the cards. The Legatum authors still do not.”

Shanker Singham, Director of Economic Policy and Prosperity Studies at Legatum, brilliantly responded to Wolf’s critique in a December 4 article (published online by CAPX) entitled A Narrow-Minded Brexit Is Doomed to Fail.  Singham’s trenchant analysis merits being set forth in its entirety (by permission of the author):

“Last week, the Financial Times’s chief economics commentator, Martin Wolf, dedicated his column to criticising The Brexit Inflection Point, a report for the Legatum Institute in which Victoria Hewson, Radomir Tylecote and I discuss what would constitute a good end state for the UK as it seeks to exercise an independent trade and regulatory policy post Brexit, and how we get from here to there.

We write these reports to advance ideas that we think will help policymakers as they tackle the single biggest challenge this country has faced since the Second World War. We believe in a market place of ideas, and we welcome challenge. . . .

[W]e are thankful that Martin Wolf, an eminent economist, has chosen to engage with the substance of our arguments. However, his article misunderstands the nature of modern international trade negotiations, as well as the reality of the European Union’s regulatory system – and so his claim that, like the White Queen, we “believe in impossible things” simply doesn’t stack up.

Mr Wolf claims there are six impossible things that we argue. We will address his rebuttals in turn.

But first, in discussions about the UK’s trade policy, it is important to bear in mind that the British government is currently discussing the manner in which it will retake its independent WTO membership. This includes agricultural import quotas, and its WTO rectification processes with other WTO members.

If other countries believe that the UK will adopt the position of maintaining regulatory alignment with the EU, as advocated by Mr Wolf and others, the UK’s negotiating strategy would be substantially weaker. It would quite wrongly suggest that the UK will be unable to lower trade barriers and offer the kind of liberalisation that our trading partners seek and that would work best for the UK economy. This could negatively impact both the UK and the EU’s ongoing discussions in the WTO.

Has the EU’s trading system constrained growth in the World?

The first impossible thing Mr Wolf claims we argue is that the EU system of protectionism and harmonised regulation has constrained economic growth for Britain and the world. He is right to point out that the volume of world trade has increased, and the UK has, of course, experienced GDP growth while a member of the EU.

However, as our report points out, the EU’s prescriptive approach to regulation, especially in the recent past (for example, its approach on data protection, audio-visual regulation, the restrictive application of the precautionary principle, REACH chemicals regulation, and financial services regulations to name just a few) has led to an increase in anti-competitive regulation and market distortions that are wealth destructive.

As the OECD notes in various reports on regulatory reform, regulation can act as a behind-the-border barrier to trade and impede market openness for trade and investment. Inefficient regulation imposes unnecessary burdens on firms, increases barriers to entry, impacts on competition and incentives for innovation, and ultimately hurts productivity. The General Data Protection Regulation (GDPR) is an example of regulation that is disproportionate to its objectives; it is highly prescriptive and imposes substantial compliance costs for business that want to use data to innovate.

Rapid growth during the post-war period is in part thanks to the progressive elimination of border trade barriers. But, in terms of wealth creation, we are no longer growing at that rate. Since before the financial crisis, measures of actual wealth creation (not GDP which includes consumer and government spending) such as industrial output have stalled, and the number of behind-the-border regulatory barriers has been increasing.

The global trading system is in difficulty. The lack of negotiation of a global trade round since the Uruguay Round, the lack of serious services liberalisation in either the built-in agenda of the WTO or sectorally following on from the Basic Telecoms Agreement and its Reference Paper on Competition Safeguards in 1997 has led to an increase in behind-the-border barriers and anti-competitive distortions and regulation all over the world. This stasis in international trade negotiations is an important contributory factor to what many economists have talked about as a “new normal” of limited growth, and a global decline in innovation.

Meanwhile the EU has sought to force its regulatory system on the rest of the world (the GDPR is an example of this). If it succeeds, the result would be the kind of wealth destruction that pushes more people into poverty. It is against this backdrop that the UK is negotiating with both the EU and the rest of the world.

The question is whether an independent UK, the world’s sixth biggest economy and second biggest exporter of services, is able to contribute to improving the dynamics of the global economic architecture, which means further trade liberalisation. The EU is protectionist against outside countries, which is antithetical to the overall objectives of the WTO. This is true in agriculture and beyond. For example, the EU imposes tariffs on cars at four times the rate applied by the US, while another large auto manufacturing country, Japan, has unilaterally removed its auto tariffs.

In addition, the EU27 represents a declining share of UK exports, which is rather counter-intuitive for a Customs Union and single market. In 1999, the EU represented 55 per cent of UK exports, and by 2016, this was 43 per cent. That said, the EU will remain an important, albeit declining, market for the UK, which is why we advocate a comprehensive free trade agreement with it.

Can the UK secure meaningful regulatory recognition from the EU without being identical to it?

Second, Mr Wolf suggests that regulatory recognition between the UK and EU is possible only if there is harmonisation or identical regulation between the UK and EU.

This is at odds with WTO practice, stretching back to its rules on domestic laws and regulation as encapsulated in Article III of the GATT and Article VI of the GATS, and as expressed in the Technical Barriers to Trade (TBT) and Sanitary and Phytosanitary (SPS) agreements.

This is the critical issue. The direction of travel of international trade thinking is towards countries recognising each other’s regulatory systems if they achieve the same ultimate goal of regulation, even if the underlying regulation differs, and to regulate in ways that are least distortive to international trade and competition. There will be areas where this level of recognition will not be possible, in which case UK exports into the EU will of course have to satisfy the standards of the EU. But even here we can mitigate the trade costs to some extent by Mutual Recognition Agreements on conformity assessment and market surveillance.

Had the US taken the view that it would not receive regulatory recognition unless their regulatory systems were the same, the recent agreement on prudential measures in insurance and reinsurance services between the EU and US would not exist. In fact this point highlights the crucial issue which the UK must successfully negotiate, and one in which its interests are aligned with other countries and with the direction of travel of the WTO itself. The TBT and SPS agreements broadly provide that mutual recognition should not be denied where regulatory goals are aligned but technical regulation differs.

Global trade and regulatory policy increasingly looks for regulation that promotes competition. The EU is on a different track, as the GDPR demonstrates. This is the reason that both the Canada-EU agreement (CETA) and the EU offer in the Trade in Services agreement (TiSA) do not include new services. If GDPR were to become the global standard, trade in data would be severely constrained, slowing the development of big data solutions, the fourth industrial revolution, and new services trade generally.

As many firms recognise, this would be extremely damaging to global prosperity. In arguing that regulatory recognition is only available if the UK is fully harmonised with the EU, Mr Wolf may be in harmony with the EU approach to regulation. But that is exactly the approach that is damaging the global trading environment.

Can the UK exercise trade policy leadership?

Third, Mr Wolf suggests that other countries do not, and will not, look to the UK for trade leadership. He cites the US’s withdrawal from the trade negotiating space as an example. But surely the absence of the world’s biggest services exporter means that the world’s second biggest exporter of services will be expected to advocate for its own interests, and argue for greater services liberalisation.

Mr Wolf believes that the UK is a second-rank power in decline. We take a different view of the world’s sixth biggest economy, the financial capital of the world and the second biggest exporter of services. As former New Zealand High Commissioner, Sir Lockwood Smith, has said, the rest of the world does not see the UK as the UK too often seems to see itself.

The global companies that have their headquarters in the UK do not see things the same way as Mr Wolf. In fact, the lack of trade leadership since 1997 means that a country with significant services exports would be expected to show some leadership.

Mr Wolf’s point is that far from seeking to grandiosely lead global trade negotiations, the UK should stick to its current knitting, which consists of its WTO rectification, and includes the negotiation of its agricultural import quotas and production subsidies in agriculture. This is perhaps the most concerning part of his argument. Yes, the UK must rectify its tariff schedules, but for that process to be successful, especially on agricultural import quotas, it must be able to demonstrate to its partners that it will be able to grant further liberalisation in the near term future. If it can’t, then its trading partners will have no choice but to demand as much liberalisation as they can secure right now in the rectification process.

This will complicate that process, and cause damage to the UK as it takes up its independent WTO membership. Those WTO partners who see the UK as vulnerable on this point will no doubt see validation in Mr Wolf’s article and assume it means that no real liberalisation will be possible from the UK. The EU should note that complicating this process for the UK will not help the EU in its own WTO processes, where it is vulnerable.

Trade negotiations are dynamic not static and the UK must act quickly

Fourth, Mr Wolf suggests that the UK is not under time pressure to “escape from the EU”.  This statement does not account for how international trade negotiations work in practice. In order for countries to cooperate with the UK on its WTO rectification, and its TRQ negotiations, as well as to seriously negotiate with it, they have to believe that the UK will have control over tariff schedules and regulatory autonomy from day one of Brexit (even if we may choose not to make changes to it for an implementation period).

If non-EU countries think that the UK will not be able to exercise its freedom for several years, they will simply demand their pound of flesh in the negotiations now, and get on with the rest of their trade policy agenda. Trade negotiations are not static. The US executive could lose trade-negotiating authority in the summer of next year if the NAFTA renegotiation is not going well. Other countries will seek to accede to the Trans Pacific Partnership (TPP). China is moving forward with its Regional Comprehensive Economic Partnership, which does not meaningfully touch on domestic regulatory barriers. Much as we might criticise Donald Trump, his administration has expressed strong political will for a UK-US agreement, and in that regard has broken with traditional US trade policy thinking. The UK has an opportunity to strike and must take it.

The UK should prevail on the EU to allow Customs Agencies to be inter-operable from day one

Fifth, with respect to the challenges raised on customs agencies working together, our report argued that UK customs and the customs agencies of the EU member states should discuss customs arrangements at a practical and technical level now. What stands in the way of this is the EU’s stubbornness. Customs agencies are in regular contact on a business-as-usual basis, so the inability of UK and member-state customs agencies to talk to each other about the critical issue of new arrangements would seem to border on negligence. Of course, the EU should allow member states to have these critical conversations now.  Given the importance of customs agencies interoperating smoothly from day one, the UK Government must press its case with the European Commission to allow such conversations to start happening as a matter of urgency.

Does the EU hold all the cards?

Sixth, Mr Wolf argues that the EU holds all the cards and knows it holds all the cards, and therefore disagrees with our claim that the UK should “not allow itself to be bound by the EU’s negotiating mandate”. As with his other claims, Mr Wolf finds himself agreeing with the EU’s negotiators. But that does not make him right.

While the absence of a trade deal will of course damage UK industries, the cost to EU industries is also very significant. Beef and dairy in Ireland, cars and dairy in Bavaria, cars in Catalonia, textiles and dairy in Northern Italy – all over Europe, and in politically sensitive areas, industries stand to lose billions of euros and thousands of jobs. This is without considering the impact of the absence of a financial services deal, which would increase the cost of capital in the EU, aborting corporate transactions and raising supply-chain costs. The EU has chosen a mandate that risks neither party getting what it wants.

The notion that the EU is a masterful negotiator while the UK’s negotiators are hopeless is not the global view of the EU and the UK. Far from it. In international trade negotiations the EU has a reputation for being slow-moving, lacking in creative vision, and unable to conclude agreements. Indeed, others have generally gone to the UK when they have been met with intransigence in Brussels.

What do we do now?

Mr Wolf’s argument amounts to a claim that the UK is not capable of the kind of further and deeper liberalisation that its economy suggests is both possible and highly desirable, for the UK and for the rest of the world. According to Mr Wolf, the UK can only consign itself to a highly aligned regulatory orbit around the EU, unable to conclude other agreements and unable to influence the regulatory system around which it revolves, even as that system becomes ever more prescriptive and anti-competitive. Such a position is at odds with the facts; it would guarantee a poor result for the UK and cause opportunities to be lost for the rest of the world.

In all of our [Legatum Brexit-related] papers, we have started from the assumption that the British people have voted to leave the EU, and the government is implementing that outcome. We have then sought to produce policy recommendations based on what would constitute a good outcome as a result of that decision. This can be achieved only if we maximise the opportunities and minimise the disruptions.

We all recognise that the UK has embarked on a very difficult process. But there is a difference between difficult and impossible. There is also a difference between tasks that must be done and take time, and genuine negotiation points. We welcome the debate that comes from constructive challenge of our proposals; and we ask in turn that those who criticise us suggest alternative plans that might achieve positive outcomes. We look forward to the opportunity of a broader debate so that collectively the country can find the best path forward.”

 

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, perhaps nowhere has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to progressively reduce ex ante sector-specific regulation as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets in which national regulators were to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries because an operator held “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007, and in 2014 it further reduced the list to four (all wholesale) markets that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., one with authority over both competition and telecom), means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, including, e.g., Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
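(For reference, and as a gloss of my own rather than a description of the ACM’s specific methodology, the standard textbook formula being harmonized is

\[ \mathrm{WACC} = \frac{E}{V}\,r_E + \frac{D}{V}\,r_D\,(1-t), \]

where \(E\) and \(D\) are the market values of equity and debt, \(V = E + D\), \(r_E\) and \(r_D\) are the required returns on equity and debt, and \(t\) is the corporate tax rate. Because regulated prices typically build in an allowed return tied to the WACC, two regulators using different inputs can reach materially different answers for the same firm.)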

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Commission on Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, such as fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include a significantly lower risk of regulatory capture, achieved by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties among four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority did not take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator, New Zealand asserts that it can administer government operations more cost-effectively. Combining regulatory functions also creates spillover benefits: competition analysis is a prerequisite for sectoral regulation, for example, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable sector knowledge. Like the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decision-making in its implementation. Thus, in 2015, the European Parliament and the Council of the EU went against two decades of telecommunications best practice by implementing ex ante net neutrality regulations without hard evidence of widespread harm and without any competition analysis to justify the decision. The EU placed net neutrality under the universal service and users’ rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, competition law, grounded in economic principles, brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. Chorus, a wholesale ISP, states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing what information ISPs must disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, as well as practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.  

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, Google, asserting fair use and without a license from Yelp, displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [(1)] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.  

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of consumer protection acts (CPAs) is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a financial relationship with particular content providers.  Comcast, for example, could opt to speed up content from Hulu, which streams programming from Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).
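In stylized terms (my shorthand, not a formal model), such favoritism pays off for a vertically integrated ISP only if

\[ \Delta\pi_{\text{content}} > \left|\Delta\pi_{\text{distribution}}\right|, \]

that is, only if the extra profit earned in the content market by disadvantaging rivals exceeds the profit lost in the distribution market as subscribers defect to other ISPs. The more competitive the ISP market, the larger that second term becomes and the less often the condition will hold.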

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the  midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct) are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.
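To make the excerpt’s decision-cost/error-cost comparison concrete (this is my gloss, not notation from the book), antitrust is the better remedy whenever

\[ \underbrace{\left(p_{\text{per se}} - p_{\text{antitrust}}\right)H}_{\text{added error cost of per se condemnation}} \;>\; \underbrace{D_{\text{antitrust}} - D_{\text{per se}}}_{\text{decision-cost saving of a bright-line rule}}, \]

where \(p\) is the probability that a given network management practice is wrongly condemned or wrongly permitted under each regime, \(H\) is the welfare loss from such an error, and \(D\) is the cost of reaching a decision. Because most non-neutral network management turns out to be benign or beneficial, \(p_{\text{per se}}\) is high, and the left-hand side plausibly swamps the right, which is the book’s point.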

How to Regulate‘s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.