

GDPR is officially one year old. How have the first 12 months gone? As you can see from the mix of data and anecdotes below, it appears that compliance costs have been astronomical; individual “data rights” have led to unintended consequences; “privacy protection” seems to have undermined market competition; and there have been large unseen — but not unmeasurable! — costs in forgone startup investment. So, all in all, about what we expected.

GDPR cases and fines

Here is the latest data on cases and fines released by the European Data Protection Board:

  • €55,955,871 in fines
    • €50 million of which was a single fine on Google
  • 281,088 total cases
    • 144,376 complaints
    • 89,271 data breach notifications
    • 47,441 other
  • 37.0% ongoing
  • 62.9% closed
  • 0.1% appealed

Unintended consequences of new data privacy rights

GDPR can be thought of as a privacy “bill of rights.” Many of these new rights have come with unintended consequences. If your account gets hacked, the hacker can use the right of access to get all of your data. The right to be forgotten is in conflict with the public’s right to know a bad actor’s history (and many of them are using the right to memory hole their misdeeds). The right to data portability creates another attack vector for hackers to exploit. And the right to opt-out of data collection creates a free-rider problem where users who opt-in subsidize the privacy of those who opt-out.

Article 15: Right of access

  • “Amazon sent 1,700 Alexa voice recordings to the wrong user following data request” [The Verge / Nick Statt]
  • “Today I discovered an unfortunate consequence of GDPR: once someone hacks into your account, they can request—and potentially access—all of your data. Whoever hacked into my Spotify account got all of my streaming, song, etc. history simply by requesting it.” [Jean Yang]

Article 17: Right to be forgotten

  • “Since 2016, newspapers in Belgium and Italy have removed articles from their archives under [GDPR]. Google was also ordered last year to stop listing some search results, including information from 2014 about a Dutch doctor who The Guardian reported was suspended for poor care of a patient.” [NYT / Adam Satariano]
  • “French scam artist Michael Francois Bujaldon is using the GDPR to attempt to remove traces of his United States District Court case from the internet. He has already succeeded in compelling PacerMonitor to remove his case.” [PlainSite]
  • “In the last 5 days, we’ve had requests under GDPR to delete three separate articles … all about US lawsuits concerning scams committed by Europeans. That ‘right to be forgotten’ is working out just great, huh guys?” [Mike Masnick]

Article 20: Right to data portability

  • Data portability increases the attack surface for bad actors to exploit. In a sense, the Cambridge Analytica scandal was a case of too much data portability.
  • “The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?” [Stratechery / Ben Thompson]
  • “Presumably data portability would be imposed on Facebook’s competitors and potential competitors as well.  That would mean all future competing firms would have to slot their products into a Facebook-compatible template.  Let’s say that 17 years from now someone has a virtual reality social network innovation: does it have to be “exportable” into Facebook and other competitors?  It’s hard to think of any better way to stifle innovation.” [Marginal Revolution / Tyler Cowen]

Article 21: Right to opt out of data processing

  • “[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, these frameworks enable free riders—individuals that opt out but still expect the same services and price—and undercut access to free content and services.” [ITIF / Alan McQuinn and Daniel Castro]

Compliance costs are astronomical

  • Prior to GDPR going into effect, “PwC surveyed 200 companies with more than 500 employees and found that 68% planned on spending between $1 and $10 million to meet the regulation’s requirements. Another 9% planned to spend more than $10 million. With over 19,000 U.S. firms of this size, total GDPR compliance costs for this group could reach $150 billion.” [Fortune / Daniel Castro and Michael McLaughlin] (A back-of-the-envelope version of this arithmetic appears in the sketch after this list.)
  • “[T]he International Association of Privacy Professionals (IAPP) estimates 500,000 European organizations have registered data protection officers (DPOs) within the first year of the General Data Protection Regulation (GDPR). According to a recent IAPP salary survey, the average DPO’s salary in Europe is $88,000.” [IAPP]
  • As of March 20, 2019, 1,129 US news sites are still unavailable in the EU due to GDPR. [Joseph O’Connor]
  • Microsoft had 1,600 engineers working on GDPR compliance. [Microsoft]
  • During a Senate hearing, Keith Enright, Google’s chief privacy officer, estimated that the company spent “hundreds of years of human time” to comply with the new privacy rules. [Quartz / Ashley Rodriguez]
    • However, French authorities ultimately decided Google’s compliance efforts were insufficient: “France fines Google nearly $57 million for first major violation of new European privacy regime” [Washington Post / Tony Romm]
  • “About 220,000 name tags will be removed in Vienna by the end of [2018], the city’s housing authority said. Officials fear that they could otherwise be fined up to $23 million, or about $1,150 per name.” [Washington Post / Rick Noack]
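
For the curious, the $150 billion estimate above can be roughly reproduced with a back-of-the-envelope calculation. What follows is a minimal sketch, assuming that all 19,000 firms mirror the PwC sample and that each spends at the top of its reported bracket (assumptions that are ours, not the survey’s):

```python
# Back-of-the-envelope reconstruction of the ~$150B GDPR compliance estimate.
# Assumptions (ours, not PwC's): all 19,000 US firms of this size mirror the
# survey sample, and each spends at the top of its reported bracket.
FIRMS = 19_000

share_1_to_10m = 0.68          # planned to spend $1M-$10M
share_over_10m = 0.09          # planned to spend more than $10M

spend_1_to_10m = 10_000_000    # top of the $1M-$10M bracket
spend_over_10m = 10_000_000    # conservative floor for the ">$10M" group

total = FIRMS * (share_1_to_10m * spend_1_to_10m +
                 share_over_10m * spend_over_10m)
print(f"${total / 1e9:.0f} billion")  # -> $146 billion, near the cited $150B
```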

Tradeoff between privacy regulations and market competition

“On the big guys increasing market share? I don’t believe [the law] will have such a consequence.” Věra Jourová, the European Commissioner for Justice, Consumers and Gender Equality [WSJ / Sam Schechner and Nick Kostov]

“Mentioned GDPR to the head of a European media company. ‘Gift to Google and Facebook, enormous regulatory own-goal.'” [Benedict Evans]

  • “Hundreds of companies compete to place ads on webpages or collect data on their users, led by Google, Facebook and their subsidiaries. The European Union’s General Data Protection Regulation, which took effect in May, imposes stiff requirements on such firms and the websites who use them. After the rule took effect in May, Google’s tracking software appeared on slightly more websites, Facebook’s on 7% fewer, while the smallest companies suffered a 32% drop, according to Ghostery, which develops privacy-enhancing web technology.” [WSJ / Greg Ip]
  • Havas SA, one of the world’s largest buyers of ads, says it observed a low double-digit percentage increase in advertisers’ spending through DBM on Google’s own ad exchange on the first day the law went into effect, according to Hossein Houssaini, Havas’s global head of programmatic solutions. On the selling side, companies that help publishers sell ad inventory have seen declines in bids coming through their platforms from Google. Paris-based Smart says it has seen a roughly 50% drop. [WSJ / Nick Kostov and Sam Schechner]
  • “The consequence was that just hours after the law’s enforcement, numerous independent ad exchanges and other vendors watched their ad demand volumes drop between 20 and 40 percent. But with agencies free to still buy demand on Google’s marketplace, demand on AdX spiked. The fact that Google’s compliance strategy has ended up hurting its competitors and redirecting higher demand back to its own marketplace, where it can guarantee it has user consent, has unsettled publishers and ad tech vendors.” [Digiday / Jessica Davies]

Unseen costs of forgone investment & research

  • Startups: One study estimated that venture capital invested in EU startups fell by as much as 50 percent due to GDPR implementation: “Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR … We use our results to provide a back-of-the-envelope calculation of a range of job losses that may be incurred by these ventures, which we estimate to be between 3,604 to 29,819 jobs.” [NBER / Jian Jia, Ginger Zhe Jin, and Liad Wagman]
  • Mergers and acquisitions: “55% of respondents said they had worked on deals that fell apart because of concerns about a target company’s data protection policies and compliance with GDPR” [WSJ / Nina Trentmann]
  • Scientific research: “[B]iomedical researchers fear that the EU’s new General Data Protection Regulation (GDPR) will make it harder to share information across borders or outside their original research context.” [Politico / Sarah Wheaton]

GDPR graveyard

Small and medium-sized businesses (SMBs) have left the EU market in droves (or shut down entirely). Here is a partial list:

Apps & Services

  • CoinTouch, peer-to-peer cryptocurrency exchange
  • FamilyTreeDNA, free and public genetic tools
    • Mitosearch
    • Ysearch
  • Monal, XMPP chat app
  • Parity, know-your-customer service for initial coin offerings (ICOs)
  • Seznam, social network for students
  • StreetLend, tool sharing platform for neighbors

Marketing

  • Drawbridge, cross-device identity service
  • Klout, social reputation service by Lithium
  • Unroll.me, inbox management app
  • Verve, mobile programmatic advertising


In a recent NY Times opinion piece, Tim Wu, like Elizabeth Holmes, lionizes Steve Jobs. Like Jobs with the iPod and iPhone, and Holmes with the Theranos Edison machine, Wu tells us we must simplify the public’s experience of complex policy into a simple box with an intuitive interface. In this spirit he argues that “what the public wants from government is help with complexity,” such that “[t]his generation of progressives … must accept that simplicity and popularity are not a dumbing-down of policy.”

This argument provides remarkable insight into the complexity problems of progressive thought. Three of these are taken up below: the mismatch of comparing the work of the government to the success of Jobs; the mismatch between Wu’s telling of and Jobs’s actual success; and the latent hypocrisy in Wu’s “simplicity for me, complexity for thee” argument.

Contra Wu’s argument, we need politicians who embrace and lay bare the complexity of policy issues. Too much of our political moment is dominated by demagogues on every side of policy debates offering simple solutions to simplified accounts of complex policy issues. We need public intellectuals, and hopefully politicians as well, to make the case for complexity. Our problems are complex, and solutions to them are hard (and sometimes unavailing). Without leaders willing to steer into complexity, we can never have a polity able to address complexity.

I. “Good enough for government work” isn’t good enough for Jobs

As an initial matter, there is a great deal of wisdom in Wu’s recognition that the public doesn’t want complexity. As I said at the annual Silicon Flatirons conference in February, consumers don’t want a VCR with lots of dials and knobs that let them control lots of specific features—they just want the damn thing to work. And as that example is meant to highlight, once it does work, most consumers are happy to leave well enough alone (as demonstrated by millions of clocks that would continue to blink 12:00 if VCRs weren’t so 1990s).

Where Wu goes wrong, though, is that he fails to recognize that despite this desire for simplicity, for two decades VCR manufacturers designed and sold VCRs with clocks that were never set—a persistent blinking to constantly remind consumers of their own inadequacies. Had the manufacturers had any insight into the consumer desire for simplicity, all those clocks would have been used for something—anything—other than a reminder that consumers didn’t know how to set them. (Though, to their credit, these devices were designed to operate as most consumers desired without imposing any need to set the clock upon them—a model of simplicity in basic operation that allows consumers to opt-in to a more complex experience.)

If the government were populated by visionaries like Jobs, Wu’s prescription would be wise. But Jobs was a once-in-a-generation thinker. No one in a generation of VCR designers had the insight to design a VCR without a clock (or at least a clock that didn’t blink in a constant reminder of the owner’s inability to set it). And similarly few among the ranks of policy designers are likely to have his abilities, either. On the other hand, the public loves the promise of easy solutions to complex problems. Charlatans and demagogues who would cast themselves in his image, like Holmes did with Theranos, can find government posts in abundance.

Of course, in his paean to offering the public less choice, Wu, himself an oftentimes designer of government policy, compares the art of policy design to the work of Jobs—not of Holmes. But where he promises a government run in the manner of Apple, he would more likely give us one more in the mold of Theranos.

There is a more pernicious side to Wu’s argument. He speaks of respect for the public, arguing that “Real respect for the public involves appreciating what the public actually wants and needs,” and that “They would prefer that the government solve problems for them.” Another aspect of respect for the public is recognizing their fundamental competence—that progressive policy experts are not the only ones who are able to understand and address complexity. Most people never set their VCRs’ clocks because they felt no need to, not because they were unable to figure out how to do so. Most people choose not to master the intricacies of public policy. But this is not because the progressive expert class is uniquely able to do so. It is, as Wu notes, because most people do not have the unlimited time or attention that would be needed to do so—time and attention that is afforded to Wu by his social class.

Wu’s assertion that the public “would prefer that the government solve problems for them” carries echoes of Louis Brandeis, who famously said of consumers that they were “servile, self-indulgent, indolent, ignorant.” Such a view naturally gives rise to Wu’s assumption that the public wants the government to solve problems for them. It assumes that they are unable to solve those problems on their own.

But what Brandeis and progressives cast in his mold attribute to servile indolence is more often a reflection that hoi polloi simply do not have the same concerns as Wu’s progressive expert class. If they had the time to care about the issues Wu would devote his government to, they could likely address them on their own. The fact that they don’t is less a reflection of the public’s ability than of its priorities.

II. Jobs had no monopoly on simplicity

There is another aspect to Wu’s appeal to simplicity in design that is, again, captured well in his invocation of Steve Jobs. Jobs was exceptionally successful with his minimalist, simple designs. He made a fortune for himself and more for Apple. His ideas made Apple one of the most successful companies, with one of the largest user bases, in the history of the world.

Yet many people hate Apple products. Some of these users prefer to have more complex, customizable devices—perhaps because they have particularized needs or perhaps simply because they enjoy having that additional control over how their devices operate and the feeling of ownership that it brings. Some users might dislike Apple products because the interface that is “intuitive” to millions of others is not at all intuitive to them. As trivial as it sounds, most PC users are accustomed to two-button mice—transitioning to Apple’s one-button mouse is exceptionally discomfiting for many of these users. (In fairness, the one-button mouse design used by Apple products is not attributable to Steve Jobs.) And other users still might prefer devices that are simple in other ways, so are drawn to other products that better cater to their precise needs.

Apple has, perhaps, experienced periods of market dominance with specific products. But this has never been durable—Apple has always faced competition. And this has ensured that those parts of the public that were not well-served by Jobs’s design choices were not bound to use them—they always had alternatives.

Indeed, that is the redeeming aspect of the Theranos story: the market did what it was supposed to. While too many consumers may have been harmed by Holmes’ charlatan business practices, the reality is that once she was forced to bring the company’s product to market it was quickly outed as a failure.

This is how the market works. Companies that design good products, like Apple, are rewarded; other companies then step in to compete by offering yet better products or by addressing other segments of the market. Some of those companies succeed; most, like Theranos, fail.

This dynamic simply does not exist with government. Government is a policy monopolist. A simplified, streamlined, policy that effectively serves half the population does not effectively serve the other half. There is no alternative government that will offer competing policy designs. And to the extent that a given policy serves part of the public better than others, it creates winners and losers.

Of course, the right response to the inadequacy of Wu’s call for more, less complex policy is not that we need more, more complex policy. Rather, it’s that we need less policy—at least policy being dictated and implemented by the government. This is one of the stalwart arguments we free market and classical liberal types offer in favor of market economies: they are able to offer a wider range of goods and services that better cater to a wider range of needs of a wider range of people than the government. The reason policy grows complex is because it is trying to address complex problems; and when it fails to address those problems on a first cut, the solution is more often than not to build “patch” fixes on top of the failed policies. The result is an ever-growing book of rules bound together with voluminous “kludges” that is forever out-of-step with the changing realities of a complex, dynamic world.

The solution to so much complexity is not to sweep it under the carpet in the interest of offering simpler, but only partial, solutions catered to the needs of an anointed subset of the public. The solution is to find better ways to address those complex problems—and oftentimes it’s simply the case that the market is better suited to such solutions.

III. A complexity: What does Wu think of consumer protection?

There is a final, and perhaps most troubling, aspect to Wu’s argument. He argues that respect for the public does not require “offering complete transparency and a multiplicity of choices.” Yet that is what he demands of business. As an academic and government official, Wu has been a loud and consistent consumer protection advocate, arguing that consumers are harmed when firms fail to provide transparency and choice—and that the government must use its coercive power to ensure that they do so.

Wu derives his insight that simpler-design-can-be-better-design from the success of Jobs—and recognizes more broadly that the consumer experience of products of the technological revolution (perhaps one could even call it the tech industry) is much better today because of this simplicity than it was in earlier times. Consumers, in other words, can be better off with firms that offer less transparency and choice. This, of course, is intuitive when one recognizes (as Wu has) that time and attention are among the scarcest of resources.

Steve Jobs and Elizabeth Holmes both understood that the avoidance of complexity and minimizing of choices are hallmarks of good design. Jobs built an empire around this; Holmes cost investors hundreds of millions of dollars in her failed pursuit. But while Holmes failed where Jobs succeeded, her failure was not tragic: Theranos was never the only medical testing laboratory in the market and, indeed, was never more than a bit player in that market. For every Apple that thrives, the marketplace erases a hundred Theranoses. But we do not have a market of governments. Wu’s call for policy to be more like Apple is a call for most government policy to fail like Theranos. Perhaps where the challenge is to do more complex policy simply, the simpler solution is to do less, but simpler, policy well.

Conclusion

We need less dumbing down of complex policy in the interest of simplicity; and we need leaders who are able to make citizens comfortable with and understanding of complexity. Wu is right that good policy need not be complex. But the lesson from that is not that complex policy should be made simple. Rather, the lesson is that policy that cannot be made simple may not be good policy after all.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.


The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well, however, to be wary of examples from other jurisdictions that are rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals’ rights to free expression in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value these companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which, if implemented in the US, would contravene Americans’ First Amendment guarantees of free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

Thus, the ACCC recognizes that businesses may work to beat out their rivals and thereby gain market share. However, this is immediately followed by the caveat that the state may prevent such activity when market gains are merely “at risk” of coming at the expense of consumers or business rivals. In other words, the ACCC does not need to show that harm has been done, merely that it might take place — even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesized concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. Yet this would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality—and utility—of these services to users.

A second, and more troubling, prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and to override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one of strong harms to both social welfare and the rule of law, and it should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call any law as ineffective as these “good”. And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-of issue at both the FCC and FTC. Today, there are well over 4 billion robocalls made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1991, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. We were still in a world of analog telephones, and Caller ID was a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare, and most of these calls came to landline phones during dinner, where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.
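
The arithmetic behind that “drop in the ocean” claim is easy to sketch. The figures below are round hypotheticals built from the orders of magnitude in the text, not a specific enforcement action:

```python
# Illustrative per-call deterrence math; round hypotheticals, not a real case.
monthly_robocalls = 4_000_000_000  # "over 4 billion of these calls every month"
fine = 25_000_000                  # a headline fine in the tens of millions
calls_by_target = 100_000_000      # calls attributed to the fined firm

print(f"${fine / calls_by_target:.2f} expected penalty per call")        # $0.25
print(f"{calls_by_target / monthly_robocalls:.1%} of one month's calls")  # 2.5%
```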

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny, on the basis that it was a regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for trying to do things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vainglorious and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, it is their principal duty to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).
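
To make that last idea concrete, here is a minimal sketch of how authenticated Caller ID works, loosely modeled on the signed-token approach (STIR/SHAKEN) now being deployed: the originating carrier signs the call metadata it can vouch for, and the terminating carrier verifies the signature before trusting the displayed number. The shared HMAC key below is a deliberate simplification (real deployments use carrier certificates and ECDSA-signed PASSporT tokens), and all names and numbers are hypothetical:

```python
# Minimal illustration of cryptographically attested Caller ID. Real
# STIR/SHAKEN uses certificates and ECDSA-signed JWTs; the shared HMAC key
# here is a simplification for the sake of a self-contained example.
import hashlib
import hmac
import json
import time

CARRIER_KEY = b"originating-carrier-secret"  # hypothetical demo key

def attest_call(orig: str, dest: str) -> dict:
    """Originating carrier signs the call metadata it can vouch for."""
    claims = {"orig": orig, "dest": dest, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": signature}

def verify_call(token: dict) -> bool:
    """Terminating carrier recomputes the signature before trusting the number."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = attest_call(orig="+12025551234", dest="+13035556789")
print(verify_call(token))   # True: the Caller ID is vouched for

token["claims"]["orig"] = "+19995550000"  # a spoofer alters the number
print(verify_call(token))   # False: spoofed Caller ID fails verification
```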

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.


This has been a big year for business in the courts. A U.S. district court approved the AT&T-Time Warner merger, the Supreme Court upheld Amex’s agreements with merchants, and a circuit court pushed back on the Federal Trade Commission’s vague and heavy-handed policing of companies’ consumer data safeguards.

These three decisions mark a new era in the intersection of law and economics.

AT&T-Time Warner

AT&T-Time Warner is a vertical merger, a combination of firms with a buyer-seller relationship. Time Warner creates and broadcasts content via outlets such as HBO, CNN, and TNT. AT&T distributes content via services such as DirecTV.

Economists see little risk to competition from vertical mergers, although there are some idiosyncratic circumstances in which competition could be harmed. Nevertheless, the U.S. Department of Justice went to court to block the merger.

The last time the government sued to block a vertical merger was more than 40 years ago, and the government lost. Since then, the government has relied on the threat of litigation to extract settlements from the merging parties. For example, in the 1996 merger between Time Warner and Turner, the FTC required limits on how the new company could bundle HBO with less desirable channels and eliminated agreements that allowed TCI (a cable company that partially owned Turner) to carry Turner channels at preferential rates.

With AT&T-Time Warner, the government took a big risk, and lost. It was a big risk because (1) it’s a vertical merger, and (2) the case against the merger was weak. The government’s expert argued consumers would face an extra 45 cents a month on their cable bills if the merger went through, but under cross-examination, conceded it might be as little as 13 cents a month. That’s a big difference and raised big questions about the reliability of the expert’s model.

Judge Richard J. Leon’s 170+ page ruling agreed that the government’s case was weak and its expert was not credible. While it’s easy to cheer a victory of big business over big government, the real victory was the judge’s heavy reliance on facts, data, and analysis rather than speculation over the potential for consumer harm. That’s a big deal and may pave the way for more vertical mergers.

Ohio v. American Express

The Supreme Court’s ruling in Amex may seem obscure. The court backed American Express Co.’s policy of preventing retailers from offering customers incentives to pay with cheaper cards.

Amex charges higher fees to merchants than do other cards, such as Visa, MasterCard, and Discover. Amex cardholders also have higher incomes and tend to spend more at stores than those associated with other networks. And Amex offers its cardholders better benefits, services, and rewards than the other cards. Merchants don’t like Amex because of the higher fees; customers prefer Amex because of the card’s perks.

Amex, and other card companies, operate in what is known as a two-sided market. Put simply, they have two sets of customers: merchants who pay swipe fees, and consumers who pay fees and interest.

Part of Amex’s agreement with merchants is an “anti-steering” provision that bars merchants from offering discounts for using non-Amex cards. The U.S. Justice Department and a group of states sued the company, alleging the Amex rules limited merchants’ ability to reduce their costs from accepting credit cards, which meant higher retail prices. Amex argued that the higher prices charged to merchants were kicked back to its cardholders in the form of more and better perks.
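
A stylized numerical example illustrates the two-sided logic; the fee and rebate levels below are hypothetical round numbers, not Amex’s actual terms:

```python
# Stylized two-sided pricing example. All fee and rebate levels are
# hypothetical round numbers, not Amex's (or any network's) actual terms.
purchase = 100.00

# Network A: high merchant fee, generous cardholder rewards
fee_a, rebate_a = 0.03, 0.015
merchant_cost_a = purchase * fee_a           # merchant pays $3.00
cardholder_rebate_a = purchase * rebate_a    # cardholder gets $1.50 back

# Network B: lower merchant fee, no rewards
fee_b, rebate_b = 0.02, 0.0
merchant_cost_b = purchase * fee_b           # merchant pays $2.00
cardholder_rebate_b = purchase * rebate_b    # cardholder gets $0.00 back

# One-sided view (merchant fees only): Network A looks more expensive.
print(f"merchant side only: A=${merchant_cost_a:.2f}, B=${merchant_cost_b:.2f}")

# Two-sided view (netted across both customer groups): A is cheaper.
net_a = merchant_cost_a - cardholder_rebate_a
net_b = merchant_cost_b - cardholder_rebate_b
print(f"both sides netted:  A=${net_a:.2f}, B=${net_b:.2f}")  # A=$1.50, B=$2.00
```

Judged by merchant fees alone, the high-fee network looks like the expensive one; netted across both sides, it is the cheaper network per transaction. That inversion is why analyzing only one side of the market can mislead.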

The Supreme Court found that the Justice Department and states focused exclusively on one side (merchant fees) of the two-sided market. The court said the government can’t meet its burden by showing some effect on some part of the market. Instead, it must demonstrate an “increased cost of credit card transactions … reduced number of credit card transactions, or otherwise stifled competition.” The government could not prove any of those things.

We live in a world of two-sided markets. Amazon may be the biggest two-sided market in the history of the world, linking buyers and sellers. Smartphones such as iPhones and Android devices are two-sided markets, linking consumers with app developers. The Supreme Court’s ruling in Amex sets a standard for how antitrust law should treat the economics of two-sided markets.

LabMD

LabMD is another matter that seems obscure, but could have big impacts on the administrative state.

Since the early 2000s, the FTC has brought charges against more than 150 companies alleging they had bad security or privacy practices. LabMD was one of them; its computer system was compromised by professional hackers in 2008. The FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.

Challenging the FTC can get very expensive, and the agency used the threat of litigation to secure settlements from dozens of companies. It then used those settlements to convince everyone else that they constituted binding law and enforceable security standards.

Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.

The 11th Circuit Court of Appeals ruled the FTC’s approach to developing security standards violates basic principles of due process. The court said the FTC’s basic approach—in which the FTC tries to improve general security practices by suing companies that experience security breaches—violates the basic legal principle that the government can’t punish someone for conduct that the government hasn’t previously explained is problematic.

My colleague at ICLE observes that the lesson to learn from LabMD isn’t about the illegitimacy of the FTC’s approach to internet privacy and security. Instead, it is that the legality of the administrative state is premised on courts placing a check on abusive regulators.

The lessons learned from these three recent cases reflect a profound shift in thinking about the laws governing economic activity:

  • AT&T-Time Warner indicates that facts matter. Mere speculation of potential harms will not satisfy the court.
  • Amex highlights the growing role two-sided markets play in our economy and provides a framework for evaluating competition in these markets.
  • LabMD is a small step in reining in the administrative state. Regulations must be scrutinized before they are imposed and enforced.

In some ways none of these decisions are revolutionary. Instead, they reflect an evolution toward greater transparency in how the law is to be applied and greater scrutiny over how the regulations are imposed.


The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke against the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from it being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.”  This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’ authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is “likely to cause” harm.

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something driven by public policy, and not so much like the mere enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does not well describe how to make that distinction. The opinion is more than a bit muddled and difficult to clearly interpret. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, this is the guidance that the court has given us.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, they can hardly provide guidance for private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II, 70 years earlier, are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without citing Chenery II: these are, after all, bedrock principles of administrative law.

The principles set out in Chenery II, of course, do not answer the data-security law question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court sidestepping this question, and asking whether the FTC sufficiently defined what it was doing in the first place.  

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected, and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about these terms of service and of these data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica; and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections, or that when Cambridge Analytica started working for the Trump campaign it was tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchant, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms may delineate and allow users to allocate their privacy interests is that competition among platforms would promote desirable outcomes, and that the relatively limited and monopolistic competition that we see among firms like Facebook is one of the reasons that consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook to either make users’ data exportable in a standardized form so that users could easily migrate it to other platforms or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.

If there is one lesson from the past decade more trenchant than the lesson that delineating privacy rights is difficult, it is that data security is even harder.

These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only works to allow trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.
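To make that structural point concrete, here is a minimal sketch, in Python, of what a mandated bulk-export endpoint might look like. Everything in it (the function name, the token scheme, the sample data) is invented purely for illustration; it describes no actual platform’s API:

```python
# Hypothetical sketch of an "open graph" bulk-export endpoint.
# Every name here (THIRD_PARTY_TOKENS, export_user_graph, the sample
# data) is invented for illustration; none of it is a real platform API.

THIRD_PARTY_TOKENS = {"token-abc": "FunQuizApp"}  # credentialed third parties

USERS = {
    "alice": {"likes": ["rallies", "concerts"], "friends": ["bob", "carol"]},
    "bob": {"likes": ["cycling"], "friends": ["alice"]},
    "carol": {"likes": ["chess"], "friends": ["alice"]},
}

def export_user_graph(token: str, user_id: str) -> dict:
    """Export one consenting user's data, plus whatever their profile
    exposes about friends who never consented to anything."""
    if token not in THIRD_PARTY_TOKENS:
        raise PermissionError("unrecognized third party")
    user = USERS[user_id]
    # The friend data rides along with the consenting user's export:
    friends = {name: USERS[name]["likes"] for name in user["friends"]}
    return {"user": user_id, "likes": user["likes"], "friends": friends}

# One quiz-taker's export discloses data on two non-consenting friends.
print(export_user_graph("token-abc", "alice"))
```

Even in toy form the problem is visible: one consenting user’s export necessarily carries data about friends who consented to nothing, and every additional credentialed token is another copy of that data living outside the platform’s control.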

In January a Food and Drug Administration advisory panel, the Tobacco Products Scientific Advisory Committee (TPSAC), voted 8-1 that the weight of scientific evidence shows that switching from cigarettes to an innovative, non-combustible tobacco product such as Philip Morris International’s (PMI’s) IQOS system significantly reduces a user’s exposure to harmful or potentially harmful chemicals.

This finding should encourage the FDA to allow manufacturers to market smoke-free products as safer alternatives to cigarettes. But, perhaps predictably, the panel’s vote has incited a regulatory furor among certain politicians.

Last month, several United States senators, including Richard Blumenthal, Dick Durbin, and Elizabeth Warren, sent a letter to FDA Commissioner Scott Gottlieb urging the agency to

avoid rushing through new products, such as IQOS, … without requiring strong evidence that any such product will reduce the risk of disease, result in a large number of smokers quitting, and not increase youth tobacco use.

At the TPSAC meeting, nine members answered five multi-part questions about proposed marketing claims for the device. Taken as a whole, the panel’s votes indicate considerable agreement that non-combustible tobacco products like IQOS should, in fact, allay the senators’ concerns. And a closer look at the results reveals a much more nuanced outcome than either the letter or much of the media coverage has suggested.

“Reduce the risk of disease”: Despite the finding that IQOS reduces exposure to harmful chemicals, the panel nominally rejected a claim that it would reduce the risk of tobacco-related diseases. The panel’s objection, however, centered on the claim’s wording that IQOS “can reduce” risk, rather than “may reduce” risk. And, in the panel’s closest poll, it rejected by just a single vote the claim that “switching completely to IQOS presents less risk of harm than continuing to smoke cigarettes.”

“Result in large number of smokers quitting”: The panel unanimously concluded that PMI demonstrated a “low” likelihood that former smokers would re-initiate tobacco use with the IQOS system. The only options were “low,” “medium,” and “high.” This doesn’t mean it will necessarily help non-users quit in the first place, of course, but for smokers who do switch, it means the device helps them stay away from cigarettes.

“Not increase youth tobacco use”: A majority of the voting panel members agreed that PMI demonstrated a “low” likelihood that youth “never smokers” would become established IQOS users.

By definition, the long-term health benefits of innovative new products like IQOS are uncertain. But the cost of waiting for perfect information may be substantial.

It’s worth noting that the American Cancer Society recently shifted its position on electronic cigarettes, recommending that individuals who do not quit smoking

should be encouraged to switch to the least harmful form of tobacco product possible; switching to the exclusive use of e-cigarettes is preferable to continuing to smoke combustible products.

Dr. Nancy Rigotti agrees. A professor of medicine at Harvard and Director of the Tobacco Research and Treatment Center at Massachusetts General Hospital, Dr. Rigotti is a prominent tobacco-cessation researcher and the author of a February 2018 National Academies of Science, Engineering, and Medicine Report that examined over 800 peer-reviewed scientific studies on the health effects of e-cigarettes. As she has said:

The field of tobacco control recognizes cessation is the goal, but if the patient can’t quit then I think we should look at harm reduction.

About her recent research, Dr. Rigotti noted:

I think the major takeaway is that although there’s a lot we don’t know, and although they have some health risks, [e-cigarettes] are clearly better than cigarettes….

Unlike the senators pushing the FDA to prohibit sales of non-combustible tobacco products, experts recognize that there is enormous value in these products: the reduction of imminent harm relative to the alternative.

Such harm-reduction strategies are commonplace, even when the benefits aren’t perfectly quantifiable. Bike helmet use is encouraged (or mandated) to reduce the risk and harm associated with bicycling. Schools distribute condoms to reduce teen pregnancy and sexually transmitted diseases. Local jurisdictions offer needle exchange programs to reduce the spread of AIDS and other infectious diseases; some offer supervised injection facilities to reduce the risk of overdose. Methadone and Suboxone are less-addictive opioids used to treat opioid use disorder.

In each of these instances, it is understood that the underlying, harmful behaviors will continue. But it is also understood that the welfare benefits from reducing the harmful effects of such behavior outweigh any gain that might be had from futile prohibition efforts.

By the same token — and seemingly missed by the senators urging an FDA ban on non-combustible tobacco technologies — constraints placed on healthier alternatives induce people, on the margin, to stick with the less-healthy option. Thus, many countries that have adopted age restrictions on their needle exchange programs and supervised injection facilities have seen predictably higher rates of infection and overdose among substance-using youth.

Under the Food, Drug & Cosmetic Act, in order to market “safer” tobacco products, manufacturers must demonstrate that they would (1) significantly reduce harm and the risk of tobacco-related disease to individual tobacco users, and (2) benefit the health of the population as a whole. In addition, the Act limits the labeling and advertising claims that manufacturers can make on their products’ behalf.

These may be well-intentioned restraints, but overly strict interpretation of the rules can do far more harm than good.

In 2015, for example, the TPSAC expressed concerns about consumer confusion in an application to market “snus” (a smokeless tobacco product placed between the lip and gum) as a safer alternative to cigarettes. The manufacturer sought to replace the statement on snus packaging, “WARNING: This product is not a safe alternative to cigarettes,” with one reading, “WARNING: No tobacco product is safe, but this product presents substantially lower risks to health than cigarettes.”

The FDA denied the request, stating that the amended warning label “asserts a substantial reduction in risks, which may not accurately convey the risks of [snus] to consumers” — even though it agreed that snus “substantially reduce the risks of some, but not all, tobacco-related diseases.”

But under this line of reasoning, virtually no amount of net health benefits would merit approval of marketing language designed to encourage the use of less-harmful products as long as any risk remains. And yet consumers who refrain from using snus after reading the stronger warning might instead — and wrongly — view cigarettes as equally healthy (or healthier), precisely because of the warning. That can’t be sound policy if the aim is actually to reduce harm overall.

To be sure, there is a place for government to try to ensure accuracy in marketing based on health claims. But it is impossible for regulators to fine-tune marketing materials to convey the full range of truly relevant information for all consumers. And pressuring the FDA to limit the sale and marketing of smoke-free products as safer alternatives to cigarettes — in the face of scientific evidence that they would likely achieve significant harm-reduction goals — could do far more harm than good.

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, nowhere perhaps has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets for national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, the number was further reduced to four markets, all wholesale markets, that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, including, e.g., Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties between four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify its decision. The EU placed net neutrality under the universal service and user’s rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, it is competition law, grounded in economic principles, that brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under the context of existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. The wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

As the Federal Communications Commission (FCC) prepares to revoke its economically harmful “net neutrality” order and replace it with a free market-oriented “Restoring Internet Freedom Order,” the FCC and the Federal Trade Commission (FTC) commendably have announced a joint policy for cooperation on online consumer protection. According to a December 11 FTC press release:

The Federal Trade Commission and Federal Communications Commission (FCC) announced their intent to enter into a Memorandum of Understanding (MOU) under which the two agencies would coordinate online consumer protection efforts following the adoption of the Restoring Internet Freedom Order.

“The Memorandum of Understanding will be a critical benefit for online consumers because it outlines the robust process by which the FCC and FTC will safeguard the public interest,” said FCC Chairman Ajit Pai. “Instead of saddling the Internet with heavy-handed regulations, we will work together to take targeted action against bad actors. This approach protected a free and open Internet for many years prior to the FCC’s 2015 Title II Order and it will once again following the adoption of the Restoring Internet Freedom Order.”

“The FTC is committed to ensuring that Internet service providers live up to the promises they make to consumers,” said Acting FTC Chairman Maureen K. Ohlhausen. “The MOU we are developing with the FCC, in addition to the decades of FTC law enforcement experience in this area, will help us carry out this important work.”

The draft MOU, which is being released today, outlines a number of ways in which the FCC and FTC will work together to protect consumers, including:

The FCC will review informal complaints concerning the compliance of Internet service providers (ISPs) with the disclosure obligations set forth in the new transparency rule. Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization, and congestion management. Should an ISP fail to make the required disclosures—either in whole or in part—the FCC will take enforcement action.

The FTC will investigate and take enforcement action as appropriate against ISPs concerning the accuracy of those disclosures, as well as other deceptive or unfair acts or practices involving their broadband services.

The FCC and the FTC will broadly share legal and technical expertise, including the secure sharing of informal complaints regarding the subject matter of the Restoring Internet Freedom Order. The two agencies also will collaborate on consumer and industry outreach and education.

The FCC’s proposed Restoring Internet Freedom Order, which the agency is expected to vote on at its December 14 meeting, would reverse a 2015 agency decision to reclassify broadband Internet access service as a Title II common carrier service. This previous decision stripped the FTC of its authority to protect consumers and promote competition with respect to Internet service providers because the FTC does not have jurisdiction over common carrier activities.

The FCC’s Restoring Internet Freedom Order would return jurisdiction to the FTC to police the conduct of ISPs, including with respect to their privacy practices. Once adopted, the order will also require broadband Internet access service providers to disclose their network management practices, performance, and commercial terms of service. As the nation’s top consumer protection agency, the FTC will be responsible for holding these providers to the promises they make to consumers.

Particularly noteworthy is the suggestion that the FCC and FTC will work to curb regulatory duplication and competitive empire building – a boon to Internet-related businesses that would be harmed by regulatory excess and uncertainty.  Stay tuned for future developments.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.  

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year-long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, and without a license from Yelp (asserting fair use), Google displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to (1) address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.  

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of CPAs is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had restarted scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging complaints with every regulator who might listen, rather than by notifying Google directly. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.

Google, for its part, strongly disputes Yelp’s allegation and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages, beginning 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform it.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech populist resurgence may help the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat, and it may serve Yelp, but it offers no clear, actual benefit to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”