
Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship on multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out a convincing case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all-powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research demonstrating that ISPs, thanks to increasing encryption, have access to no better — and probably worse — data than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in more advantageous places. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when mined from very large, undifferentiated data sets, and are better derived by individual firms answering particular questions about their own users and products.
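
The “noise” point is easy to demonstrate. Below is a minimal, self-contained simulation (purely synthetic data; not any party’s actual methodology) showing that as a data set grows wider, the strongest observed correlation between an outcome and some feature grows with it, even when every feature is pure noise:

```ts
// Toy demonstration that "more data" can mean more spurious signal.
// All features are random noise, yet the best observed correlation
// with the outcome climbs as the number of features grows.

function correlation(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

const users = 100;
const outcome = Array.from({ length: users }, () => Math.random());

for (const width of [10, 1_000, 100_000]) {
  let best = 0;
  for (let f = 0; f < width; f++) {
    const feature = Array.from({ length: users }, () => Math.random());
    best = Math.max(best, Math.abs(correlation(feature, outcome)));
  }
  console.log(`${width} noise features -> best |r| = ${best.toFixed(2)}`);
}
```

With ten noise features the best correlation is modest; with a hundred thousand, the simulation reliably “finds” features that correlate sizably with the outcome despite containing no information at all — which is exactly why sheer volume is a poor proxy for insight.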

Our full reply comments are available here.

Last week the International Center for Law & Economics filed comments on the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As we note in our comments:

The Commission’s NPRM would shoehorn the business models of a subset of new economy firms into a regime modeled on thirty-year-old CPNI rules designed to address fundamentally different concerns about a fundamentally different market. The Commission’s hurried and poorly supported NPRM demonstrates little understanding of the data markets it proposes to regulate and the position of ISPs within that market. And, what’s more, the resulting proposed rules diverge from analogous rules the Commission purports to emulate. Without mounting a convincing case for treating ISPs differently than the other data firms with which they do or could compete, the rules contemplate disparate regulatory treatment that would likely harm competition and innovation without evident corresponding benefit to consumers.

In particular, we focus on the FCC’s failure to justify treating ISPs differently than other competitors, and its failure to justify more stringent treatment for ISPs in general:

In short, the Commission has not made a convincing case that discrimination between ISPs and edge providers makes sense for the industry or for consumer welfare. The overwhelming body of evidence upon which other regulators have relied in addressing privacy concerns urges against a hard opt-in approach. That same evidence and analysis supports a consistent regulatory approach for all competitors, and nowhere advocates for a differential approach for ISPs when they are participating in the broader informatics and advertising markets.

With respect to the proposed opt-in regime, the NPRM ignores the weight of economic evidence on opt-in rules and fails to justify the specific rules it prescribes. Of most significance is the imposition of this opt-in requirement for the sharing of non-sensitive data.

On net, opt-in regimes may tend to favor the status quo, and to maintain or grow the position of a few dominant firms. Opt-in imposes additional costs on consumers and hurts competition — and it may not offer any additional protections over opt-out. In the absence of any meaningful evidence or rigorous economic analysis to the contrary, the Commission should eschew imposing such a potentially harmful regime on broadband and data markets.

Finally, we explain that, although the NPRM purports to embrace a regulatory regime consistent with the current “federal privacy regime,” and particularly the FTC’s approach to privacy regulation, it actually does no such thing — a sentiment echoed by a host of current and former FTC staff and commissioners, including the Bureau of Consumer Protection staff, Commissioner Maureen Ohlhausen, former Chairman Jon Leibowitz, former Commissioner Josh Wright, and former BCP Director Howard Beales.

Our full comments are available here.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess nine of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it has implemented it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

The lifecycle of a law is a curious one; born to fanfare, a great solution to a great problem, but ultimately doomed to age badly as lawyers seek to shoehorn wholly inappropriate technologies and circumstances into its ambit. The latest chapter in the book of badly aging laws comes to us courtesy of yet another dysfunctional feature of our political system: the Supreme Court nomination and confirmation process.

In 1987, President Reagan nominated Judge Bork for a spot on the US Supreme Court. During the confirmation process following his nomination, a reporter was able to obtain a list of videos he and his family had rented from local video rental stores (you remember those, right?). In response to this invasion of privacy — by a reporter whose intention was to publicize and thereby (in some fashion) embarrass or “expose” Judge Bork — Congress enacted the Video Privacy Protection Act (“VPPA”) in 1988.

In short, the VPPA makes it illegal for a “video tape service provider” to knowingly disclose to third parties any “personally identifiable information” in connection with the viewing habits of a “consumer” who uses its services. Left as written and confined to the scope originally intended for it, the Act seems more or less fine. However, over the last few years, plaintiffs have begun to use the Act as a weapon with which to attack common Internet business models in a manner wholly out of keeping with the drafters’ intent.

And with a decision that promises to be a windfall for hungry plaintiff’s attorneys everywhere, the First Circuit recently allowed a plaintiff, Alexander Yershov, to make it past a 12(b)(6) motion on a claim that Gannett violated the VPPA with its USA Today Android mobile app.

What’s in a name (or Android ID)?

The app in question allowed Mr. Yershov to view videos without creating an account, providing his personal details, or otherwise subscribing (in the generally accepted sense of the term) to USA Today’s content. What Gannett did do, however, was to provide to Adobe Systems the Android ID and GPS location data associated with Mr. Yershov’s use of the app’s video content.

In interpreting the VPPA in a post-Blockbuster world, the First Circuit panel (which, apropos of nothing, included retired Justice Souter) had to wrestle with whether Mr. Yershov counts as a “subscriber,” and to what extent an Android ID and location information count as “personally identifiable information” under the Act. Relying on the possibility that Adobe might be able to infer the identity of the plaintiff given its access to data from other web properties, and given the court’s rather gut-level instinct that an app user is a “subscriber,” the court allowed the plaintiff to survive the 12(b)(6) motion.

The PII point is the more arguable of the two, as the statutory language is somewhat vague. Under the Act, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” On this score the court decided that GPS data plus an Android ID (or each alone — it wasn’t completely clear) could constitute information protected under the Act (at least for purposes of a 12(b)(6) motion):

The statutory term “personally identifiable information” is awkward and unclear. The definition of that term… adds little clarity beyond training our focus on the question whether the information identifies the person who obtained the video…. Nevertheless, the language reasonably conveys the point that PII is not limited to information that explicitly names a person.

OK (maybe). But where the court goes off the rails is in its determination that an Android ID, GPS data, or a list of videos is, in itself, enough to identify anyone.

It might be reasonable to conclude that Adobe could use that information in combination with other information it collects from yet other third parties (fourth parties?) in order to build up a reliable, personally identifiable profile. But the statute’s language doesn’t hang on such a combination. Instead, the court’s reasoning finds potential liability by reading this exact sort of prohibition into the statute:

Adobe takes this and other information culled from a variety of sources to create user profiles comprised of a given user’s personal information, online behavioral data, and device identifiers… These digital dossiers provide Adobe and its clients with “an intimate look at the different types of materials consumed by the individual” … While there is certainly a point at which the linkage of information to identity becomes too uncertain, or too dependent on too much yet-to-be-done, or unforeseeable detective work, here the linkage, as plausibly alleged, is both firm and readily foreseeable to Gannett.

Despite its hedging about uncertain linkages, the court’s reasoning remains contingent on an awful lot of other moving parts — something found in neither the text of the law nor the legislative history of the Act.

The information sharing identified by the court is in no way the sort of simple disclosure of PII that easily identifies a particular person in the way that, say, Blockbuster Video would have been able to do in 1988 with disclosure of its viewing lists.  Yet the court purports to find a basis for its holding in the abstract nature of the language in the VPPA:

Had Congress intended such a narrow and simple construction [as specifying a precise definition for PII], it would have had no reason to fashion the more abstract formulation contained in the statute.

Again… maybe. Maybe Congress meant to future-proof the provision, and didn’t want the statute construed as being confined to the simple disclosure of name, address, phone number, and so forth. I doubt, though, that it really meant to encompass the sharing of any information that might, at some point, by some unknown third parties be assembled into a profile that, just maybe if you squint at it hard enough, will identify a particular person and their viewing habits.

Passive Subscriptions?

What seems pretty clear, however, is that the court got it wrong when it declared that Mr. Yershov was a “subscriber” to USA Today by virtue of simply downloading an app from the Play Store.

The VPPA prohibits disclosure of a “consumer’s” PII — with “consumer” meaning “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” In this case (as presumably will happen in most future VPPA cases involving free apps and websites), the plaintiff claims that he is a “subscriber” to a “video tape” service.

The court built its view of “subscriber” predominantly on two bases: (1) you don’t need to actually pay anything to count as a subscriber (with which I agree), and (2) something about installing an app that can send you push notifications is sufficiently different from frequenting a website that a user, no matter how casual, becomes a “subscriber”:

When opened for the first time, the App presents a screen that seeks the user’s permission for it to “push” or display notifications on the device. After choosing “Yes” or “No,” the user is directed to the App’s main user interface.

The court characterized this connection between USA Today and Yershov as “seamless” — ostensibly because the app facilitates push notifications to the end user.

Thus, simply because it offers an app that can send push notifications to users, and because this app sometimes shows videos, a website or Internet service — in this case, an app portal for a newspaper company — becomes a “video tape service,” offering content to “subscribers.” And by sharing information in a manner that is nowhere mentioned in the statute and that on its own is not capable of actually identifying anyone, the company suddenly becomes subject to what will undoubtedly be an avalanche of lawsuits (at least in the First Circuit).

Preposterous as this may seem on its face, it gets worse. Nothing in the court’s opinion is limited to “apps,” and the “logic” would seem to apply to the general web as well (whether the “seamless” experience is provided by push notifications or some other technology that facilitates tighter interaction with users). But, rest assured, the court believes that

[B]y installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.

Thank goodness it’s “materially” different… although just going by the reasoning in this opinion, I don’t see how that can possibly be true.

What happens when web browsers can enable push notifications between users and servers? Well, I guess we’ll find out soon because major browsers now support this feature. Further, other technologies — like websockets — allow for continuous two-way communication between users and corporate sites. Does this change the calculus? Does it meet the court’s “test”? If so, the court’s exceedingly vague reasoning provides little guidance (and a whole lot of red meat for lawsuits).
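
For the curious, here is roughly what browser push looks like: a minimal sketch using the standard service-worker Push API (the "/sw.js" worker path, the "/api/save-subscription" endpoint, and the VAPID key are placeholder assumptions, not any real site’s code). If this handful of lines is all it takes to establish the “seamless” channel the court found dispositive, the app/website distinction collapses quickly:

```ts
// Minimal sketch of the W3C Push API: an ordinary web page, not an app,
// arranging for server-initiated "push" messages. Assumes a service
// worker script at /sw.js and a server-supplied VAPID public key.

async function subscribeToPush(vapidPublicKey: string): Promise<void> {
  // Register the service worker that will receive pushes in the background.
  const registration = await navigator.serviceWorker.register("/sw.js");

  // Ask the user's permission and create a push subscription.
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true, // browsers require pushes to surface a notification
    applicationServerKey: vapidPublicKey,
  });

  // Hand the subscription endpoint to the site's server; from here on the
  // server can push messages to this browser at any time, app-style.
  await fetch("/api/save-subscription", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(subscription),
  });
}
```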

To bolster its view that apps are qualitatively different than web sites with regard to their delivery to consumers, the court asks “[w]hy, after all, did Gannett develop and seek to induce downloading of the App?” I don’t know, because… cell phones?

In fact, this bit of “reasoning” does nothing for the court’s opinion. Gannett undertook development of a web site in the first place because some cross-section of the public was interested in reading news online (and that was certainly the case for any electronic distribution pre-2007). Moreover, consumers have increasingly been moving toward using mobile devices for their online activities. Though it’s a debatable point, apps can often provide a better user experience than that provided by a mobile browser. Regardless, the line between “app” and “web site” is increasingly a blurry one, especially on mobile devices, and with the proliferation of HTML5 and frameworks like Google’s Progressive Web Apps, the line will only grow more indistinct. That Gannett was seeking to provide the public with an app has nothing to do with whether it intended to develop a more “intimate” relationship with mobile app users than it has with web users.

The Eleventh Circuit, at least, understands this. In Ellis v. Cartoon Network, it held that a mere user of an app — without more — could not count as a “subscriber” under the VPPA:

The dictionary definitions of the term “subscriber” we have quoted above have a common thread. And that common thread is that “subscription” involves some type of commitment, relationship, or association (financial or otherwise) between a person and an entity. As one district court succinctly put it: “Subscriptions involve some or [most] of the following [factors]: payment, registration, commitment, delivery, [expressed association,] and/or access to restricted content.”

The Eleventh Circuit’s point is crystal clear, and I’m not sure how the First Circuit failed to appreciate it (particularly since it was the district court below in the Yershov case that the Eleventh Circuit was citing). Instead, the court got tied up in asking whether or not a payment was required to constitute a “subscription.” But that’s wrong. What’s needed is some affirmative step – something more than just downloading an app, and certainly something more than merely accessing a web site.

Without that step — a “commitment, relationship, or association (financial or otherwise) between a person and an entity” — the development of technology that simply offers a different mode of interaction between users and content promises to transform the VPPA into a tremendously powerful weapon in the hands of eager attorneys, and a massive threat to the advertising-based business models that have enabled the growth of the web.

How could this possibly not apply to websites?

In fact, there is no way this opinion won’t be picked up by plaintiff’s attorneys in suits against web sites that allow ad networks to collect any information on their users. Web sites may not have access to exact GPS data (for now), but they do have access to fairly accurate location data, cookies, and a host of other data about their users. And with browser-based push notifications and other technologies being developed to create what the court calls a “seamless” experience for users, any user of a web site will count as a “subscriber” under the VPPA. The potential damage to the business models that have funded the growth of the Internet is hard to overstate.

There is hope, however.

Hulu faced a similar challenge over the last few years arising out of its collection of viewer data on its platform and the sharing of that data with third-party ad services in order to provide better targeted and, importantly, more user-relevant marketing. Last year it actually won a summary judgment motion on the basis that it had no way of knowing that Facebook (the third party with which it was sharing data) would reassemble the data in order to identify particular users and their viewing habits. Nevertheless, Hulu has previously lost motions on the subscriber and PII issues.

Hulu has, however, previously raised one issue in its filings on which the district court punted, but that could hold the key to putting these abusive litigations to bed.

The VPPA provides a very narrowly written exception to the prohibition on information sharing when such sharing is “incident to the ordinary course of business” of the “video tape service provider.” “Ordinary course of business” in this context means “debt collection activities, order fulfillment, request processing, and the transfer of ownership.” In one of its motions, Hulu argued that

the section shows that Congress took into account that providers use third parties in their business operations and “‘allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’”

The district court didn’t grant Hulu summary judgment on the issue, essentially passing on the question. But in 2014 the Seventh Circuit reviewed a very similar set of circumstances in Sterk v. Redbox and found that the exception applied. In that case Redbox had a business relationship with Stream, a third party that provided Redbox with automated customer service functions. The Seventh Circuit found that information sharing in such a relationship fell within Redbox’s “ordinary course of business,” and so Redbox was entitled to summary judgment on the VPPA claims against it.

This is essentially the same argument that Hulu was making. Third-party ad networks most certainly provide a service to corporations that serve content over the web. Hulu, Gannett and every other publisher on the web surely could provide their own ad platforms on their own properties. But by doing so they would lose the economic benefits that come from specialization and economies of scale. Thus, working with a third-party ad network pretty clearly replaces the “order fulfillment” and “request processing” functions of a content platform.

The Big Picture

And, stepping back for a moment, it’s important to take in the big picture. The point of the VPPA was to prevent public disclosures that would chill speech or embarrass individuals; the reporter in 1987 set out to expose or embarrass Judge Bork.  This is the situation the VPPA’s drafters had in mind when they wrote the Act. But the VPPA was most emphatically not designed to punish Internet business models — especially of a sort that was largely unknown in 1988 — that serve the interests of consumers.

The 1988 Senate report on the bill, for instance, notes that “[t]he bill permits the disclosure of personally identifiable information under appropriate and clearly defined circumstances. For example… companies may sell mailing lists that do not disclose the actual selections of their customers.”  Moreover, the “[Act] also allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’”

Congress plainly contemplated companies being able to monetize their data. And this just as plainly includes the common practice in automated tracking systems on the web today that use customers’ viewing habits to serve them with highly personalized web experiences.

Sites that serve targeted advertising aren’t in the business of embarrassing consumers or abusing their information by revealing it publicly. And, most important, nothing in the VPPA declares that information sharing is prohibited if third party partners could theoretically construct a profile of users. The technology to construct these profiles simply didn’t exist in 1988, and there is nothing in the Act or its legislative history to support the idea that the VPPA should be employed against the content platforms that outsource marketing to ad networks.

What would make sense is to actually try to fit modern practice in with the design and intent of the VPPA. If, for instance, third-party ad networks were using the profiles they created to extort, blackmail, embarrass, or otherwise coerce individuals, the practice would certainly fall outside the ordinary course of business, and should be actionable.

But as it stands, much like the TCPA, the VPPA threatens to become a costly technological anachronism. Future courts should take the lead of the Eleventh and Seventh Circuits, and make the law operate in the way it was actually intended. Gannett still has the opportunity to petition for an en banc rehearing, and after that for cert before the Supreme Court. But the circuit split this presents is the least of our worries. If this issue is not resolved in a way that permits platforms to continue to outsource their marketing efforts as they do today, the effects on innovation could be drastic.

Web platforms — which include much more than just online newspapers — depend upon targeted ads to support their efforts. This applies to mobile apps as well. The “freemium” model has eclipsed the premium model for apps — a fact that reflects the preferences of consumers and producers alike. Using the VPPA as a hammer to smash these business models will hurt everyone except, of course, plaintiff’s attorneys.

Earlier this month, Federal Communications Commission (FCC) Chairman Tom Wheeler released a “fact sheet” describing his proposal to have the FCC regulate the privacy policies of broadband Internet service providers (ISPs).  Chairman Wheeler’s detailed proposal will be embodied in a Notice of Proposed Rulemaking (NPRM) that the FCC may take up as early as March 31.  The FCC instead should shelve this problematic initiative and leave broadband privacy regulation (to the extent it is warranted) to the Federal Trade Commission (FTC).

In a March 23 speech before the Free State Foundation, FTC Commissioner Maureen Ohlhausen ably summarized the negative economic implications of the NPRM, contrasting the FCC’s proposal with the FTC’s approach to privacy-related enforcement (citations omitted):

The FCC’s proposal differs significantly from the choice architecture the FTC has established under its deception authority.  Our [FTC] deception authority enforces the promises companies make to consumers.  But companies are not required under our deception authority to make such privacy promises.  This is as it should be.  As I’ve already described, unfairness authority sets a baseline by prohibiting practices the vast majority of consumers would not embrace. Mandating practices above this baseline reduces consumer welfare because it denies some consumers options that best match their preferences.  Consumer demand and competitive forces spur companies to make privacy promises.  In fact, nearly all legitimate companies currently make detailed promises about their privacy practices.  This demonstrates a market demand for, and supply of, transparency about company uses of data.  Indeed, recent research . . . shows that broadband ISPs in particular already make strong privacy promises to consumers.

In contrast to the choice framework of the FTC, the FCC’s proposal, according to the recent [Wheeler] fact sheet, seeks to mandate that broadband ISPs adopt a specific opt in / opt-out regime.  The fact sheet repeatedly insists that this is about consumer choice. But, in fact, opt in mandates unavoidably reduce consumer choice. First, one subtle way in which a privacy baseline might be set too high is if the default opt in condition does not match the average consumer preference.  If the FCC mandates opt in for a specific data collection, but a majority of consumers already prefer to share that information, the mandate unnecessarily raises costs to companies and consumers.  Second, opt in mandates prevent unanticipated beneficial uses of data.  An effective and transparent opt-in regime requires that companies know at the time of collection how they will use the collected information. Yet data, including non-sensitive data, often yields significant consumer benefits from uses that could not be known at the time of collection.  Ignoring this, the fact sheet proposes to ban all but a very few uses unless consumers opt in.  This proposed opt in regime would prohibit unforeseeable future uses of collected data, regardless of what consumers would prefer.  This approach is stricter and more limiting than the requirements that other internet companies face. Now, I agree such mandates may be appropriate for certain types of sensitive data such as credit card numbers or SSNs, but they likely will reduce consumer options if applied to non-sensitive data.

If the FCC wished to be consistent with the FTC’s approach of using prohibitions only for widely held consumer preferences, it would take a different approach and simply require opt in for specific, sensitive uses. . . . 

[Furthermore,] [h]ere, the FCC proposes, for the first time ever, to apply a statute created for telephone lines to broadband ISPs. That raises some significant statutory authority issues that the FCC may ultimately need to look to Congress to clarify. . . .

[In addition,] the current FCC proposal appears to reflect the preferences of privacy lobbyists who are frustrated with the lax privacy preferences of average American consumers.  Furthermore, the proposal doesn’t appear to have the support of the minority FCC Commissioners or Congress. 

[Also,] the FCC proposal applies to just one segment of the internet ecosystem, broadband ISPs, even though there is good evidence that ISPs are not uniquely privy to your data. . . .

[In conclusion,] [a]t its core, protecting consumer privacy ought to be about effectuating consumers’ preferences.  If privacy rules impose the preferences of the few on the many, consumers will not be better off.  Therefore, prescriptive baseline privacy mandates like the FCC’s proposal should be reserved for practices that consumers overwhelmingly disfavor.  Otherwise, consumers should remain free to exercise their privacy preferences in the marketplace, and companies should be held to the promises they make.  This approach, which is a time-tested, emergent result of the FTC’s case-by-case application of its statutory authority, offers a good template for the FCC.

Commissioner Ohlhausen’s presentation comports with my May 2015 Heritage Foundation Legal Memorandum, which explained that the FTC’s highly structured, analytic, fact-based approach, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

Regrettably, there is little reason to believe that the FCC, acting on its own, will heed Commissioner Ohlhausen’s call to focus on consumer preferences in evaluating broadband ISP privacy practices.  What’s worse, the FTC’s ability to act at all in this area is in doubt.  The FCC’s current regulation requiring broadband ISP “net neutrality,” and its proposed regulation of ISP privacy practices, are premised on the dubious reclassification of broadband as a “common carrier” service – and the FTC has no authority over common carriers.  If the D.C. Circuit fails to overturn the FCC’s broadband rule, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.

The FCC doesn’t have authority over the edge and doesn’t want authority over the edge. Well, that is until it finds itself with no choice but to regulate the edge as a result of its own policies. As the FCC begins to explore its new authority to regulate privacy under the Open Internet Order (“OIO”), for instance, it will run up against policy conflicts and inconsistencies that will make it increasingly hard to justify forbearance from regulating edge providers.

Take for example the recently announced NPRM titled “Expanding Consumers’ Video Navigation Choices” — a proposal that seeks to force cable companies to provide video programming to third-party set-top box manufacturers. Under the proposed rules, MVPDs would be required to expose three data streams to competitors: (1) listing information about what is available to particular customers; (2) the rights associated with accessing such content; and (3) the actual video content. As Geoff Manne has aptly noted, this seems to be much more of an effort to eliminate the “nightmare” of “too many remote controls” than it is to actually expand consumer choice in a market that is essentially drowning in consumer choice. But of course even so innocuous a goal — which is probably more about picking on cable companies because… “eww cable companies” — suggests some very important questions.
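
To make the proposal concrete, the three mandated flows might be modeled roughly as follows (an illustrative sketch only; the interfaces and field names are my own assumptions, not drawn from the NPRM itself):

```ts
// Hypothetical shapes for the three data streams the NPRM would require
// MVPDs to expose to competing device makers. Names are invented for
// illustration; the NPRM speaks only of the three broad categories.

interface ServiceDiscovery {
  // (1) what programming is available to a particular customer
  channelLineup: Array<{ channelNumber: number; networkName: string }>;
}

interface EntitlementData {
  // (2) the rights associated with accessing that content
  recordingAllowed: boolean;
  concurrentStreamLimit: number;
  viewingWindow?: { start: Date; end: Date };
}

interface ContentDelivery {
  // (3) the video content itself, however transported
  streamUri: string;
  drmSystem?: string;
}

// A competing set-top box would consume all three streams together.
interface NavigationDevice {
  discover(): Promise<ServiceDiscovery>;
  entitlements(channelNumber: number): Promise<EntitlementData>;
  play(channelNumber: number): Promise<ContentDelivery>;
}
```

Even this toy model makes the contractual problem visible: every field above corresponds to something a programmer, rights holder, or advertiser bargained for with the MVPD, now handed to a third party outside those contracts.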

First, the market for video on cable systems is governed by a highly interdependent web of contracts that assures a wide variety of parties that their bargained-for rights are respected. Among other things, channels negotiate for particular placements and channel numbers in a cable system’s lineup, IP rights holders bargain for content to be made available only at certain times and at certain locations, and advertisers pay for their ads to be inserted into channel streams and broadcasts.

Moreover, to a large extent, the content industry develops its content based on a stable regime of bargained-for contractual terms with cable distribution networks (among others). Disrupting the ability of cable companies to control access to their video streams will undoubtedly alter the underlying assumptions upon which IP companies rely when planning and investing in content development. And, of course, the physical networks and their related equipment have been engineered around the current cable-access regimes. Some non-trivial amount of re-engineering will have to take place to make the cable-networks compatible with a more “open” set-top box market.

The FCC nods to these concerns in its NPRM, when it notes that its “goal is to preserve the contractual arrangements between programmers and MVPDs, while creating additional opportunities for programmers[.]” But this aspiration is not clearly given effect in the NPRM, and, as noted, some contractual arrangements are simply inconsistent with the NPRM’s approach.

Second, the FCC proposes to bind third-party manufacturers to the public interest privacy commitments in §§ 629, 551 and 338(i) of the Communications Act (“Act”) through a self-certification process. MVPDs would be required to pass the three data streams to third-party providers only once such a certification is received. To the extent that these sections, enforced via self-certification, do not sufficiently curtail third parties’ undesirable behavior, the FCC appears to believe that “the strictest state regulatory regime[s]” and the “European Union privacy regulations” will serve as the necessary regulatory gap fillers.

This seems hard to believe, however, particularly given the recently announced privacy and cybersecurity NPRM, through which the FCC will adopt rules detailing the agency’s new authority (under the OIO) to regulate privacy at the ISP level. Largely, these rules will grow out of §§ 222 and 201 of the Act, which the FCC in TerraCom interpreted together to be a general grant of privacy and cybersecurity authority.

I’m apprehensive about the asserted scope of the FCC’s power over privacy — let alone cybersecurity — under §§ 222 and 201. In truth, the FCC makes an admirable showing in TerraCom of demonstrating its reasoning; it does a far better job than the FTC in similar enforcement actions. But there remains a problem. The FTC’s authority is fundamentally cabined by the limitations contained within the FTC Act (even if it frequently chooses to ignore them, they are there and are theoretically a protection against overreach).

But the FCC’s enforcement decisions are restrained (if at all) only by a vague “public interest” mandate and a claim that it will enforce these privacy principles on a case-by-case basis. Thus, the FCC’s proposed regime is inherently one based on vast agency discretion. As in many other contexts, enforcers with wide discretion and a tremendous power to penalize exert a chilling effect on innovation and openness, as well as a frightening power over a tremendous swath of the economy. For the FCC to claim anything like an unbounded UDAP authority for itself has got to be outside of the archaic grant of authority from § 201, and is certainly a long stretch for the language of § 706 (a provision of the Act which it used as one of the fundamental justifications for the OIO) — leading very possibly to a bout of Chevron problems under precedent such as King v. Burwell and UARG v. EPA.

And there is a real risk here of, if not hypocrisy, then… deep conflict in the way the FCC will strike out on the set-top box and privacy NPRMs. The Commission has already noted in its NPRM that it will not be able to bind third-party providers of set-top boxes under the same privacy requirements that apply to current MVPD providers. Self-certification will go a certain length, but even there agitation from privacy absolutists will possibly sway the FCC to consider more stringent requirements. For instance, §§ 551 and 338 of the Act — which the FCC focuses on in the set-top box NPRM — are really only about disclosing intended uses of consumer data. And disclosures can come in many forms, including burying them in long terms of service that customers frequently do not read. Such “weak” guarantees of consumer privacy will likely become a frequent source of complaint (and FCC filings) for privacy absolutists.  

Further, many of the new set-top box entrants are going to be current providers of OTT video or devices that redistribute OTT video. And many of these providers make a huge share of their revenue from data mining and selling access to customer data. Which means one of two things: either the FCC is going to allow us to live in a world of double standards, where these self-certifying entities are permitted significantly more leeway in their uses of consumer data than MVPDs, or the FCC is going to discover that it does in fact need to “do something.” If only there were a creative way to extend the new privacy authority under Title II to these providers of set-top boxes… Oh! There is: bring edge providers into the regulatory fold under the OIO.

It’s interesting that Wheeler’s announcement of the FCC’s privacy NPRM explicitly noted that the rules would not be extended to edge providers. That Wheeler felt the need to be explicit in this suggests that he believes that the FCC has the authority to extend the privacy regulations to edge providers, but that it will merely forbear (for now) from doing so.

If edge providers are swept into the scope of Title II they would be subject to the brand new privacy rules the FCC is proposing. Thus, despite itself (or perhaps not), the FCC may find itself in possession of a much larger authority over some edge providers than any of the pro-Title II folks would have dared admit was possible. And the hook (this time) could be the privacy concerns embedded in the FCC’s ill-advised attempt to “open” the set-top box market.

This is a complicated set of issues, and it’s contingent on a number of moving parts. This week, Chairman Wheeler will be facing an appropriations hearing where I hope he will be asked to unpack his thinking regarding the true extent to which the OIO may in fact be extended to the edge.


I have small children and, like any reasonably competent parent, I take an interest in monitoring their Internet usage. In particular, I am sensitive to what ad content they are being served and which sites they visit that might try to misuse their information. My son even uses Chromebooks at his elementary school, which underscores this concern for me, as I can’t always be present to watch what he does online. However, also like any other reasonably competent parent, I trust his school and his teacher to make good choices about what he is allowed to do online when I am not there to watch him. And so it is that I am both interested in and rather perplexed by what has EFF so worked up in its FTC complaint alleging privacy “violations” in the “Google for Education” program.

EFF alleges three “unfair or deceptive” acts that would subject Google to remedies under Section 5 of the FTCA: (1) students logged into “Google for Education” accounts have their non-educational behavior individually tracked (e.g., performing general web searches, browsing YouTube, etc.); (2) the Chromebooks distributed as part of the “Google for Education” program have the “Chrome Sync” feature turned on by default (ostensibly in a terribly diabolical effort to give students a seamless experience between using the Chromebooks at home and at school); and (3) the school administrators running particular instances of “Google for Education” have the ability to share student geolocation information with third-party websites. Each of these acts, claims EFF, violates the K-12 School Service Provider Pledge to Safeguard Student Privacy (“Pledge”) that was authored by the Future of Privacy Forum and the Software & Information Industry Association, and to which Google is a signatory. According to EFF, Google included references to its signature in its “Google for Education” marketing materials, thereby creating the expectation in parents that it would adhere to the principles, failed to do so, and thus should be punished.

The TL;DR version: EFF appears to be making some simple interpretational errors — it believes that the scope of the Pledge covers any student activity and data generated while a student is logged into a Google account. As the rest of this post will (hopefully) make clear, however, the Pledge, though ambiguous, is more reasonably read as limiting Google’s obligations to instances where a student is using Google for Education apps, and as not applying when the student is using non-Education apps — whether she is logged on using her Education account or not.

The key problem, as EFF sees it, is that Google “use[d] and share[d] … student personal information beyond what is needed for education.” So nice of them to settle complex business and educational decisions for the world! Who knew it was so easy to determine exactly what is needed for educational purposes!

Case in point: EFF feels that Google’s use of anonymous and aggregated student data in order to improve its education apps is not an educational purpose. Seriously? How can that not be useful for educational purposes — to improve its educational apps!?

And, according to EFF, the fact that Chrome Sync is ‘on’ by default in the Chromebooks only amplifies the harm caused by the non-Education data tracking because, when the students log in outside of school, their behavior can be correlated with their in-school behavior. Of course, this ignores the fact that the same limitations apply to the tracking — it happens only on non-Education apps. Thus, the Chrome Sync objection is somehow vaguely based on geography. The fact that Google can correlate an individual student’s viewing of a Neil deGrasse Tyson video in a computer lab at school with her later finishing that video at home is somehow really bad (or so EFF claims).

EFF also takes issue with the fact that school administrators are allowed to turn on a setting enabling third parties to access the geolocation data of Google education apps users.

The complaint is fairly sparse on this issue — and the claim is essentially limited to the assertion that “[s]haring a student’s physical location with third parties is unquestionably sharing personal information beyond what is needed for educational purposes[.]” While it’s possible that third parties could misuse student data, a presumption that it is per se outside of any educational use for third parties to have geolocation access at all strikes me as unreasonable.

Geolocation data, particularly on mobile devices, could allow for any number of positive and negative uses, and without more it’s hard to take EFF’s premature concern all that seriously. Did they conduct a study demonstrating that geolocation data can serve no educational purpose or that the feature is frequently abused? Sadly, it seems doubtful. Instead, they appear to be relying upon the rather loose definition of likely harm that we have seen in FTC actions in other contexts (more on this problem here).

Who decides what ambiguous terms mean?

The bigger issue, however, is the ambiguity latent in the Pledge and how that ambiguity is being exploited to criticize Google. The complaint barely conceals EFF’s eagerness, and gives one the distinct feeling that the Pledge and this complaint are part of a long game. Everyone knows that Google’s entire existence revolves around the clever and innovative employment of large data sets. When Google announced that it was interested in working with schools to provide technology to students, I can only imagine how the anti-big-data-for-any-commercial-purpose crowd sat up and took notice, just waiting to pounce as soon as an opportunity, no matter how tenuous, presented itself.

EFF notes that “[u]nlike Microsoft and numerous other developers of digital curriculum and classroom management software, Google did not initially sign onto the Student Privacy Pledge with the first round of signatories when it was announced in the fall of 2014.” Apparently, it is an indictment of Google that it hesitated to adopt an external statement of privacy principles that was authored by a group that had no involvement with Google’s internal operations or business realities. EFF goes on to note that it was only after “sustained criticism” that Google “reluctantly” signed the pledge. So the company is badgered into signing a pledge that it was reluctant to sign in the first place (almost certainly for exactly these sorts of reasons), and is now being skewered by the proponents of the pledge that it was reluctant to sign. Somehow I can’t help but get the sense that this FTC complaint was drafted even before Google signed the Pledge.

According to the Pledge, Google promised to:

  1. “Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
  2. “Not build a personal profile of a student other than for supporting authorized educational/school purposes or as authorized by the parent/student.”
  3. “Not knowingly retain student personal information beyond the time period required to support the authorized educational/school purposes, or as authorized by the parent/student.”

EFF interprets “educational purpose” as anything a student does while logged into her education account and, by extension, counts even non-educational activity as generating “student personal information.” I think that a fair reading of the Pledge undermines this position, however, and that the correct interpretation is that “educational purpose” and “student personal information” are more tightly coupled, such that Google’s ability to collect student data is circumscribed only when the student is actually using the Google for Education apps.

So what counts as “student personal information” in the pledge? “Student personal information” is “personally identifiable information as well as other information when it is both collected and maintained on an individual level and is linked to personally identifiable information.”  Although this is fairly broad, it is limited by the definition of “Educational/School purposes” which are “services or functions that customarily take place at the direction of the educational institution/agency or their teacher/employee, for which the institutions or agency would otherwise use its own employees, and that aid in the administration or improvement of educational and school activities.” (emphasis added).

This limitation in the Pledge essentially sinks EFF’s complaint. A major part of EFF’s gripe is that when students interact with non-Education services, Google tracks them. However, the Pledge limits the collection of information only in contexts where “the institutions or agency would otherwise use its own employees” — a definition that clearly does not extend to general Internet usage. It would reasonably cover activities like administering classes, tests, and lessons; it would not cover activity such as general searches, watching videos on YouTube, and the like. Key to EFF’s error is that the Pledge operates not on accounts but on activity — in particular, educational activity “for which the institutions or agency would otherwise use its own employees.”

To interpret Google’s activity in the way that EFF does is to treat the Pledge as a promise never to do anything, ever, with the data of a student logged into an education account, whether generated as part of Education apps or otherwise. That just can’t be right. Thinking through the implications of EFF’s complaint, the ultimate end has to be that Google needs to obtain a permission slip from parents before offering access to Google for Education accounts. Administrators and Google are just not allowed to provision any services otherwise.

And here is where the long game comes in. EFF and its peers induced Google to sign the Pledge all the while understanding that their interpretation would necessarily require a re-write of Google’s business model.  But not only is this sneaky, it’s also ridiculous. By way of analogy, this would be similar to allowing parents an individual say over what textbooks or other curricular materials their children are allowed to access. This would either allow for a total veto by a single parent, or else would require certain students to be frozen out of participating in homework and other activities being performed with a Google for Education app. That may work for Yale students hiding from microaggressions, but it makes no sense to read such a contentious and questionable educational model into Google’s widely-offered apps.

I think a more reasonable interpretation should prevail. The Pledge is meant to govern the use of student data while that student is acting as a student — which in the case of Google for Education apps means while using those apps. Plenty of other Google apps could be used for educational purposes, but Google is intentionally delineating a sensible dividing line in order to avoid exactly this sort of problem (as well as problems that could arise under other laws directed at student activity, most notably COPPA). It is entirely unreasonable to presume that Google, by virtue of its socially desirable behavior of enabling students to have ready access to technology, is thereby prevented from tracking individuals’ behavior on non-Education apps as it chooses to define them.

What is the Harm?

According to EFF, there are two primary problems with Google’s gathering and use of student data: gathering and using individual data in non-Education apps, and gathering and using anonymized and aggregated data in the Education apps. So to what evil end does Google use this non-Education data?

“Google not only collects and stores the vast array of student data described above, but uses it for its own purposes such as improving Google products and serving targeted advertising (within non-Education Google services)”

The horrors! Google wants to use student behavior to improve its services! And yes, I get it, everyone hates ads — I hate ads too — but at some point you need to learn to accept that the wealth of nominally free apps available to every user is underwritten by the ad-sphere. So if Google is using the non-Education behavior of students to gain valuable insights that it can monetize and thereby subsidize its services, so what? This is life in the twenty-first century, and until everyone collectively decides that we prefer to pay for services up front, we had better get used to being tracked and monetized by advertisers.

But as noted above, whether you think Google should or shouldn’t be gathering this data, it seems clear that the data generated from use of non-Education apps doesn’t fall under the Pledge’s purview. Thus, perhaps sensing the problems in its non-Education use argument, EFF also half-heartedly attempts to demonize certain data practices that Google employs in the Education context. In short, Google aggregates and anonymizes the usage data of the Google for Education apps, and, according to EFF, this is a violation of the Pledge:

“Aggregating and anonymizing students’ browsing history does not change the intensely private nature of the data … such that Google should be free to use it[.]”

Again, the “harm” is that Google actually wants to improve the Education apps: “Google has acknowledged that it collects, maintains, and uses student information via Chrome Sync (in aggregated and anonymized form) for the purpose of improving Google products.”

This of course doesn’t violate the Pledge. After all, signatories to the Pledge promise only that they will “[n]ot collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes.” It’s eminently reasonable to include the improvement of the provisioned services as part of an “authorized educational … purpose[.]” And by ensuring that the data is anonymized and aggregated, Google is clearly acknowledging that some limits are appropriate in the education context — that it doesn’t need to collect individual and identifiable personal information for education purposes — but that improving its education products the same way it improves all its products is an educational purpose.

How are the harms enhanced by Chrome Sync? Honestly, it’s not really clear from EFF’s complaint. I believe that the core of EFF’s gripe (at least here) has to do with how the two data-gathering activities may be correlated. Google has Chrome Sync enabled by default, so when students sign on at different locations, their Education apps usage is recorded and grouped (still anonymously) for service improvement alongside their non-Education use. And the presence of these two data sets, generated side by side, creates the potential to track students in their educational capacity by correlating that data with information generated in their non-educational capacity.
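To make that linkage concern concrete, here is a minimal sketch of how two pseudonymous data sets could, in principle, be joined on a shared identifier. Everything below — the field names, the identifiers, the records themselves — is a hypothetical illustration of the general technique, not a description of how Chrome Sync or Google’s data model actually works:

# Hypothetical illustration only: invented field names and records,
# not Google's actual data model or Chrome Sync's behavior.

education_usage = [
    {"pseudo_id": "u-7f3a", "app": "Docs", "minutes": 42},
    {"pseudo_id": "u-9c1d", "app": "Sheets", "minutes": 15},
]

general_browsing = [
    {"pseudo_id": "u-7f3a", "site": "youtube.com", "visits": 12},
    {"pseudo_id": "u-9c1d", "site": "news.example.com", "visits": 3},
]

# If both data sets carry the same pseudonymous key, a simple join
# correlates educational and non-educational activity, which is the
# potential EFF worries about even when no names are attached.
merged = {row["pseudo_id"]: dict(row) for row in education_usage}
for row in general_browsing:
    if row["pseudo_id"] in merged:
        merged[row["pseudo_id"]].update(site=row["site"], visits=row["visits"])

for record in merged.values():
    print(record)

Whether any such join actually occurs, of course, is precisely what the complaint does not establish.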

There may be flaws in the manner in which the data is anonymized. Obviously EFF thinks anonymized data won’t stay anonymized. That is a contentious view, to say the least, but regardless, it is in no way compelled by the Pledge. More to the point, merely having both data sets does not do anything that clearly violates the Pledge.

The End Game

So what do groups like EFF actually want? It’s important to consider the social welfare effects of this approach to privacy, and its context. First, the Pledge was overwhelmingly designed for and signed by pure education companies, and not large organizations like Google, Apple, or Microsoft — thus the Pledge itself is ill-suited to a multi-faceted business model. If we follow the logical conclusions of this complaint, a company like Google would face an undesirable choice: On the one hand, it can provide hardware to schools at zero cost or heavily subsidized prices, and also provide a suite of useful educational applications. However, as part of this socially desirable donation, it must also place a virtual invisibility shield around students once they’ve signed into their accounts. From that point on, regardless of what service they use — even non-educational ones — Google is prevented from using any data students generate. At this point, one has to question Google’s incentive to remove huge swaths of the population from its ability to gather data. If Google did nothing but provide the hardware, it could simply leave its free services online as-is, and let schools adopt or not adopt them as they wish (subject of course to extant legislation such as COPPA) — thereby allowing itself to possibly collect even more data on the same students.

On the other hand, if not Google, then surely many other companies would think twice before wading into this quagmire, or, when they do, they might offer severely limited services. For instance, one way of complying with EFF’s view of how the Pledge works would be to shut off access to all non-Education services. So, students logged into an education account could only access the word processing and email services, but would be prevented from accessing YouTube, web search and other services — and consequently suffer from a limitation of potentially novel educational options.

EFF goes on to cite numerous FTC enforcement actions and settlements from recent years. But all of the cited examples have one thing in common that the current complaint does not: they all involve violations of § 5 based on explicit statements or representations made by a company to consumers. EFF’s complaint, on the other hand, is based on a particular interpretation of an ambiguous, generally drafted document written without reference to the complicated business practice at issue. What counts as “student information” when a user employs a general-purpose machine for both educational and non-educational purposes? The Pledge — at least the sections that EFF relies upon in its complaint — is far from clear and doesn’t cover Google’s behavior in an obvious manner.

Of course, the whole complaint presumes that the nature of Google’s services was somehow unfair or deceptive to parents — thus implying that there was at least some material reliance on the Pledge in parental decision making. However, this misses a crucial detail: it is the school administrators who contract with Google for the Chromebooks and Google for Education services, and not the parents or the students.  Then again, maybe EFF doesn’t care and it is, as I suggest above, just interested in a long game whereby it can shoehorn Google’s services into some new sort of privacy regime. This isn’t all that unusual, as we have seen even the White House in other contexts willing to rewrite business practices wholly apart from the realities of privacy “harms.”

But in the end, this approach to privacy is just a very efficient way to discover the lowest common denominator in charity. If it even decides to brave the possible privacy suits, Google and other similarly situated companies will provide the barest access to the most limited services in order to avoid extensive liability from ambiguous pledges. And, perhaps even worse for overall social welfare, using the law to force compliance with voluntarily enacted, ambiguous codes of conduct is a sure-fire way to make sure that there are fewer and more limited codes of conduct in the future.

by Berin Szoka, President, TechFreedom

Josh Wright will doubtless be remembered for transforming how the FTC polices competition. Between finally defining Unfair Methods of Competition (UMC) and penning twelve dissents and multiple speeches about competition matters, he re-grounded competition policy in the error-cost framework: weighing not only costs against benefits, but also the likelihood of getting it wrong against the likelihood of getting it right.

Yet Wright may be remembered as much for what he started as for what he finished: reforming the Commission’s Unfair and Deceptive Acts and Practices (UDAP) work. His consumer protection record is relatively slender: four dissents on high-tech matters plus four brief concurrences and one dissent on more traditional advertising substantiation cases. But together, these offer all the building blocks of an economic, error-cost-based approach to consumer protection. All that remains is for another FTC Commissioner to pick up where Wright left off.

Apple: Unfairness & Cost-Benefit Analysis

In January 2014, Wright issued a blistering, 17-page dissent from the Commission’s decision to bring, and settle, an enforcement action against Apple regarding the design of its app store. Wright dissented not from the conclusion necessarily, but from the methodology by which the Commission arrived there. In essence, he argued for an error-cost approach to unfairness:

The Commission, under the rubric of “unfair acts and practices,” substitutes its own judgment for a private firm’s decisions as to how to design its product to satisfy as many users as possible, and requires a company to revamp an otherwise indisputably legitimate business practice. Given the apparent benefits to some consumers and to competition from Apple’s allegedly unfair practices, I believe the Commission should have conducted a much more robust analysis to determine whether the injury to this small group of consumers justifies the finding of unfairness and the imposition of a remedy.

…. although Apple’s allegedly unfair act or practice has harmed some consumers, I do not believe the Commission has demonstrated the injury is substantial. More importantly, any injury to consumers flowing from Apple’s choice of disclosure and billing practices is outweighed considerably by the benefits to competition and to consumers that flow from the same practice.

The majority insisted that the burden on consumers or Apple from its remedy “is de minimis,” and therefore “it was unnecessary for the Commission to undertake a study of how consumers react to different disclosures before issuing its complaint against Apple, as Commissioner Wright suggests.”

Wright responded: “Apple has apparently determined that most consumers do not want to experience excessive disclosures or to be inconvenienced by having to enter their passwords every time they make a purchase.” In essence, he argued that the FTC should not presume to know better than Apple how to manage the subtle trade-offs between convenience and usability.

Wright was channeling Hayek’s famous quip: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” The last thing the FTC should be doing is designing digital products — even by hovering over Apple’s shoulder.

The Data Broker Report

Wright next took the Commission to task for the lack of economic analysis in its May 2014 report, “Data Brokers: A Call for Transparency and Accountability.” In just four footnotes, Wright extended his analysis of Apple. For example:

Footnote 85: Commissioner Wright agrees that Congress should consider legislation that would provide for consumer access to the information collected by data brokers. However, he does not believe that at this time there is enough evidence that the benefits to consumers of requiring data brokers to provide them with the ability to opt out of the sharing of all consumer information for marketing purposes outweighs the costs of imposing such a restriction. Finally… he believes that the Commission should engage in a rigorous study of consumer preferences sufficient to establish that consumers would likely benefit from such a portal prior to making such a recommendation.

Footnote 88: Commissioner Wright believes that in enacting statutes such as the Fair Credit Reporting Act, Congress undertook efforts to balance [costs and benefits]. In the instant case, Commissioner Wright is wary of extending FCRA-like coverage to other uses and categories of information without first performing a more robust balancing of the benefits and costs associated with imposing these requirements

The Internet of Things Report

This January, in a 4-page dissent from the FTC’s staff report on “The Internet of Things: Privacy and Security in a Connected World,” Wright lamented that the report neither represented serious economic analysis of the issues discussed nor synthesized the FTC’s workshop on the topic:

A record that consists of a one-day workshop, its accompanying public comments, and the staff’s impressions of those proceedings, however well-intended, is neither likely to result in a representative sample of viewpoints nor to generate information sufficient to support legislative or policy recommendations.

His attack on the report’s methodology was blistering:

The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs. Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace…. I support the well-established Commission view that companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of “security by design” and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.

Ouch.

Nomi: Deception & Materiality Analysis

In April, Wright turned his analytical artillery from unfairness to deception, long the less controversial half of UDAP. In a five-page dissent, Wright accused the Commission of essentially dispensing with the core limiting principle of the 1983 Deception Policy Statement: materiality. As Wright explained:

The materiality inquiry is critical because the Commission’s construct of “deception” uses materiality as an evidentiary proxy for consumer injury…. Deception causes consumer harm because it influences consumer behavior — that is, the deceptive statement is one that is not merely misleading in the abstract but one that causes consumers to make choices to their detriment that they would not have otherwise made. This essential link between materiality and consumer injury ensures the Commission’s deception authority is employed to deter only conduct that is likely to harm consumers and does not chill business conduct that makes consumers better off.

As in Apple, Wright did not argue that there might not be a role for the FTC; merely that the FTC had failed to justify bringing, let alone settling, an enforcement action without establishing that the key promise at issue — to provide in-store opt-out — was material.

The Chamber Speech: A Call for Economic Analysis

In May, Wright gave a speech to the Chamber of Commerce on “How to Regulate the Internet of Things Without Harming its Future: Some Do’s and Don’ts”:

Perhaps it is because I am an economist who likes to deal with hard data, but when it comes to data and privacy regulation, the tendency to rely upon anecdote to motivate policy is a serious problem. Instead of developing a proper factual record that documents cognizable and actual harms, regulators can sometimes be tempted merely to explore anecdotal and other hypothetical examples and end up just offering speculations about the possibility of harm.

And on privacy in particular:

What I have seen instead is what appears to be a generalized apprehension about the collection and use of data — whether or not the data is actually personally identifiable or sensitive — along with a corresponding, and arguably crippling, fear about the possible misuse of such data.  …. Any sensible approach to regulating the collection and use of data will take into account the risk of abuses that will harm consumers. But those risks must be weighed with as much precision as possible, as is the case with potential consumer benefits, in order to guide sensible policy for data collection and use. The appropriate calibration, of course, turns on our best estimates of how policy changes will actually impact consumers on the margin….

Wright concedes that the “vast majority of work that the Consumer Protection Bureau performs simply does not require significant economic analysis because they involve business practices that create substantial risk of consumer harm but little or nothing in the way of consumer benefits.” Yet he notes that the Internet has made the need for cost-benefit analysis far more acute, at least where conduct is ambiguous as to its effects on consumers, as in Apple, to avoid “squelching innovation and depriving consumers of these benefits.”

The Wrightian Reform Agenda for UDAP Enforcement

Wright left all the building blocks his successor will need to bring “Wrightian” reform to how the Bureau of Consumer Protection works:

  1. Wright’s successor should work to require economic analysis for consent decrees, as Wright proposed in his last major address as a Commissioner. BE might not need to issue a statement at all in run-of-the-mill deception cases, but it should certainly have to say something about unfairness cases.
  2. The FTC needs to systematically assess its enforcement process to understand the incentives causing companies to settle UDAP cases nearly every time — resulting in what Chairman Ramirez and Commissioner Brill frequently call the FTC’s “common law of consent decrees.”
  3. As Wright says in his Nomi dissent, “While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint.” This point should be uncontroversial, yet the Commission has never addressed it. Wright’s successor (and the FTC) should, at a minimum, propose a standard for settling cases.
  4. Just as Josh succeeded in getting the FTC to issue a UMC policy statement, his successor should re-assess the FTC’s two UDAP policy statements. Wright’s successor needs to make the case for finally codifying the DPS — and ensuring that the FTC stops bypassing materiality, as in Nomi.
  5. The Commission should develop a rigorous methodology for each of the required elements of unfairness and deception to justify bringing cases (or making report recommendations). This will be a great deal harder than merely attacking the lack of such methodology in dissents.
  6. The FTC has, in recent years, increasingly used reports to make de facto policy — by inventing what Wright calls, in his Chamber speech, “slogans and catchphrases” like “privacy by design,” and then using them as boilerplate requirements for consent decrees; by pressuring companies into adopting the FTC’s best practices; by calling for legislation; and so on. At a minimum, these reports must be grounded in careful economic analysis.
  7. The Commission should apply far greater rigor in setting standards for substantiating claims about health benefits. In two dissents, GeneLink et al. and HCG Platinum, Wright demolished arguments for a clear, bright line requiring two randomized clinical trials, and made the case for “a more flexible substantiation requirement” instead.

Conclusion: Big Shoes to Fill

It’s a testament to Wright’s analytical clarity that he managed to say so much about consumer protection in so few words. That his UDAP work has received so little attention, relative to his competition work, says just as much about the far greater need for someone to do for consumer protection what Wright did for competition enforcement and policy at the FTC.

Wright’s successor, if she’s going to finish what Wright started, will need something approaching Wright’s sheer intellect, his deep internalization of the error-costs approach, and his knack for brokering bipartisan compromise around major issues — plus the kind of passion for UDAP matters Wright had for competition matters. And, of course, that person needs to be able to continue his legacy on competition matters…

Compared to the difficulty of finding that person, actually implementing these reforms may be the easy part.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are The Good, The Bad, and The Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and the Commission this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers, who will either have to find a different way to pay for the popular online services they want or go without them. Already, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only against economic efficiency, but also against the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also burden small companies that have no ability to hurt consumers by restricting data portability. Aside from this, multi-homing is ubiquitous in the Internet economy anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually ratify a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies — advantages at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright in explaining why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere will inevitably affect these important US companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best 😉.

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The questions of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.
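One way to see the difficulty formally (the notation below is my own illustration, not drawn from the paper): write the quality-adjusted price as the nominal price net of a consumer’s dollar valuation of the quality dimensions,

\[
  \tilde{p} = p - v(q_1, q_2),
\]

so that the change induced by some practice or transaction is approximately

\[
  \Delta\tilde{p} \approx \Delta p
    - \frac{\partial v}{\partial q_1}\,\Delta q_1
    - \frac{\partial v}{\partial q_2}\,\Delta q_2 .
\]

If the nominal price and one quality dimension rise together, the sign of the quality-adjusted change is ambiguous; and because different consumers have different valuation functions v (some weight function, others aesthetics), there may be no single quality-adjusted price change that describes the market as a whole.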

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could benefit those who receive lower prices under the scheme than they would have faced in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such metrics would serve only those who can pay more, charging them lower prices while charging higher prices to those who can least afford it. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to say that the practice reduces consumer welfare, even if consumer welfare can be divorced from total welfare. Again, the question becomes one of magnitudes, which privacy advocates have yet to consider in detail.
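To make the magnitudes point concrete, here is a toy numerical sketch. It is my own illustration of the logic, with invented numbers, not an empirical claim from the paper:

# Toy illustration with invented numbers: two consumer segments, one
# seller with constant marginal cost. Compare a uniform price with
# segment-specific ("personalized") prices.

segments = {
    "high_value": {"size": 100, "willingness_to_pay": 10.0},
    "low_value": {"size": 100, "willingness_to_pay": 4.0},
}
MARGINAL_COST = 2.0

def outcomes(prices):
    """Return (units sold, profit, consumer surplus) for per-segment prices."""
    sold = profit = surplus = 0.0
    for name, seg in segments.items():
        price = prices[name]
        if seg["willingness_to_pay"] >= price:  # the segment buys
            sold += seg["size"]
            profit += seg["size"] * (price - MARGINAL_COST)
            surplus += seg["size"] * (seg["willingness_to_pay"] - price)
    return sold, profit, surplus

# Under a uniform price, charging 10 beats charging 4 for the seller
# (profit 800 vs. 400), so the low-value segment is priced out entirely.
print(outcomes({"high_value": 10.0, "low_value": 10.0}))  # (100.0, 800.0, 0.0)

# With data-enabled segmentation, the seller can serve the low-value
# segment below the uniform price: output doubles, and those consumers
# keep surplus they previously had none of.
print(outcomes({"high_value": 10.0, "low_value": 3.5}))   # (200.0, 950.0, 50.0)

Whether real markets look more like this sketch or more like the advocates’ fears is exactly the empirical question of magnitudes that remains unanswered.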

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.