Archives For privacy

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out an adequate case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all-powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research demonstrating that ISPs, thanks to increasing encryption, do not have access to better-quality data — and probably have access to lower-quality data — than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

Last week the International Center for Law & Economics filed comments on the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As we note in our comments:

The Commission’s NPRM would shoehorn the business models of a subset of new economy firms into a regime modeled on thirty-year-old CPNI rules designed to address fundamentally different concerns about a fundamentally different market. The Commission’s hurried and poorly supported NPRM demonstrates little understanding of the data markets it proposes to regulate and the position of ISPs within that market. And, what’s more, the resulting proposed rules diverge from analogous rules the Commission purports to emulate. Without mounting a convincing case for treating ISPs differently than the other data firms with which they do or could compete, the rules contemplate disparate regulatory treatment that would likely harm competition and innovation without evident corresponding benefit to consumers.

In particular, we focus on the FCC’s failure to justify treating ISPs differently than other competitors, and its failure to justify more stringent treatment for ISPs in general:

In short, the Commission has not made a convincing case that discrimination between ISPs and edge providers makes sense for the industry or for consumer welfare. The overwhelming body of evidence upon which other regulators have relied in addressing privacy concerns urges against a hard opt-in approach. That same evidence and analysis supports a consistent regulatory approach for all competitors, and nowhere advocates for a differential approach for ISPs when they are participating in the broader informatics and advertising markets.

With respect to the proposed opt-in regime, the NPRM ignores the weight of economic evidence on opt-in rules and fails to justify the specific rules it prescribes. Of most significance is the imposition of this opt-in requirement for the sharing of non-sensitive data.

On net, opt-in regimes may tend to favor the status quo, and to maintain or grow the position of a few dominant firms. Opt-in imposes additional costs on consumers and hurts competition — and it may not offer any additional protections over opt-out. In the absence of any meaningful evidence or rigorous economic analysis to the contrary, the Commission should eschew imposing such a potentially harmful regime on broadband and data markets.

Finally, we explain that, although the NPRM purports to embrace a regulatory regime consistent with the current “federal privacy regime,” and particularly the FTC’s approach to privacy regulation, it actually does no such thing — a sentiment echoed by a host of current and former FTC staff and commissioners, including the Bureau of Consumer Protection staff, Commissioner Maureen Ohlhausen, former Chairman Jon Leibowitz, former Commissioner Josh Wright, and former BCP Director Howard Beales.

Our full comments are available here.

The lifecycle of a law is a curious one; born to fanfare, a great solution to a great problem, but ultimately doomed to age badly as lawyers seek to shoehorn wholly inappropriate technologies and circumstances into its ambit. The latest chapter in the book of badly aging laws comes to us courtesy of yet another dysfunctional feature of our political system: the Supreme Court nomination and confirmation process.

In 1987, President Reagan nominated Judge Bork for a spot on the US Supreme Court. During the confirmation process following his nomination, a reporter was able to obtain a list of videos he and his family had rented from local video rental stores (You remember those, right?). In response to this invasion of privacy — by a reporter whose intention was to publicize and thereby (in some fashion) embarrass or “expose” Judge Bork — Congress enacted the Video Privacy Protection Act (“VPPA”).

In short, the VPPA makes it illegal for a “video tape service provider” to knowingly disclose to third parties any “personally identifiable information” in connection with the viewing habits of a “consumer” who uses its services. Left as written and confined to the scope originally intended for it, the Act seems more or less fine. However, over the last few years, plaintiffs have begun to use the Act as a weapon with which to attack common Internet business models in a manner wholly out of keeping with the drafters’ intent.

And with a decision that promises to be a windfall for hungry plaintiffs’ attorneys everywhere, the First Circuit recently allowed a plaintiff, Alexander Yershov, to make it past a 12(b)(6) motion on a claim that Gannett violated the VPPA with its USA Today Android mobile app.

What’s in a name (or Android ID)?

The app in question allowed Mr. Yershov to view videos without creating an account, providing his personal details, or otherwise subscribing (in the generally accepted sense of the term) to USA Today’s content. What Gannett did do, however, was to provide to Adobe Systems the Android ID and GPS location data associated with Mr. Yershov’s use of the app’s video content.

In interpreting the VPPA in a post-Blockbuster world, the First Circuit panel (which, apropos of nothing, included retired Justice Souter) had to wrestle with whether Mr. Yershov counts as a “subscriber,” and to what extent an Android ID and location information count as “personally identifying information” under the Act. Relying on the possibility that Adobe might be able to infer the identity of the plaintiff given its access to data from other web properties, and given the court’s rather gut-level instinct that an app user is a “subscriber,” the court allowed the plaintiff to survive the 12(b)(6) motion.

The PII point is the more arguable of the two, as the statutory language is somewhat vague. Under the Act, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” On this score the court decided that GPS data plus an Android ID (or each alone — it wasn’t completely clear) could constitute information protected under the Act (at least for purposes of a 12(b)(6) motion):

The statutory term “personally identifiable information” is awkward and unclear. The definition of that term… adds little clarity beyond training our focus on the question whether the information identifies the person who obtained the video…. Nevertheless, the language reasonably conveys the point that PII is not limited to information that explicitly names a person.

OK (maybe). But where the court goes off the rails is in its determination that an Android ID, GPS data, or a list of videos is, in itself, enough to identify anyone.

It might be reasonable to conclude that Adobe could use that information in combination with other information it collects from yet other third parties (fourth parties?) in order to build up a reliable, personally identifiable profile. But the statute’s language doesn’t hang on such a combination. Instead, the court’s reasoning finds potential liability by reading this exact sort of prohibition into the statute:

Adobe takes this and other information culled from a variety of sources to create user profiles comprised of a given user’s personal information, online behavioral data, and device identifiers… These digital dossiers provide Adobe and its clients with “an intimate look at the different types of materials consumed by the individual” … While there is certainly a point at which the linkage of information to identity becomes too uncertain, or too dependent on too much yet-to-be-done, or unforeseeable detective work, here the linkage, as plausibly alleged, is both firm and readily foreseeable to Gannett.

Despite its hedging about uncertain linkages, the court’s reasoning remains contingent on an awful lot of other moving parts — something found neither in the text of the law nor in the legislative history of the Act.

The information sharing identified by the court is in no way the sort of simple disclosure of PII that easily identifies a particular person in the way that, say, Blockbuster Video would have been able to do in 1988 with disclosure of its viewing lists.  Yet the court purports to find a basis for its holding in the abstract nature of the language in the VPPA:

Had Congress intended such a narrow and simple construction [as specifying a precise definition for PII], it would have had no reason to fashion the more abstract formulation contained in the statute.

Again… maybe. Maybe Congress meant to future-proof the provision, and didn’t want the statute construed as being confined to the simple disclosure of name, address, phone number, and so forth. I doubt, though, that it really meant to encompass the sharing of any information that might, at some point, by some unknown third parties be assembled into a profile that, just maybe if you squint at it hard enough, will identify a particular person and their viewing habits.

Passive Subscriptions?

What seems pretty clear, however, is that the court got it wrong when it declared that Mr. Yershov was a “subscriber” to USA Today by virtue of simply downloading an app from the Play Store.

The VPPA prohibits disclosure of a “consumer’s” PII — with “consumer” meaning “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” In this case (as presumably will happen in most future VPPA cases involving free apps and websites), the plaintiff claims that he is a “subscriber” to a “video tape” service.

The court built its view of “subscriber” predominantly on two bases: (1) you don’t need to actually pay anything to count as a subscriber (with which I agree), and (2) installing an app that can send you push notifications is different enough from frequenting a website that a user, no matter how casual, becomes a “subscriber”:

When opened for the first time, the App presents a screen that seeks the user’s permission for it to “push” or display notifications on the device. After choosing “Yes” or “No,” the user is directed to the App’s main user interface.

The court characterized this connection between USA Today and Yershov as “seamless” — ostensibly because the app facilitates push notifications to the end user.

Thus, simply because it offers an app that can send push notifications to users, and because this app sometimes shows videos, a website or Internet service — in this case, an app portal for a newspaper company — becomes a “video tape service provider,” offering content to “subscribers.” And by sharing information in a manner that is nowhere mentioned in the statute and that on its own is not capable of actually identifying anyone, the company suddenly becomes subject to what will undoubtedly be an avalanche of lawsuits (at least in the First Circuit).

Preposterous as this may seem on its face, it gets worse. Nothing in the court’s opinion is limited to “apps,” and the “logic” would seem to apply to the general web as well (whether the “seamless” experience is provided by push notifications or some other technology that facilitates tighter interaction with users). But, rest assured, the court believes that

[B]y installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.

Thank goodness it’s “materially” different… although just going by the reasoning in this opinion, I don’t see how that can possibly be true.

What happens when web browsers can enable push notifications between users and servers? Well, I guess we’ll find out soon because major browsers now support this feature. Further, other technologies — like websockets — allow for continuous two-way communication between users and corporate sites. Does this change the calculus? Does it meet the court’s “test”? If so, the court’s exceedingly vague reasoning provides little guidance (and a whole lot of red meat for lawsuits).
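To make the technical point concrete, here is a minimal, illustrative TypeScript sketch of the two “seamless” channels mentioned above that an ordinary web page can open without any installed app. The service-worker path, server key, and endpoint URL are hypothetical placeholders, not anything Gannett or any other publisher actually uses:

```typescript
// Illustrative sketch only: the paths, keys, and endpoints below are hypothetical.

// 1) Browser push notifications via the standard Push API: a plain web page
//    registers a service worker and subscribes the user to server-initiated pushes.
async function enablePushNotifications(): Promise<void> {
  const registration = await navigator.serviceWorker.register("/sw.js"); // hypothetical worker script
  if ((await Notification.requestPermission()) !== "granted") return;

  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: "BASE64URL_VAPID_PUBLIC_KEY", // placeholder key
  });
  // The site would store this subscription server-side and push notifications
  // to the browser later, with no app installed at all.
  console.log("push subscription:", JSON.stringify(subscription));
}

// 2) WebSockets: continuous, two-way communication between a browser and a site.
function openSeamlessChannel(): WebSocket {
  const socket = new WebSocket("wss://news.example.com/updates"); // hypothetical endpoint
  socket.addEventListener("open", () => {
    // The page can report activity to the server at any time...
    socket.send(JSON.stringify({ event: "video_viewed", id: "example-video" }));
  });
  socket.addEventListener("message", (msg) => {
    // ...and the server can push new content or alerts back whenever it likes.
    console.log("server pushed:", msg.data);
  });
  return socket;
}
```

If “seamless access” is what converts a casual app user into a “subscriber,” it is hard to see why a site using either of these now-standard browser mechanisms would not be swept in as well.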

To bolster its view that apps are qualitatively different than web sites with regard to their delivery to consumers, the court asks “[w]hy, after all, did Gannett develop and seek to induce downloading of the App?” I don’t know, because… cell phones?

In fact, this bit of “reasoning” does nothing for the court’s opinion. Gannett undertook development of a web site in the first place because some cross-section of the public was interested in reading news online (and that was certainly the case for any electronic distribution pre-2007). Moreover, consumers have increasingly been moving toward using mobile devices for their online activities. Though it’s a debatable point, apps can often provide a better user experience than that provided by a mobile browser. Regardless, the line between “app” and “web site” is increasingly a blurry one, especially on mobile devices, and with the proliferation of HTML5 and frameworks like Google’s Progressive Web Apps, the line will only grow more indistinct. That Gannett was seeking to provide the public with an app has nothing to do with whether it intended to develop a more “intimate” relationship with mobile app users than it has with web users.

The Eleventh Circuit, at least, understands this. In Ellis v. Cartoon Network, it held that a mere user of an app — without more — could not count as a “subscriber” under the VPPA:

The dictionary definitions of the term “subscriber” we have quoted above have a common thread. And that common thread is that “subscription” involves some type of commitment, relationship, or association (financial or otherwise) between a person and an entity. As one district court succinctly put it: “Subscriptions involve some or [most] of the following [factors]: payment, registration, commitment, delivery, [expressed association,] and/or access to restricted content.”

The Eleventh Circuit’s point is crystal clear, and I’m not sure how the First Circuit failed to appreciate it (particularly since it was the district court below in the Yershov case that the Eleventh Circuit was citing). Instead, the court got tied up in asking whether or not a payment was required to constitute a “subscription.” But that’s wrong. What’s needed is some affirmative step – something more than just downloading an app, and certainly something more than merely accessing a web site.

Without that step — a “commitment, relationship, or association (financial or otherwise) between a person and an entity” — the development of technology that simply offers a different mode of interaction between users and content promises to transform the VPPA into a tremendously powerful weapon in the hands of eager attorneys, and a massive threat to the advertising-based business models that have enabled the growth of the web.

How could this possibly not apply to websites?

In fact, there is no way this opinion won’t be picked up by plaintiffs’ attorneys in suits against web sites that allow ad networks to collect any information on their users. Web sites may not have access to exact GPS data (for now), but they do have access to fairly accurate location data, cookies, and a host of other data about their users. And with browser-based push notifications and other technologies being developed to create what the court calls a “seamless” experience for users, any user of a web site will count as a “subscriber” under the VPPA. The potential damage to the business models that have funded the growth of the Internet is hard to overstate.

There is hope, however.

Hulu faced a similar challenge over the last few years arising out of its collection of viewer data on its platform and the sharing of that data with third-party ad services in order to provide better targeted and, importantly, more user-relevant marketing. Last year it actually won a summary judgment motion on the basis that it had no way of knowing that Facebook (the third party with which it was sharing data) would reassemble the data in order to identify particular users and their viewing habits. Nevertheless, Hulu has previously lost motions on the subscriber and PII issues.

Hulu has, however, previously raised one issue in its filings on which the district court punted, but that could hold the key to putting these abusive litigations to bed.

The VPPA provides a very narrowly written exception to the prohibition on information sharing when such sharing is “incident to the ordinary course of business” of the “video tape service provider.” “Ordinary course of business” in this context means  “debt collection activities, order fulfillment, request processing, and the transfer of ownership.” In one of its motions, Hulu argued that

the section shows that Congress took into account that providers use third parties in their business operations and “‘allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’

The district court didn’t grant Hulu summary judgment on the issue, essentially passing on the question. But in 2014 the Seventh Circuit reviewed a very similar set of circumstances in Sterk v. Redbox and found that the exception applied. In that case Redbox had a business relationship with Stream, a third party that provided Redbox with automated customer service functions. The Seventh Circuit found that information sharing in such a relationship fell within Redbox’s “ordinary course of business,” and so Redbox was entitled to summary judgment on the VPPA claims against it.

This is essentially the same argument that Hulu was making. Third-party ad networks most certainly provide a service to corporations that serve content over the web. Hulu, Gannett and every other publisher on the web surely could provide their own ad platforms on their own properties. But by doing so they would lose the economic benefits that come from specialization and economies of scale. Thus, working with a third-party ad network pretty clearly replaces the “order fulfillment” and “request processing” functions of a content platform.

The Big Picture

And, stepping back for a moment, it’s important to take in the big picture. The point of the VPPA was to prevent public disclosures that would chill speech or embarrass individuals; the reporter in 1987 set out to expose or embarrass Judge Bork.  This is the situation the VPPA’s drafters had in mind when they wrote the Act. But the VPPA was most emphatically not designed to punish Internet business models — especially of a sort that was largely unknown in 1988 — that serve the interests of consumers.

The 1988 Senate report on the bill, for instance, notes that “[t]he bill permits the disclosure of personally identifiable information under appropriate and clearly defined circumstances. For example… companies may sell mailing lists that do not disclose the actual selections of their customers.”  Moreover, the “[Act] also allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’”

Congress plainly contemplated companies being able to monetize their data. And this just as plainly includes the common practice in automated tracking systems on the web today that use customers’ viewing habits to serve them with highly personalized web experiences.

Sites that serve targeted advertising aren’t in the business of embarrassing consumers or abusing their information by revealing it publicly. And, most important, nothing in the VPPA declares that information sharing is prohibited if third party partners could theoretically construct a profile of users. The technology to construct these profiles simply didn’t exist in 1988, and there is nothing in the Act or its legislative history to support the idea that the VPPA should be employed against the content platforms that outsource marketing to ad networks.

What would make sense is to actually try to fit modern practice in with the design and intent of the VPPA. If, for instance, third-party ad networks were using the profiles they created to extort, blackmail, embarrass, or otherwise coerce individuals, that practice would certainly fall outside the ordinary course of business, and should be actionable.

But as it stands, much like the TCPA, the VPPA threatens to become a costly technological anachronism. Future courts should take the lead of the Eleventh and Seventh Circuits, and make the law operate in the way it was actually intended. Gannett still has the opportunity to petition for rehearing en banc, and after that to seek cert before the Supreme Court. But the circuit split this presents is the least of our worries. If this issue is not resolved in a way that permits platforms to continue to outsource their marketing efforts as they do today, the effects on innovation could be drastic.

Web platforms — a category that includes much more than just online newspapers — depend upon targeted ads to support their efforts. This applies to mobile apps as well. The “freemium” model has eclipsed the premium model for apps — a fact that reflects the preferences of consumers at large as well as producers. Using the VPPA as a hammer to smash these business models will hurt everyone except, of course, plaintiffs’ attorneys.

Earlier this month, Federal Communications Commission (FCC) Chairman Tom Wheeler released a “fact sheet” describing his proposal to have the FCC regulate the privacy policies of broadband Internet service providers (ISPs).  Chairman Wheeler’s detailed proposal will be embodied in a Notice of Proposed Rulemaking (NPRM) that the FCC may take up as early as March 31.  The FCC instead should shelve this problematic initiative and leave broadband privacy regulation (to the extent it is warranted) to the Federal Trade Commission (FTC).

In a March 23 speech before the Free State Foundation, FTC Commissioner Maureen Ohlhausen ably summarized the negative economic implications of the NPRM, contrasting the FCC’s proposal with the FTC’s approach to privacy-related enforcement (citations omitted):

The FCC’s proposal differs significantly from the choice architecture the FTC has established under its deception authority.  Our [FTC] deception authority enforces the promises companies make to consumers.  But companies are not required under our deception authority to make such privacy promises.  This is as it should be.  As I’ve already described, unfairness authority sets a baseline by prohibiting practices the vast majority of consumers would not embrace. Mandating practices above this baseline reduces consumer welfare because it denies some consumers options that best match their preferences.  Consumer demand and competitive forces spur companies to make privacy promises.  In fact, nearly all legitimate companies currently make detailed promises about their privacy practices.  This demonstrates a market demand for, and supply of, transparency about company uses of data.  Indeed, recent research . . . shows that broadband ISPs in particular already make strong privacy promises to consumers.

In contrast to the choice framework of the FTC, the FCC’s proposal, according to the recent [Wheeler] fact sheet, seeks to mandate that broadband ISPs adopt a specific opt in / opt-out regime.  The fact sheet repeatedly insists that this is about consumer choice. But, in fact, opt in mandates unavoidably reduce consumer choice. First, one subtle way in which a privacy baseline might be set too high is if the default opt in condition does not match the average consumer preference.  If the FCC mandates opt in for a specific data collection, but a majority of consumers already prefer to share that information, the mandate unnecessarily raises costs to companies and consumers.  Second, opt in mandates prevent unanticipated beneficial uses of data.  An effective and transparent opt-in regime requires that companies know at the time of collection how they will use the collected information. Yet data, including non-sensitive data, often yields significant consumer benefits from uses that could not be known at the time of collection.  Ignoring this, the fact sheet proposes to ban all but a very few uses unless consumers opt in.  This proposed opt in regime would prohibit unforeseeable future uses of collected data, regardless of what consumers would prefer.  This approach is stricter and more limiting than the requirements that other internet companies face. Now, I agree such mandates may be appropriate for certain types of sensitive data such as credit card numbers or SSNs, but they likely will reduce consumer options if applied to non-sensitive data.

If the FCC wished to be consistent with the FTC’s approach of using prohibitions only for widely held consumer preferences, it would take a different approach and simply require opt in for specific, sensitive uses. . . . 

[Furthermore,] [h]ere, the FCC proposes, for the first time ever, to apply a statute created for telephone lines to broadband ISPs. That raises some significant statutory authority issues that the FCC may ultimately need to look to Congress to clarify. . . .

[In addition,] the current FCC proposal appears to reflect the preferences of privacy lobbyists who are frustrated with the lax privacy preferences of average American consumers.  Furthermore, the proposal doesn’t appear to have the support of the minority FCC Commissioners or Congress. 

[Also,] the FCC proposal applies to just one segment of the internet ecosystem, broadband ISPs, even though there is good evidence that ISPs are not uniquely privy to your data. . . .

[In conclusion,] [a]t its core, protecting consumer privacy ought to be about effectuating consumers’ preferences.  If privacy rules impose the preferences of the few on the many, consumers will not be better off.  Therefore, prescriptive baseline privacy mandates like the FCC’s proposal should be reserved for practices that consumers overwhelmingly disfavor.  Otherwise, consumers should remain free to exercise their privacy preferences in the marketplace, and companies should be held to the promises they make.  This approach, which is a time-tested, emergent result of the FTC’s case-by-case application of its statutory authority, offers a good template for the FCC.

Commissioner Ohlhausen’s presentation comports with my May 2015 Heritage Foundation Legal Memorandum, which explained that the FTC’s highly structured, analytic, fact-based approach, combined with its vast experience in privacy and data security investigations, make it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

Regrettably, there is little reason to believe that the FCC, acting on its own, will heed Commissioner Ohlhausen’s call to focus on consumer preferences in evaluating broadband ISP privacy practices.  What’s worse, the FTC’s ability to act at all in this area is in doubt.  The FCC’s current regulation requiring broadband ISP “net neutrality,” and its proposed regulation of ISP privacy practices, are premised on the dubious reclassification of broadband as a “common carrier” service – and the FTC has no authority over common carriers.  If the D.C. Circuit fails to overturn the FCC’s broadband rule, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.

The FCC doesn’t have authority over the edge and doesn’t want authority over the edge. Well, that is until it finds itself with no choice but to regulate the edge as a result of its own policies. As the FCC begins to explore its new authority to regulate privacy under the Open Internet Order (“OIO”), for instance, it will run up against policy conflicts and inconsistencies that will make it increasingly hard to justify forbearance from regulating edge providers.

Take, for example, the recently announced NPRM titled “Expanding Consumers’ Video Navigation Choices” — a proposal that seeks to force cable companies to provide video programming to third-party set-top box manufacturers. Under the proposed rules, MVPDs would be required to expose three data streams to competitors: (1) listing information about what is available to particular customers; (2) the rights associated with accessing such content; and (3) the actual video content. As Geoff Manne has aptly noted, this seems to be much more of an effort to eliminate the “nightmare” of “too many remote controls” than it is to actually expand consumer choice in a market that is essentially drowning in consumer choice. But of course even so innocuous a goal — which is probably more about picking on cable companies because… “eww cable companies” — suggests some very important questions.

First, the market for video on cable systems is governed by a highly interdependent web of contracts that assures to a wide variety of parties that their bargained-for rights are respected. Among other things, channels negotiate for particular placements and channel numbers in a cable system’s lineup, IP rights holders bargain for content to be made available only at certain times and at certain locations, and advertisers pay for their ads to be inserted into channel streams and broadcasts.

Moreover, to a large extent, the content industry develops its content based on a stable regime of bargained-for contractual terms with cable distribution networks (among others). Disrupting the ability of cable companies to control access to their video streams will undoubtedly alter the underlying assumptions upon which IP companies rely when planning and investing in content development. And, of course, the physical networks and their related equipment have been engineered around the current cable-access regimes. Some non-trivial amount of re-engineering will have to take place to make the cable networks compatible with a more “open” set-top box market.

The FCC nods to these concerns in its NPRM, when it notes that its “goal is to preserve the contractual arrangements between programmers and MVPDs, while creating additional opportunities for programmers[.]” But this aspiration is not clearly given effect in the NPRM, and, as noted, some contractual arrangements are simply inconsistent with the NPRM’s approach.

Second, the FCC proposes to bind third-party manufacturers to the public interest privacy commitments in §§ 629, 551 and 338(i) of the Communications Act (“Act”) through a self-certification process. MVPDs would be required to pass the three data streams to third-party providers only once such a certification is received. To the extent that these sections, enforced via self-certification, do not sufficiently curtail third parties’ undesirable behavior, the FCC appears to believe that “the strictest state regulatory regime[s]” and the “European Union privacy regulations” will serve as the necessary regulatory gap fillers.

This seems hard to believe, however, particularly given the recently announced privacy and cybersecurity NPRM, through which the FCC will adopt rules detailing the agency’s new authority (under the OIO) to regulate privacy at the ISP level. Largely, these rules will grow out of §§ 222 and 201 of the Act, which the FCC in Terracom interpreted together to be a general grant of privacy and cybersecurity authority.

I’m apprehensive of the asserted scope of the FCC’s power over privacy — let alone cybersecurity — under §§ 222 and 201. In truth, the FCC makes an admirable showing in Terracom of demonstrating its reasoning; it does a far better job than the FTC in similar enforcement actions. But there remains a problem. The FTC’s authority is fundamentally cabined by the limitations contained within the FTC Act (even if it frequently chooses to ignore them, they are there and are theoretically a protection against overreach).

But the FCC’s enforcement decisions are restrained (if at all) by a vague “public interest” mandate, and a claim that it will enforce these privacy principles on a case-by-case basis. Thus, the FCC’s proposed regime is inherently one based on vast agency discretion. As in many other contexts, enforcers with wide discretion and a tremendous power to penalize exert a chilling effect on innovation and openness, as well as a frightening power over a tremendous swath of the economy. For the FCC to claim anything like an unbounded UDAP authority for itself has got to be outside of the archaic grant of authority from § 201, and is certainly a long stretch for the language of § 706 (a provision of the Act which it used as one of the fundamental justifications for the OIO) — leading very possibly to a bout of Chevron problems under precedent such as King v. Burwell and UARG v. EPA.

And there is a real risk here of, if not hypocrisy, then… deep conflict in the way the FCC will strike out on the set-top box and privacy NPRMs. The Commission has already noted in its NPRM that it will not be able to bind third-party providers of set-top boxes under the same privacy requirements that apply to current MVPD providers. Self-certification will go a certain length, but even there agitation from privacy absolutists will possibly sway the FCC to consider more stringent requirements. For instance, §§ 551 and 338 of the Act — which the FCC focuses on in the set-top box NPRM — are really only about disclosing intended uses of consumer data. And disclosures can come in many forms, including burying them in long terms of service that customers frequently do not read. Such “weak” guarantees of consumer privacy will likely become a frequent source of complaint (and FCC filings) for privacy absolutists.  

Further, many of the new set-top box entrants are going to be current providers of OTT video or devices that redistribute OTT video. And many of these providers make a huge share of their revenue from data mining and selling access to customer data. Which means one of two things: Either the FCC is going to just allow us to live in a world of double standards where these self-certifying entities are permitted significantly more leeway in their uses of consumer data than MVPDs, or, alternatively, the FCC is going to discover that it does in fact need to “do something.” If only there were a creative way to extend the new privacy authority under Title II to these providers of set-top boxes… Oh! There is: bring edge providers into the regulatory fold under the OIO.

It’s interesting that Wheeler’s announcement of the FCC’s privacy NPRM explicitly noted that the rules would not be extended to edge providers. That Wheeler felt the need to be explicit in this suggests that he believes that the FCC has the authority to extend the privacy regulations to edge providers, but that it will merely forbear (for now) from doing so.

If edge providers are swept into the scope of Title II they would be subject to the brand new privacy rules the FCC is proposing. Thus, despite itself (or perhaps not), the FCC may find itself in possession of a much larger authority over some edge providers than any of the pro-Title II folks would have dared admit was possible. And the hook (this time) could be the privacy concerns embedded in the FCC’s ill-advised attempt to “open” the set-top box market.

This is a complicated set of issues, and it’s contingent on a number of moving parts. This week, Chairman Wheeler will be facing an appropriations hearing where I hope he will be asked to unpack his thinking regarding the true extent to which the OIO may in fact be extended to the edge.


I have small children and, like any reasonably competent parent, I take an interest in monitoring their Internet usage. In particular, I am sensitive to what ad content they are being served and which sites they visit that might try to misuse their information. My son even uses Chromebooks at his elementary school, which underscores this concern for me, as I can’t always be present to watch what he does online. However, also like any other reasonably competent parent, I trust his school and his teacher to make good choices about what he is allowed to do online when I am not there to watch him. And so it is that I am both interested in and rather perplexed by what has EFF so worked up in its FTC complaint alleging privacy “violations” in the “Google for Education” program.

EFF alleges three “unfair or deceptive” acts that would subject Google to remedies under Section 5 of the FTCA: (1) Students logged into “Google for Education” accounts have their non-educational behavior individually tracked (e.g. performing general web searches, browsing YouTube, etc.); (2) the Chromebooks distributed as part of the “Google for Education” program have the “Chrome Sync” feature turned on by default (ostensibly in a terribly diabolical effort to give students a seamless experience between using the Chromebooks at home and at school); and (3) the school administrators running particular instances of “Google for Education” have the ability to share student geolocation information with third-party websites. Each of these violations, claims EFF, violates the K-12 School Service Provider Pledge to Safeguard Student Privacy (“Pledge”) that was authored by the Future of Privacy Forum and Software & Information Industry Association, and to which Google is a signatory. According to EFF, Google included references to its signature in its “Google for Education” marketing materials, thereby creating the expectation in parents that it would adhere to the principles, failed to do so, and thus should be punished.

The TL;DR version: EFF appears to be making some simple interpretational errors — it believes that the scope of the Pledge covers any student activity and data generated while a student is logged into a Google account. As the rest of this post will (hopefully) make clear, however, the Pledge, though ambiguous, is more reasonably read as limiting Google’s obligations to instances where a student is using  Google for Education apps, and does not apply to instances where the student is using non-Education apps — whether she is logged on using her Education account or not.

The key problem, as EFF sees it, is that Google “use[d] and share[d] … student personal information beyond what is needed for education.” So nice of them to settle complex business and educational decisions for the world! Who knew it was so easy to determine exactly what is needed for educational purposes!

Case in point: EFF feels that Google’s use of anonymous and aggregated student data in order to improve its education apps is not an educational purpose. Seriously? How can that not be useful for educational purposes — to improve its educational apps!?

And, according to EFF, the fact that Chrome Sync is ‘on’ by default in the Chromebooks only amplifies the harm caused by the non-Education data tracking because, when the students log in outside of school, their behavior can be correlated with their in-school behavior. Of course, this ignores the fact that the same limitations apply to the tracking — it happens only on non-Education apps. Thus, the Chrome Sync objection is somehow vaguely based on geography. The fact that Google can correlate an individual student’s viewing of a Neil deGrasse Tyson video in a computer lab at school with her later finishing that video at home is somehow really bad (or so EFF claims).

EFF also takes issue with the fact that school administrators are allowed to turn on a setting enabling third parties to access the geolocation data of Google education apps users.

The complaint is fairly sparse on this issue — and the claim is essentially limited to the assertion that “[s]haring a student’s physical location with third parties is unquestionably sharing personal information beyond what is needed for educational purposes[.]”  While it’s possible that third parties could misuse student data, a presumption that it is per se outside of any educational use for third parties to have geolocation access at all strikes me as unreasonable.

Geolocation data, particularly on mobile devices, could allow for any number of positive and negative uses, and without more it’s hard to really take EFF’s premature concern all that seriously. Did they conduct a study demonstrating that geolocation data can serve no educational purpose or that the feature is frequently abused? Sadly, it seems doubtful. Instead, they appear to be relying upon the rather loose definition of likely harm that we have seen in FTC actions in other contexts (more on this problem here).

Who decides what ambiguous terms mean?

The bigger issue, however, is the ambiguity latent in the Pledge and how that ambiguity is being exploited to criticize Google. The complaint barely conceals EFF’s eagerness, and gives one the distinct feeling that the Pledge and this complaint are part of a long game. Everyone knows that Google’s entire existence revolves around the clever and innovative employment of large data sets. When Google announced that it was interested in working with schools to provide technology to students, I can only imagine how the anti-big-data-for-any-commercial-purpose crowd sat up and took notice, just waiting to pounce as soon as an opportunity, no matter how tenuous, presented itself.

EFF notes that “[u]nlike Microsoft and numerous other developers of digital curriculum and classroom management software, Google did not initially sign onto the Student Privacy Pledge with the first round of signatories when it was announced in the fall of 2014.” Apparently, it is an indictment of Google that it hesitated to adopt an external statement of privacy principles that was authored by a group that had no involvement with Google’s internal operations or business realities. EFF goes on to note that it was only after “sustained criticism” that Google “reluctantly” signed the pledge. So the company is badgered into signing a pledge that it was reluctant to sign in the first place (almost certainly for exactly these sorts of reasons), and is now being skewered by the proponents of the pledge that it was reluctant to sign. Somehow I can’t help but get the sense that this FTC complaint was drafted even before Google signed the Pledge.

According to the Pledge, Google promised to:

  1. “Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
  2. “Not build a personal profile of a student other than for supporting authorized educational/school purposes or as authorized by the parent/student.”
  3. “Not knowingly retain student personal information beyond the time period required to support the authorized educational/school purposes, or as authorized by the parent/student.”

EFF interprets “educational purpose” as anything a student does while logged into her education account and, by extension, treats even non-educational activity as generating “student personal information.” I think that a fair reading of the Pledge undermines this position, however, and that the correct interpretation of the Pledge is that “educational purpose” and “student personal information” are more tightly coupled, such that Google’s ability to collect student data is only circumscribed when the student is actually using the Google for Education apps.

So what counts as “student personal information” in the pledge? “Student personal information” is “personally identifiable information as well as other information when it is both collected and maintained on an individual level and is linked to personally identifiable information.”  Although this is fairly broad, it is limited by the definition of “Educational/School purposes” which are “services or functions that customarily take place at the direction of the educational institution/agency or their teacher/employee, for which the institutions or agency would otherwise use its own employees, and that aid in the administration or improvement of educational and school activities.” (emphasis added).

This limitation in the Pledge essentially sinks EFF’s complaint. A major part of EFF’s gripe is that when the students interact with non-Education services, Google tracks them. However, the Pledge limits the collection of information only in contexts where “the institutions or agency would otherwise use its own employees” — a definition that clearly does not extend to general Internet usage. This definition would reasonably cover activities like administering classes, tests, and lessons. This definition would not cover activity such as general searches, watching videos on YouTube and the like. Key to EFF’s error is that the Pledge operates not on accounts but on activity — in particular, educational activity “for which the institutions or agency would otherwise use its own employees.”

To interpret Google’s activity in the way that EFF does is to treat the Pledge as a promise never to do anything, ever, with the data of a student logged into an education account, whether generated as part of Education apps or otherwise. That just can’t be right. Thinking through the implications of EFF’s complaint, the ultimate end has to be that Google needs to obtain a permission slip from parents before offering access to Google for Education accounts. Administrators and Google are just not allowed to provision any services otherwise.

And here is where the long game comes in. EFF and its peers induced Google to sign the Pledge all the while understanding that their interpretation would necessarily require a re-write of Google’s business model.  But not only is this sneaky, it’s also ridiculous. By way of analogy, this would be similar to allowing parents an individual say over what textbooks or other curricular materials their children are allowed to access. This would either allow for a total veto by a single parent, or else would require certain students to be frozen out of participating in homework and other activities being performed with a Google for Education app. That may work for Yale students hiding from microaggressions, but it makes no sense to read such a contentious and questionable educational model into Google’s widely-offered apps.

I think a more reasonable interpretation should prevail. The privacy pledge is meant to govern the use of student data while that student is acting as a student — which in the case of Google for Education apps would mean while using said apps. Plenty of other Google apps could be used for educational purposes, but Google is intentionally delineating a sensible dividing line in order to avoid exactly this sort of problem (as well as problems that could arise under other laws directed at student activity, like COPPA, most notably). It is entirely unreasonable to presume that Google, by virtue of its socially desirable behavior of enabling students to have ready access to technology, is thereby prevented from tracking individuals’ behavior on non-Education apps as it chooses to define them.

What is the Harm?

According to EFF, there are two primary problems with Google’s gathering and use of student data: gathering and using individual data in non-Education apps, and gathering and using anonymized and aggregated data in the Education apps. So what is the evil end to which Google uses this non-Education gathered data?

“Google not only collects and stores the vast array of student data described above, but uses it for its own purposes such as improving Google products and serving targeted advertising (within non-Education Google services)”

The horrors! Google wants to use student behavior to improve its services! And yes, I get it, everyone hates ads — I hate ads too — but at some point you need to learn to accept that the wealth of nominally free apps available to every user is underwritten by the ad-sphere. So if Google is using the non-Education behavior of students to gain valuable insights that it can monetize and thereby subsidize its services, so what? This is life in the twenty-first century, and until everyone collectively decides that we prefer to pay for services up front, we had better get used to being tracked and monetized by advertisers.

But as noted above, whether you think Google should or shouldn’t be gathering this data, it seems clear that the data generated from use of non-Education apps doesn’t fall under the Pledge’s purview. Thus, perhaps sensing the problems in its non-Education use argument, EFF also half-heartedly attempts to demonize certain data practices that Google employs in the Education context. In short, Google aggregates and anonymizes the usage data of the Google for Education apps, and, according to EFF, this is a violation of the Pledge:

“Aggregating and anonymizing students’ browsing history does not change the intensely private nature of the data … such that Google should be free to use it[.]”

Again the “harm” is that Google actually wants to improve the Educational apps:  “Google has acknowledged that it collects, maintains, and uses student information via Chrome Sync (in aggregated and anonymized form) for the purpose of improving Google products”

This of course doesn’t violate the Pledge. After all, signatories to the Pledge promise only that they will “[n]ot collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes.” It’s eminently reasonable to include the improvement of the provisioned services as part of an “authorized educational … purpose[.]” And by ensuring that the data is anonymized and aggregated, Google is clearly acknowledging that some limits are appropriate in the education context — that it doesn’t need to collect individual and identifiable personal information for education purposes — but that improving its education products the same way it improves all its products is an educational purpose.

How are the harms enhanced by Chrome Sync? Honestly, it’s not really clear from EFF’s complaint. I believe that the core of EFF’s gripe (at least here) has to do with how the two data gathering activities may be correlated. Google has Chrome Sync enabled by default, so when students sign on at different locations, the Education apps usage is recorded and grouped (still anonymously) for service improvement alongside non-Education use. And the fact that these two data sets are generated side by side creates the potential to track students in their educational capacity by correlating that data with information generated in their non-educational capacity.

Maybe there are flaws in the manner in which the data is anonymized. EFF obviously thinks anonymized data won’t stay anonymized. That is a contentious view, to say the least, but regardless, nothing in the Pledge compels it. More to the point, merely having both data sets does not by itself do anything that clearly violates the Pledge.
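
To make the individual-versus-aggregate distinction concrete, here is a minimal, purely illustrative sketch of how raw usage events might be pseudonymized and collapsed into the kind of aggregate statistics used for product improvement. The field names, the salted-hash scheme, and the sample apps are assumptions for illustration only; this is not a description of Google’s actual pipeline.

```python
# Purely illustrative: pseudonymize per-user events and collapse them into
# aggregate counts. Field names and the salted-hash scheme are hypothetical.
import hashlib
from collections import Counter

SALT = "rotate-me"  # a real system would manage salts/keys far more carefully

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a salted, truncated one-way hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

def aggregate(events: list[dict]) -> dict:
    """Drop individual identity; keep only app-level counts and a user tally."""
    per_app = Counter(e["app"] for e in events)
    distinct_users = len({pseudonymize(e["user_id"]) for e in events})
    return {"events_per_app": dict(per_app), "distinct_users": distinct_users}

events = [
    {"user_id": "student-1", "app": "Docs"},
    {"user_id": "student-1", "app": "Search"},  # the non-Education use at issue
    {"user_id": "student-2", "app": "Docs"},
]
print(aggregate(events))
# -> {'events_per_app': {'Docs': 2, 'Search': 1}, 'distinct_users': 2}
```

Whether aggregation of this sort is actually robust against re-identification is precisely the empirical question EFF raises; the point of the sketch is only that aggregate counts are a different kind of data from the individual browsing histories the Pledge most clearly protects.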

The End Game

So what do groups like EFF actually want? It’s important to consider the social-welfare effects of this approach to privacy, and the context in which it arises. First, the Pledge was overwhelmingly designed for and signed by pure education companies, and not large organizations like Google, Apple, or Microsoft — thus the Pledge itself is ill-suited to a multi-faceted business model. If we follow this complaint to its logical conclusion, a company like Google would face an undesirable choice: On the one hand, it can provide hardware to schools at zero cost or heavily subsidized prices, and also provide a suite of useful educational applications. However, as part of this socially desirable donation, it must also place a virtual invisibility shield around students once they’ve signed into their accounts. From that point on, regardless of what service they use — even non-educational ones — Google is prevented from using any data students generate. At this point, one has to question Google’s incentive to remove huge swaths of the population from its ability to gather data. If Google did nothing but provide the hardware, it could simply leave its free services online as-is, and let schools adopt or not adopt them as they wish (subject of course to extant legislation such as COPPA) — thereby allowing itself to possibly collect even more data on the same students.

On the other hand, if not Google, then surely many other companies would think twice before wading into this quagmire, or, when they do, they might offer severely limited services. For instance, one way of complying with EFF’s view of how the Pledge works would be to shut off access to all non-Education services. Students logged into an education account could then access only the word processing and email services, and would be prevented from accessing YouTube, web search and other services — and consequently would lose out on potentially novel educational options.

EFF goes on to cite numerous FTC enforcement actions and settlements from recent years. But all of the cited examples have one thing in common that the current complaint does not: they all involve violations of § 5 based on explicit statements or representations made by a company to consumers. EFF’s complaint, on the other hand, rests on a particular interpretation of an ambiguous, generally drafted document written without the complicated business practice at issue in mind. What counts as “student information” when a user employs a general purpose machine for both educational and non-educational purposes? The Pledge — at least the sections that EFF relies upon in its complaint — is far from clear and doesn’t cover Google’s behavior in any obvious way.

Of course, the whole complaint presumes that the nature of Google’s services was somehow unfair or deceptive to parents — thus implying that there was at least some material reliance on the Pledge in parental decision making. However, this misses a crucial detail: it is the school administrators who contract with Google for the Chromebooks and Google for Education services, and not the parents or the students.  Then again, maybe EFF doesn’t care and it is, as I suggest above, just interested in a long game whereby it can shoehorn Google’s services into some new sort of privacy regime. This isn’t all that unusual, as we have seen even the White House in other contexts willing to rewrite business practices wholly apart from the realities of privacy “harms.”

But in the end, this approach to privacy is just a very efficient way to discover the lowest common denominator in charity. If they even decide to brave the possible privacy suits, Google and other similarly situated companies will provide the barest access to the most limited services in order to avoid extensive liability from ambiguous pledges. And, perhaps even worse for overall social welfare, using the law to force compliance with voluntarily enacted, ambiguous codes of conduct is a sure-fire way to ensure that there are fewer and more limited codes of conduct in the future.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are The Good, The Bad, and The Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and the Commission this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers, who will have to either find a different way to pay for the popular online services they want or go without them. Already, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only against economic efficiency, but also against the right to free expression that most European countries recognize (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem here. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct, conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning it will also burden small companies that have no real ability to harm consumers by restricting data portability in the first place. Aside from this, multi-homing is ubiquitous in the Internet economy anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually ratify a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—advantages at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright when explaining why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to fall most heavily on these important US companies. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best 😉.

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless such theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination to one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on metrics would want to serve only those who can pay more by charging them a lower price, while charging those who cannot afford it a larger one. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

Last week, the FTC announced its complaint and consent decree with Nomi Technologies for failing to allow consumers to opt out of cell phone tracking while shopping in retail stores. Whatever one thinks about Nomi itself, the FTC’s enforcement action represents another step in the agency’s dubious application of its enforcement authority against deceptive statements.

In response, Geoffrey Manne, Ben Sperry, and Berin Szoka have written a new ICLE White Paper, titled In the Matter of Nomi Technologies, Inc.: The Dark Side of the FTC’s Latest Feel-Good Case.

Nomi Technologies offers retailers an innovative way to observe how customers move through their stores, how often they return, what products they browse and for how long (among other things) by tracking the Wi-Fi addresses broadcast by customers’ mobile phones. This allows stores to do what websites do all the time: tweak their configuration, pricing, purchasing and the like in response to real-time analytics — instead of just eyeballing what works. Nomi anonymized the data it collected so that retailers couldn’t track specific individuals. Recognizing that some customers might still object, even to “anonymized” tracking, Nomi allowed anyone to opt-out of all Nomi tracking on its website.
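
For readers curious about the mechanics, the sketch below shows one common way such a system might pseudonymize device identifiers while honoring an opt-out list. The salt, field names, and opt-out handling are assumptions for illustration, not a description of Nomi’s actual implementation.

```python
# Illustrative only: count repeat store visits without storing raw MAC addresses.
# The salt, field names, and opt-out handling are hypothetical, not Nomi's system.
import hashlib

OPTED_OUT = {"aa:bb:cc:dd:ee:ff"}  # devices whose owners opted out via the website

def record_visit(mac_address: str, store_id: str, salt: str = "per-deployment-salt"):
    """Return a pseudonymized visit record, or None if the device has opted out."""
    mac = mac_address.lower()
    if mac in OPTED_OUT:
        return None  # honor the opt-out: nothing is recorded for this device
    hashed = hashlib.sha256((salt + mac).encode()).hexdigest()
    return {"store": store_id, "device": hashed[:16]}  # truncated hash, no raw MAC

print(record_visit("AA:BB:CC:DD:EE:FF", "store-42"))  # None (opted out)
print(record_visit("11:22:33:44:55:66", "store-42"))  # pseudonymized visit record
```

None of this mechanical detail was in dispute; as discussed below, the FTC’s case turned on the promise of an additional, in-store opt out.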

The FTC, though, seized upon a promise made within Nomi’s privacy policy to provide an additional, in-store opt out and argued that Nomi’s failure to make good on this promise — and/or to notify customers of which stores used the technology — made its privacy policy deceptive. Commissioner Wright dissented, noting that the majority failed to consider evidence showing the promise was not material: the inaccurate statement was not important enough to actually affect consumers’ behavior, because they could opt out on the website anyway. Both Commissioner Wright’s and Commissioner Ohlhausen’s dissents argued that the FTC majority’s enforcement decision in Nomi amounted to prosecutorial overreach, imposing an overly stringent standard of review without any actual indication of consumer harm.

The FTC’s deception authority is supposed to give the agency a means to remedy consumer harms not effectively handled by common law torts and contracts — but it’s not a blank check. The 1983 Deception Policy Statement requires the FTC to demonstrate:

  1. There is a representation, omission or practice that is likely to mislead the consumer;
  2. A consumer’s interpretation of the representation, omission, or practice is considered reasonable under the circumstances; and
  3. The misleading representation, omission, or practice is material (meaning the inaccurate statement was important enough to actually affect consumers’ behavior).

Under the DPS, certain types of claims are treated as presumptively material, although the FTC is always supposed to “consider relevant and competent evidence offered to rebut presumptions of materiality.” The Nomi majority failed to do exactly that in its analysis of the company’s claims, as Commissioner Wright noted in his dissent:

the Commission failed to discharge its commitment to duly consider relevant and competent evidence that squarely rebuts the presumption that Nomi’s failure to implement an additional, retail-level opt out was material to consumers. In other words, the Commission neglects to take into account evidence demonstrating consumers would not “have chosen differently” but for the allegedly deceptive representation.

As we discuss in detail in the white paper, we believe that the Commission committed several additional legal errors in its application of the Deception Policy Statement in Nomi, over and above its failure to adequately weigh exculpatory evidence. Exceeding the legal constraints of the DPS isn’t just a legal problem: in this case, it’s led the FTC to bring an enforcement action that will likely have the very opposite of its intended result, discouraging rather than encouraging further disclosure.

Moreover, as we write in the white paper:

Nomi is the latest in a long string of recent cases in which the FTC has pushed back against both legislative and self-imposed constraints on its discretion. By small increments (unadjudicated consent decrees), but consistently and with apparent purpose, the FTC seems to be reverting to the sweeping conception of its power to police deception and unfairness that led the FTC to a titanic clash with Congress back in 1980.

The Nomi case presents yet another example of the need for FTC process reforms. Those reforms could ensure the FTC focuses on cases that actually make consumers better off. But given the FTC majority’s unwavering dedication to maximizing its discretion, such reforms will likely have to come from Congress.

Find the full white paper here.

Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.

A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.

According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.

In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.

Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.

FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.

In fact, there is general agreement with Commissioner Ohlhausen on her basic points, even among critics like Newman and Grunes. But, as mentioned above, views diverge over whether there are some privacy harms that should nevertheless factor into competition analysis, and on whether there is in fact a data barrier to entry that makes these harms possible.

As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And, because all of the theories of “privacy as an antitrust harm” are essentially predicated on this, they are meritless.

First, data is useful to all industries — this is not some new phenomenon particular to online companies

It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:

  • Macy’s analyzes tens of millions of terabytes of data every day to gain insights from social media and store transactions. Over the past three years, the use of big data analytics alone has helped Macy’s boost its revenue growth by 4 percent annually.
  • Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
  • Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.

And of course there is a host of other uses for data, as well, including security, fraud prevention, product optimization, risk reduction to the insured, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook and Google as having a monopoly on data is silly.

Second, it’s not the amount of data that leads to success but building a better mousetrap

The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.

Moreover, data itself is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such a short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.

Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little (or no) data and they have had a lot of success. Meanwhile, despite its supposed data advantages, Google’s attempts at social networking — Google+ — have never caught up to Facebook in terms of popularity to users (and thus not to advertisers either). And scrappy social network Ello is starting to build a significant base without data collection for advertising at all.

At the same time, it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.

In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.

Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low

Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.

Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, there exist powerful competitors in the marketplaces they compete in:

  • If consumers want to make a purchase, they are more likely to do their research on Amazon than Google.
  • Google flight search has failed to seriously challenge — let alone displace — its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
  • People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
  • Pinterest, one of the most highly valued startups today, is now a serious challenger to traditional search engines when people want to discover new products.
  • With its recent acquisition of the shopping search engine TheFind, and its test-run of a “buy” button, Facebook is also gearing up to become a major competitor in the realm of e-commerce, challenging Amazon.
  • Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.

Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, like with social networking, history still shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines on top of and instead of Google. And don’t forget that Google itself was once an upstart new entrant that replaced once-household names like Yahoo and AltaVista.

Fourth, access to data is not exclusive

Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google’s control over data. But the comparison of data to oil is a false analogy. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way. To use an earlier example, Google knowing my birthday doesn’t limit the ability of Facebook to know my birthday, as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.

This is especially important when discussing data online, where multi-homing is ubiquitous, meaning many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I’m following on Twitter who also use the site for blogging. Using this feature allows WordPress to access your contact list on these major online players.

[Image: WordPress friend-finder feature]

Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.

If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.

Fifth, the data barrier to entry argument does not have workable antitrust remedies

The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?

It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.

On the one hand, as we’ve noted, it’s not clear this would actually accomplish much. If competitors can’t actually make good use of data, simply having more of it isn’t going to change things. At the same time, such a result would reduce the incentive to build data networks to begin with. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had to share their insights? In fact, it may well be that access to these free insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.

Other remedies limiting collection and use of data are not only outside of the normal scope of antitrust remedies, they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.

It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.

If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.