
Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As Former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it’s implemented it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

The lifecycle of a law is a curious one: born to fanfare as a great solution to a great problem, but ultimately doomed to age badly as lawyers seek to shoehorn wholly inappropriate technologies and circumstances into its ambit. The latest chapter in the book of badly aging laws comes to us courtesy of yet another dysfunctional feature of our political system: the Supreme Court nomination and confirmation process.

In 1987, President Reagan nominated Judge Bork for a spot on the US Supreme Court. During the confirmation process following his nomination, a reporter was able to obtain a list of videos he and his family had rented from local video rental stores (you remember those, right?). In response to this invasion of privacy — by a reporter whose intention was to publicize and thereby (in some fashion) embarrass or “expose” Judge Bork — Congress enacted the Video Privacy Protection Act (“VPPA”) in 1988.

In short, the VPPA makes it illegal for a “video tape service provider” to knowingly disclose to third parties any “personally identifiable information” in connection with the viewing habits of a “consumer” who uses its services. Left as written and confined to the scope originally intended for it, the Act seems more or less fine. However, over the last few years, plaintiffs have begun to use the Act as a weapon with which to attack common Internet business models in a manner wholly out of keeping with the drafters’ intent.

And with a decision that promises to be a windfall for hungry plaintiffs’ attorneys everywhere, the First Circuit recently allowed a plaintiff, Alexander Yershov, to make it past a 12(b)(6) motion on a claim that Gannett violated the VPPA with its USA Today Android mobile app.

What’s in a name (or Android ID)?

The app in question allowed Mr. Yershov to view videos without creating an account, providing his personal details, or otherwise subscribing (in the generally accepted sense of the term) to USA Today’s content. What Gannett did do, however, was to provide to Adobe Systems the Android ID and GPS location data associated with Mr. Yershov’s use of the app’s video content.

In interpreting the VPPA in a post-Blockbuster world, the First Circuit panel (which, apropos of nothing, included retired Justice Souter) had to wrestle with whether Mr. Yershov counts as a “subscriber,” and to what extent an Android ID and location information count as “personally identifying information” under the Act. Relying on the possibility that Adobe might be able to infer the identity of the plaintiff given its access to data from other web properties, and given the court’s rather gut-level instinct that an app user is a “subscriber,” the court allowed the plaintiff to survive the 12(b)(6) motion.

The PII point is the more arguable of the two, as the statutory language is somewhat vague. Under the Act, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” On this score, the court decided that GPS data plus an Android ID (or each alone — it wasn’t completely clear) could constitute information protected under the Act (at least for purposes of a 12(b)(6) motion):

The statutory term “personally identifiable information” is awkward and unclear. The definition of that term… adds little clarity beyond training our focus on the question whether the information identifies the person who obtained the video…. Nevertheless, the language reasonably conveys the point that PII is not limited to information that explicitly names a person.

OK (maybe). But where the court goes off the rails is in its determination that an Android ID, GPS data, or a list of videos is, in itself, enough to identify anyone.

It might be reasonable to conclude that Adobe could use that information in combination with other information it collects from yet other third parties (fourth parties?) in order to build up a reliable, personally identifiable profile. But the statute’s language doesn’t hang on such a combination. Instead, the court’s reasoning finds potential liability by reading this exact sort of prohibition into the statute:

Adobe takes this and other information culled from a variety of sources to create user profiles comprised of a given user’s personal information, online behavioral data, and device identifiers… These digital dossiers provide Adobe and its clients with “an intimate look at the different types of materials consumed by the individual” … While there is certainly a point at which the linkage of information to identity becomes too uncertain, or too dependent on too much yet-to-be-done, or unforeseeable detective work, here the linkage, as plausibly alleged, is both firm and readily foreseeable to Gannett.

Despite its hedging about uncertain linkages, the court’s reasoning remains contingent on an awful lot of other moving parts — something found in neither the text of the law nor the legislative history of the Act.

The information sharing identified by the court is in no way the sort of simple disclosure of PII that easily identifies a particular person in the way that, say, Blockbuster Video would have been able to do in 1988 with disclosure of its viewing lists.  Yet the court purports to find a basis for its holding in the abstract nature of the language in the VPPA:

Had Congress intended such a narrow and simple construction [as specifying a precise definition for PII], it would have had no reason to fashion the more abstract formulation contained in the statute.

Again… maybe. Maybe Congress meant to future-proof the provision, and didn’t want the statute construed as being confined to the simple disclosure of name, address, phone number, and so forth. I doubt, though, that it really meant to encompass the sharing of any information that might, at some point, by some unknown third parties be assembled into a profile that, just maybe if you squint at it hard enough, will identify a particular person and their viewing habits.

Passive Subscriptions?

What seems pretty clear, however, is that the court got it wrong when it declared that Mr. Yershov was a “subscriber” to USA Today by virtue of simply downloading an app from the Play Store.

The VPPA prohibits disclosure of a “consumer’s” PII — with “consumer” meaning “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” In this case (as presumably will happen in most future VPPA cases involving free apps and websites), the plaintiff claims that he is a “subscriber” to a “video tape” service.

The court built its view of “subscriber” predominantly on two bases: (1) that you don’t need to actually pay anything to count as a subscriber (with which I agree), and (2) that something about installing an app that can send you push notifications is different enough from frequenting a website that a user, no matter how casual, becomes a “subscriber”:

When opened for the first time, the App presents a screen that seeks the user’s permission for it to “push” or display notifications on the device. After choosing “Yes” or “No,” the user is directed to the App’s main user interface.

The court characterized this connection between USA Today and Yershov as “seamless” — ostensibly because the app facilitates push notifications to the end user.

Thus, simply because it offers an app that can send push notifications to users, and because this app sometimes shows videos, a website or Internet service — in this case, an app portal for a newspaper company — becomes a “video tape service,” offering content to “subscribers.” And by sharing information in a manner that is nowhere mentioned in the statute and that on its own is not capable of actually identifying anyone, the company suddenly becomes subject to what will undoubtedly be an avalanche of lawsuits (at least in the First Circuit).

Preposterous as this may seem on its face, it gets worse. Nothing in the court’s opinion is limited to “apps,” and the “logic” would seem to apply to the general web as well (whether the “seamless” experience is provided by push notifications or some other technology that facilitates tighter interaction with users). But, rest assured, the court believes that

[B]y installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.

Thank goodness it’s “materially” different… although just going by the reasoning in this opinion, I don’t see how that can possibly be true.

What happens when web browsers can enable push notifications between users and servers? Well, I guess we’ll find out soon, because major browsers now support this feature. Further, other technologies — like WebSockets — allow for continuous two-way communication between users and corporate sites. Does this change the calculus? Does it meet the court’s “test”? Either way, the court’s exceedingly vague reasoning provides little guidance (and a whole lot of red meat for lawsuits).
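
To see how thin the technological line really is, here is a minimal, purely illustrative TypeScript sketch (the endpoint and message names are hypothetical, not drawn from the opinion or from Gannett’s actual app) of an ordinary website using standard browser APIs, notification permissions plus a WebSocket channel, to establish just the sort of “seamless” relationship the court found decisive:

```typescript
// Illustrative sketch only: how an ordinary news site could use standard browser
// APIs to create a "seamless," push-capable connection with a casual visitor.
// The endpoint URL and message fields are hypothetical.

// 1. Ask the reader's browser for permission to display notifications,
//    much as the USA Today app did on first launch.
async function enableNotifications(): Promise<boolean> {
  const permission = await Notification.requestPermission();
  return permission === "granted";
}

// 2. Open a persistent, two-way WebSocket channel to the publisher's server,
//    so new stories arrive without the reader asking for them, and viewing
//    data can flow back the other way.
function connectToNewsFeed(): WebSocket {
  const socket = new WebSocket("wss://news.example.com/feed"); // hypothetical endpoint
  socket.onopen = () => {
    // Report what the reader is watching back to the publisher.
    socket.send(JSON.stringify({ event: "video_viewed", videoId: "abc123" }));
  };
  socket.onmessage = (event: MessageEvent) => {
    const item = JSON.parse(event.data as string);
    console.log("Story pushed to reader:", item.headline);
  };
  return socket;
}

enableNotifications().then((granted) => {
  if (granted) connectToNewsFeed();
});
```

If that is all it takes, there is little left to distinguish an app user from a casual web visitor.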

To bolster its view that apps are qualitatively different from web sites with regard to their delivery to consumers, the court asks “[w]hy, after all, did Gannett develop and seek to induce downloading of the App?” I don’t know, because… cell phones?

In fact, this bit of “reasoning” does nothing for the court’s opinion. Gannett undertook development of a web site in the first place because some cross-section of the public was interested in reading news online (and that was certainly the case for any electronic distribution pre-2007). What’s more, consumers have increasingly been moving toward using mobile devices for their online activities. Though it’s a debatable point, apps can often provide a better user experience than that provided by a mobile browser. Regardless, the line between “app” and “web site” is increasingly a blurry one, especially on mobile devices, and with the proliferation of HTML5 and frameworks like Google’s Progressive Web Apps, the line will only grow more indistinct. That Gannett was seeking to provide the public with an app has nothing to do with whether it intended to develop a more “intimate” relationship with mobile app users than it has with web users.

The Eleventh Circuit, at least, understands this. In Ellis v. Cartoon Network, it held that a mere user of an app — without more — could not count as a “subscriber” under the VPPA:

The dictionary definitions of the term “subscriber” we have quoted above have a common thread. And that common thread is that “subscription” involves some type of commitment, relationship, or association (financial or otherwise) between a person and an entity. As one district court succinctly put it: “Subscriptions involve some or [most] of the following [factors]: payment, registration, commitment, delivery, [expressed association,] and/or access to restricted content.”

The Eleventh Circuit’s point is crystal clear, and I’m not sure how the First Circuit failed to appreciate it (particularly since it was the district court below in the Yershov case that the Eleventh Circuit was citing). Instead, the court got tied up in asking whether or not a payment was required to constitute a “subscription.” But that’s wrong. What’s needed is some affirmative step – something more than just downloading an app, and certainly something more than merely accessing a web site.

Without that step — a “commitment, relationship, or association (financial or otherwise) between a person and an entity” — the development of technology that simply offers a different mode of interaction between users and content promises to transform the VPPA into a tremendously powerful weapon in the hands of eager attorneys, and a massive threat to the advertising-based business models that have enabled the growth of the web.

How could this possibly not apply to websites?

In fact, there is no way this opinion won’t be picked up by plaintiffs’ attorneys in suits against web sites that allow ad networks to collect any information on their users. Web sites may not have access to exact GPS data (for now), but they do have access to fairly accurate location data, cookies, and a host of other data about their users. And with browser-based push notifications and other technologies being developed to create what the court calls a “seamless” experience for users, any user of a web site will count as a “subscriber” under the VPPA. The potential damage to the business models that have funded the growth of the Internet is hard to overstate.

There is hope, however.

Hulu faced a similar challenge over the last few years arising out of its collection of viewer data on its platform and the sharing of that data with third-party ad services in order to provide better targeted and, importantly, more user-relevant marketing. Last year it actually won a summary judgment motion on the basis that it had no way of knowing that Facebook (the third party with which it was sharing data) would reassemble the data in order to identify particular users and their viewing habits. Nevertheless, Hulu has previously lost motions on the subscriber and PII issues.

Hulu has, however, previously raised one issue in its filings on which the district court punted, but that could hold the key to putting these abusive litigations to bed.

The VPPA provides a very narrowly written exception to the prohibition on information sharing when such sharing is “incident to the ordinary course of business” of the “video tape service provider.” “Ordinary course of business” in this context means  “debt collection activities, order fulfillment, request processing, and the transfer of ownership.” In one of its motions, Hulu argued that

the section shows that Congress took into account that providers use third parties in their business operations and “‘allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’

The district court didn’t grant Hulu summary judgment on the issue, essentially passing on the question. But in 2014 the Seventh Circuit reviewed a very similar set of circumstances in Sterk v. Redbox and found that the exception applied. In that case Redbox had a business relationship with Stream, a third party that provided Redbox with automated customer service functions. The Seventh Circuit found that information sharing in such a relationship fell within Redbox’s “ordinary course of business”, and so Redbox was entitled to summary judgment on the VPPA claims against it.

This is essentially the same argument that Hulu was making. Third-party ad networks most certainly provide a service to corporations that serve content over the web. Hulu, Gannett, and every other publisher on the web surely could provide their own ad platforms on their own properties. But by doing so they would lose the economic benefits that come from specialization and economies of scale. Thus, working with a third-party ad network pretty clearly fills the same role as the “order fulfillment” and “request processing” functions of a content platform.

The Big Picture

And, stepping back for a moment, it’s important to take in the big picture. The point of the VPPA was to prevent public disclosures that would chill speech or embarrass individuals; the reporter in 1987 set out to expose or embarrass Judge Bork.  This is the situation the VPPA’s drafters had in mind when they wrote the Act. But the VPPA was most emphatically not designed to punish Internet business models — especially of a sort that was largely unknown in 1988 — that serve the interests of consumers.

The 1988 Senate report on the bill, for instance, notes that “[t]he bill permits the disclosure of personally identifiable information under appropriate and clearly defined circumstances. For example… companies may sell mailing lists that do not disclose the actual selections of their customers.”  Moreover, the “[Act] also allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’”

Congress plainly contemplated companies being able to monetize their data. And this just as plainly includes the now-common practice of using automated tracking systems on the web to serve customers highly personalized experiences based on their viewing habits.

Sites that serve targeted advertising aren’t in the business of embarrassing consumers or abusing their information by revealing it publicly. And, most important, nothing in the VPPA declares that information sharing is prohibited if third party partners could theoretically construct a profile of users. The technology to construct these profiles simply didn’t exist in 1988, and there is nothing in the Act or its legislative history to support the idea that the VPPA should be employed against the content platforms that outsource marketing to ad networks.

What would make sense is to actually try to fit modern practice in with the design and intent of the VPPA. If, for instance, third-party ad networks were using the profiles they created to extort, blackmail, embarrass, or otherwise coerce individuals, that practice would certainly fall outside the ordinary course of business, and should be actionable.

But as it stands, much like the TCPA, the VPPA threatens to become a costly technological anachronism. Future courts should take the lead of the Eleventh and Seventh Circuits, and make the law operate in the way it was actually intended. Gannett still has the opportunity to petition for rehearing en banc, and after that for certiorari before the Supreme Court. But the circuit split this presents is the least of our worries. If this issue is not resolved in a way that permits platforms to continue to outsource their marketing efforts as they do today, the effects on innovation could be drastic.

Web platforms — which include much more than just online newspapers — depend upon targeted ads to support their efforts. This applies to mobile apps as well. The “freemium” model has eclipsed the premium model for apps — a fact that reflects the preferences of consumers at large as well as producers. Using the VPPA as a hammer to smash these business models will hurt everyone except, of course, plaintiffs’ attorneys.

by Berin Szoka, President, TechFreedom

Josh Wright will doubtless be remembered for transforming how the FTC polices competition. Between finally defining Unfair Methods of Competition (UMC) and his twelve dissents and multiple speeches about competition matters, he re-grounded competition policy in the error-cost framework: weighing not only costs against benefits, but also the likelihood of getting it wrong against the likelihood of getting it right.
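
Put in stylized terms (my own gloss, not notation Wright himself used), the error-cost framework asks the enforcer to choose the rule that minimizes the sum of expected false-positive costs, expected false-negative costs, and the cost of administering the rule:

```latex
% Stylized error-cost objective (illustrative gloss, not Wright's own notation)
\[
  \min_{\text{rule}} \quad
  \underbrace{p_{\mathrm{FP}}\,C_{\mathrm{FP}}}_{\text{condemning beneficial conduct}}
  \;+\;
  \underbrace{p_{\mathrm{FN}}\,C_{\mathrm{FN}}}_{\text{permitting harmful conduct}}
  \;+\;
  \underbrace{C_{\mathrm{admin}}}_{\text{administering the rule}}
\]
```

His consumer protection writings, surveyed below, are at bottom a demand that the Commission actually try to estimate those probabilities and costs before it acts.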

Yet Wright may be remembered as much for what he started as what he finished: reforming the Commission’s Unfair and Deceptive Acts and Practices (UDAP) work. His consumer protection work is relatively slender: four dissents on high tech matters plus four relatively brief concurrences and one dissent on more traditional advertising substantiation cases. But together, these offer all the building blocks of an economic, error-cost-based approach to consumer protection. All that remains is for another FTC Commissioner to pick up where Wright left off.

Apple: Unfairness & Cost-Benefit Analysis

In January 2014, Wright issued a blistering, 17-page dissent from the Commission’s decision to bring, and settle, an enforcement action against Apple regarding the design of its app store. Wright dissented not from the conclusion necessarily, but from the methodology by which the Commission arrived at it. In essence, he argued for an error-cost approach to unfairness:

The Commission, under the rubric of “unfair acts and practices,” substitutes its own judgment for a private firm’s decisions as to how to design its product to satisfy as many users as possible, and requires a company to revamp an otherwise indisputably legitimate business practice. Given the apparent benefits to some consumers and to competition from Apple’s allegedly unfair practices, I believe the Commission should have conducted a much more robust analysis to determine whether the injury to this small group of consumers justifies the finding of unfairness and the imposition of a remedy.

…. although Apple’s allegedly unfair act or practice has harmed some consumers, I do not believe the Commission has demonstrated the injury is substantial. More importantly, any injury to consumers flowing from Apple’s choice of disclosure and billing practices is outweighed considerably by the benefits to competition and to consumers that flow from the same practice.

The majority insisted that the burden on consumers or Apple from its remedy “is de minimis,” and therefore “it was unnecessary for the Commission to undertake a study of how consumers react to different disclosures before issuing its complaint against Apple, as Commissioner Wright suggests.”

Wright responded: “Apple has apparently determined that most consumers do not want to experience excessive disclosures or to be inconvenienced by having to enter their passwords every time they make a purchase.” In essence, he argued that the FTC should not presume to know better than Apple how to manage the subtle trade-offs between convenience and usability.

Wright was channeling Hayek’s famous quip: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” The last thing the FTC should be doing is designing digital products — even by hovering over Apple’s shoulder.

The Data Broker Report

Wright next took the Commission to task for the lack of economic analysis in its May 2014 report, “Data Brokers: A Call for Transparency and Accountability.” In just four footnotes, Wright extended his analysis of Apple. For example:

Footnote 85: Commissioner Wright agrees that Congress should consider legislation that would provide for consumer access to the information collected by data brokers. However, he does not believe that at this time there is enough evidence that the benefits to consumers of requiring data brokers to provide them with the ability to opt out of the sharing of all consumer information for marketing purposes outweighs the costs of imposing such a restriction. Finally… he believes that the Commission should engage in a rigorous study of consumer preferences sufficient to establish that consumers would likely benefit from such a portal prior to making such a recommendation.

Footnote 88: Commissioner Wright believes that in enacting statutes such as the Fair Credit Reporting Act, Congress undertook efforts to balance [costs and benefits]. In the instant case, Commissioner Wright is wary of extending FCRA-like coverage to other uses and categories of information without first performing a more robust balancing of the benefits and costs associated with imposing these requirements

The Internet of Things Report

This January, in a 4-page dissent from the FTC’s staff report on “The Internet of Things: Privacy and Security in a Connected World,” Wright lamented that the report neither represented serious economic analysis of the issues discussed nor synthesized the FTC’s workshop on the topic:

A record that consists of a one-day workshop, its accompanying public comments, and the staff’s impressions of those proceedings, however well-intended, is neither likely to result in a representative sample of viewpoints nor to generate information sufficient to support legislative or policy recommendations.

His attack on the report’s methodology was blistering:

The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs. Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace…. I support the well-established Commission view that companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of “security by design” and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.

Ouch.

Nomi: Deception & Materiality Analysis

In April, Wright turned his analytical artillery from unfairness to deception, long the less controversial half of UDAP. In a five-page dissent, Wright accused the Commission of essentially dispensing with the core limiting principle of the 1983 Deception Policy Statement: materiality. As Wright explained:

The materiality inquiry is critical because the Commission’s construct of “deception” uses materiality as an evidentiary proxy for consumer injury…. Deception causes consumer harm because it influences consumer behavior — that is, the deceptive statement is one that is not merely misleading in the abstract but one that causes consumers to make choices to their detriment that they would not have otherwise made. This essential link between materiality and consumer injury ensures the Commission’s deception authority is employed to deter only conduct that is likely to harm consumers and does not chill business conduct that makes consumers better off.

As in Apple, Wright did not argue that there might not be a role for the FTC; merely that the FTC had failed to justify bringing, let alone settling, an enforcement action without establishing that the key promise at issue — to provide in-store opt-out — was material.

The Chamber Speech: A Call for Economic Analysis

In May, Wright gave a speech to the Chamber of Commerce on “How to Regulate the Internet of Things Without Harming its Future: Some Do’s and Don’ts”:

Perhaps it is because I am an economist who likes to deal with hard data, but when it comes to data and privacy regulation, the tendency to rely upon anecdote to motivate policy is a serious problem. Instead of developing a proper factual record that documents cognizable and actual harms, regulators can sometimes be tempted merely to explore anecdotal and other hypothetical examples and end up just offering speculations about the possibility of harm.

And on privacy in particular:

What I have seen instead is what appears to be a generalized apprehension about the collection and use of data — whether or not the data is actually personally identifiable or sensitive — along with a corresponding, and arguably crippling, fear about the possible misuse of such data.  …. Any sensible approach to regulating the collection and use of data will take into account the risk of abuses that will harm consumers. But those risks must be weighed with as much precision as possible, as is the case with potential consumer benefits, in order to guide sensible policy for data collection and use. The appropriate calibration, of course, turns on our best estimates of how policy changes will actually impact consumers on the margin….

Wright concedes that the “vast majority of work that the Consumer Protection Bureau performs simply does not require significant economic analysis because they involve business practices that create substantial risk of consumer harm but little or nothing in the way of consumer benefits.” Yet he notes that the Internet has made the need for cost-benefit analysis far more acute, at least where conduct is ambiguous as to its effects on consumers, as in Apple, to avoid “squelching innovation and depriving consumers of these benefits.”

The Wrightian Reform Agenda for UDAP Enforcement

Wright left all the building blocks his successor will need to bring “Wrightian” reform to how the Bureau of Consumer Protection works:

  1. Wright’s successor should work to require economic analysis for consent decrees, as Wright proposed in his last major address as a Commissioner. BE might not need to issue a statement at all in run-of-the-mill deception cases, but it should certainly have to say something about unfairness cases.
  2. The FTC needs to systematically assess its enforcement process to understand the incentives causing companies to settle UDAP cases nearly every time — resulting in what Chairman Ramirez and Commissioner Brill frequently call the FTC’s “common law of consent decrees.”
  3. As Wright says in his Nomi dissent, “While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint.” This point should be uncontroversial, yet the Commission has never addressed it. Wright’s successor (and the FTC) should, at a minimum, propose a standard for settling cases.
  4. Just as Josh succeeded in getting the FTC to issue a UMC policy statement, his successor should re-assess the FTC’s two UDAP policy statements. Wright’s successor needs to make the case for finally codifying the Deception Policy Statement — and ensuring that the FTC stops bypassing materiality, as in Nomi.
  5. The Commission should develop a rigorous methodology for each of the required elements of unfairness and deception to justify bringing cases (or making report recommendations). This will be a great deal harder than merely attacking the lack of such methodology in dissents.
  6. The FTC has, in recent years, increasingly used reports to make de facto policy — by inventing what Wright calls, in his Chamber speech, “slogans and catchphrases” like “privacy by design,” and then using them as boilerplate requirements for consent decrees; by pressuring companies into adopting the FTC’s best practices; by calling for legislation; and so on. At a minimum, these reports must be grounded in careful economic analysis.
  7. The Commission should apply far greater rigor in setting standards for substantiating claims about health benefits. In two dissents, Genelink et al and HCG Platinum, Wright demolished arguments for a clear, bright line requiring two randomized clinical trials, and made the case for “a more flexible substantiation requirement” instead.

Conclusion: Big Shoes to Fill

It’s a testament to Wright’s analytical clarity that he managed to say so much about consumer protection in so few words. That his UDAP work has received so little attention, relative to his competition work, says just as much about the far greater need for someone to do for consumer protection what Wright did for competition enforcement and policy at the FTC.

Wright’s successor, if she’s going to finish what Wright started, will need something approaching Wright’s sheer intellect, his deep internalization of the error-costs approach, and his knack for brokering bipartisan compromise around major issues — plus the kind of passion for UDAP matters Wright had for competition matters. And, of course, that person needs to be able to continue his legacy on competition matters…

Compared to the difficulty of finding that person, actually implementing these reforms may be the easy part.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.


In a previous Truth on the Market blog post, I noted that the FTC recently revised its “advertising substantiation” policy in a highly problematic manner. In particular, in a number of recent enforcement actions, an FTC majority has taken the position that it will deem advertising claims “deceptive” unless they are supported by two randomized controlled trials (RCTs), and (in the case of food and drug supplements) will require companies to obtain prior U.S. Food and Drug Administration (FDA) approval for future advertising claims. As I explained in a Heritage Foundation Legal Memorandum, these and other new burdens “may deter firms from investing in new health-related product improvements, in which event consumers who are denied new and beneficial products (as well as useful information about the attributes of current products) will be the losers. Competition will also suffer as businesses shy away from informational advertising that rewards the highest quality current products and encourages firms to compete on the basis of quality. Furthermore, the broad scope of these requirements is in tension with the constitutional prohibition on restricting commercial speech no more than is necessary to satisfy legitimate statutory purposes.” (Notably, Commissioner Maureen Ohlhausen has argued against categorically imposing a two-RCT requirement in all cases, explaining that “[i]f we demand too high a level of substantiation in pursuit of certainty, we risk losing the benefits to consumers of having access to information about emerging areas of science and the corresponding pressure on firms to compete on the health features of their products.” Commissioner Joshua Wright has also opined “that a reflexive approach in requiring two RCTs as fencing-in relief might not always be in the best interest of consumers.”)

In a January 30, 2015 decision, POM Wonderful, LLC v. FTC, the D.C. Circuit took an initial step that may help rein in FTC enthusiasm for imposing a “two RCTs” requirement on future advertising by a firm. The FTC ruled in 2013 that POM Wonderful, a producer and seller of pomegranate products, violated the FTC Act by making advertisements that suggested POM products could treat, prevent, or reduce heart disease, prostate cancer, and erectile dysfunction. According to the FTC, the ads were false and misleading because POM lacked valid and adequate scientific evidence to substantiate its claims. (The FTC determined that scientific findings cited by POM, based on over $35 million of pomegranate-related research, had not been supported by subsequent studies.) The FTC entered a cease-and-desist order that barred POM from making future disease claims (claims that its products treat, prevent, or reduce a disease) about its products without “competent and reliable” scientific evidence. Specifically, the FTC’s order required that such future claims be supported by at least two RCTs. (Notably, Commissioner Ohlhausen disagreed with the majority’s view that two RCTs were warranted; she would have required only one RCT, viewed in light of other available scientific evidence.)

POM appealed to the D.C. Circuit, which unanimously held that there was no basis for setting aside the FTC’s finding that many of POM’s ads made false or misleading claims; that there was no First Amendment protection for deceptive advertising; and that requiring an RCT was not too onerous and did not violate the First Amendment.  The court concluded that “the [FTC] injunctive order’s requirement of some RCT substantiation for disease claims directly advances, and is not more extensive than necessary to serve, the interest in preventing misleading commercial speech”, consistent with the test for evaluating commercial speech enunciated by the Supreme Court in Central Hudson.  The court, however, also held that “a categorical floor of two RCTs for any and all disease claims . . . fails Central Hudson scrutiny”.  The court stressed that the FTC “fails to demonstrate how such a rigid remedial rule bears the requisite ‘reasonable fit’ with the interest in preventing deceptive speech.”  Significantly, the court also enunciated a strong policy justification, rooted in First Amendment commercial speech concerns, for precluding a categorical “two RCTs” rule:

“Requiring additional RCTs without adequate justification exacts considerable costs, and not just in terms of the substantial resources often necessary to design and conduct a properly randomized and controlled human clinical trial.  If there is a categorical bar against claims about the disease-related benefits of a food product or dietary supplement in the absence of two RCTs, consumers may be denied useful, truthful information about products with a demonstrated capacity to treat or prevent serious disease.  That would subvert rather than promote the objectives of the commercial speech doctrine.”

Accordingly, the court modified the FTC’s order to require that POM possess at least one RCT in support of future health-related advertising claims.  Assuming that the D.C. Circuit’s POM decision is not appealed and remains in force, future advertisers investigated by the FTC will have stronger grounds to resist FTC efforts to impose “two or more RCT” requirements as part of a decree.

This is just a small step in badly needed reforms, however. Even a single RCT is unnecessarily onerous in many market settings (and, in my view, ignores the teachings of Central Hudson). More broadly, as I have previously argued, the FTC should rethink its entire approach and issue new advertising substantiation guidelines stating that the FTC: (1) will seek to restrict commercial speech to the smallest extent possible, consistent with fraud prevention; (2) will apply strict cost-benefit analysis in investigating advertising claims and framing remedies in advertising substantiation cases; (3) will apply a reasonableness standard in such cases, consistent with general guidance found in a 1983 FTC policy statement; (4) will not require that clinical studies be conducted in order to substantiate advertising claims; (5) will not require that the FDA or any other agency be involved in approving or reviewing advertising claims; and (6) will avoid excessive “fencing in” relief that extends well beyond the ambit of the alleged harm associated with statements that the FTC deems misleading. Enactment of such guidelines may be a long-term project, requiring a change in Commission thinking, but it is well worth pursuing, in order to advance both free commercial speech and consumer welfare.

The Children’s Online Privacy Protection Act (COPPA) continues to be a hot-button issue for many online businesses and privacy advocates. On November 14, Senator Markey, along with Senator Kirk and Representatives Barton and Rush, introduced the Do Not Track Kids Act of 2013 to amend the statute to cover children aged 13 to 15 and add new requirements, like an eraser button. The current COPPA Rule, since the FTC’s recent update went into effect this past summer, requires parental consent before businesses can collect information about children online, including relatively de-identified information like IP addresses and device numbers that allow for targeted advertising.

Often, the debate about COPPA is framed in a way that makes it very difficult to discuss as a policy matter. With the stated purpose of “enhanc[ing] parental involvement in children’s online activities in order to protect children’s privacy,” who can really object? While there is recognition that there are substantial costs to COPPA compliance (including foregone innovation and investment in children’s media), it’s generally taken for granted by all that the Rule is necessary to protect children online. But it has never been clear what COPPA is supposed to help us protect our children from.

Then-Representative Markey’s original speech suggested one possible answer in “protect[ing] children’s safety when they visit and post information on public chat rooms and message boards.” If COPPA is to be understood in this light, the newest COPPA revision from the FTC and the proposed Do Not Track Kids Act of 2013 largely miss the mark. It seems unlikely that proponents worry about children or teens posting their IP address or device numbers online, allowing online predators to look at this information and track them down. Rather, the clear goal animating the updates to COPPA is to “protect” children from online behavioral advertising. Here’s now-Senator Markey’s press statement:

“The speed with which Facebook is pushing teens to share their sensitive, personal information widely and publicly online must spur Congress to act commensurately to put strong privacy protections on the books for teens and parents,” said Senator Markey. “Now is the time to pass the bipartisan Do Not Track Kids Act so that children and teens don’t have their information collected and sold to the highest bidder. Corporations like Facebook should not be profiting from the personal and sensitive information of children and teens, and parents and teens should have the right to control their personal information online.”

The concern about online behavioral advertising could probably be understood in at least three ways, but each of them is flawed.

  1. Creepiness. Some people believe there is something just “creepy” about companies collecting data on consumers, especially when it comes to children and teens. While nearly everyone would agree that surreptitiously collecting data like email addresses or physical addresses without consent is wrong, many would probably prefer to trade data like IP addresses and device numbers for free content (as nearly everyone does every day on the Internet). It is also unclear that COPPA is the answer to this type of problem, even if it could be defined. As Adam Thierer has pointed out, parents are in a much better position than government regulators or even companies to protect their children from privacy practices they don’t like.
  2. Exploitation. Another way to understand the concern is that companies are exploiting consumers by making money off their data without consumers getting any value in return. But this fundamentally ignores the multi-sided market at play here. Users trade information for a free service, whether it be Facebook, Google, or Twitter. These services then monetize that information by creating profiles and selling them to advertisers. Advertisers then place ads based on that information with the hopes of increasing sales. In the end, though, companies make money only when consumers buy their products. Free content funded by such advertising is likely a win-win-win for everyone involved.
  3. False Consciousness. A third way to understand the concern over behavioral advertising is that corporations can convince consumers to buy things they don’t need or really want through advertising. Much of this is driven by what Jack Calfee called The Fear of Persuasion: many people don’t understand the beneficial effects of advertising in increasing the information available to consumers and, as a result, misdiagnose the role of advertising. Even accepting this false consciousness theory, the difficulty for COPPA is that no one has ever explained why advertising is a harm to children or teens. If anything, online behavioral advertising is less of a harm to teens and children than adults for one simple reason: Children and teens can’t (usually) buy anything! Kids and teens need their parents’ credit cards in order to buy stuff online. This means that parental involvement is already necessary, and has little need of further empowerment by government regulation.

COPPA may have benefits in preserving children’s safety — as Markey once put it — beyond what underlying laws, industry self-regulation and parental involvement can offer. But as we work to update the law, we shouldn’t allow the Rule to be a solution in search of a problem. It is incumbent upon Markey and other supporters of the latest amendment to demonstrate that the amendment will serve to actually protect kids from something they need protecting from. Absent that, the costs very likely outweigh the benefits.

Critics of Google have argued that users overvalue Google’s services in relation to the data they give away. One breathtaking headline asked Who Would Pay $5,000 to Use Google?, suggesting that Google and its advertisers can make as much as $5,000 off of individuals whose data they track. Scholars, such as Nathan Newman, have used this to argue that Google exploits its users through data extraction. But the question remains: how good of a deal is Google? My contention is that Google’s value to most consumers far surpasses the value supposedly extracted from them in data.

First off, it is unlikely that Google and its advertisers make anywhere close to $5,000 off the average user. Only very high-volume online purchasers who consistently click through online ads are likely to be anywhere near that valuable. Nonetheless, it is true that Google and its advertisers must be making money, or else Google would be charging users for its services.

PrivacyFix, a popular extension for Google Chrome, calculates your worth to Google based upon the number of searches you have done. Far from $5,000, my total comes in at only $58.66 (and only $10.74 for Facebook). Now, I might not be the highest-volume searcher out there. My colleague Geoffrey Manne states that he is worth $125.18 on Google (and $10.74 for Facebook). But I use Google search every day for work in tech policy, along with Google Docs, Google Calendar, and Gmail (both my private email and work emails)… for FREE!*

The value of all of these services to me, or even just Google search alone, easily surpasses the value of my data attributed to Google. This is likely true for the vast majority of other users, as well. While not a perfect analogue, there are paid specialized search options out there (familiar to lawyers) that do little tracking and are not ad-supported: Westlaw, Lexis, and Bloomberg. But the prices for using these services are considerably higher than zero:

[Chart: legal search costs]

Can you imagine having to pay anywhere near $14 per search on Google? Or a subscription that costs $450 per user per month, like some firms pay for Bloomberg? It may be the case that the costs per search are significantly lower for Google than for specialized legal searches (though Google is increasingly used by young lawyers as more cases become available). But the “price” of viewing a targeted ad is a much lower psychic burden for most people than paying even just a few cents per month for an ad-free experience. For instance, consumers almost always choose free apps over the 99-cent alternative without ads.
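
A rough back-of-the-envelope comparison makes the point. The search volume below is my own hypothetical assumption; the per-search and subscription figures are the ones quoted above:

```latex
% Hypothetical usage assumption: about 10 searches per working day, roughly 2,500 per year
\[
  2{,}500 \ \text{searches/year} \times \$14/\text{search} \approx \$35{,}000/\text{year},
  \qquad
  \$450/\text{month} \times 12 \ \text{months} = \$5{,}400/\text{year}.
\]
```

Either figure exceeds even the $5,000 headline number, and both are orders of magnitude above the $58.66 PrivacyFix estimate.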

Maybe the real question about Google is: Great Deal or Greatest Deal?

* Otherwise known as unpriced, for those who know there’s no such thing as a free lunch.

Last week the New York Times ran an article, “Building the Next Facebook a Tough Task in Europe,” by Eric Pfanner, discussing the lack of major high-tech innovation in Europe. Pfanner discusses the importance of such investment, and then speculates on the reason for the lack of such innovation. The ultimate conclusion is that there is a lack of venture capital in Europe for various cultural and historical reasons. This explanation, of course, makes no sense. Capital is geographically mobile, and if European tech start-ups were a profitable investment that Europeans were afraid to bankroll, American investors would be on the next plane.

Here is a better explanation. In the name of “privacy,” the EU greatly restricts the use of consumer online information. Josh Lerner has a recent paper, “The Impact of Privacy Policy Changes on Venture Capital Investment in Online Advertising Companies” (based in part on the work of Avi Goldfarb and Catherine E. Tucker, “Privacy Regulation and Online Advertising”), finding that this restriction on the use of information is a large part of the explanation for the lack of tech investment in Europe. Tom Lenard and I have written extensively about the costs of privacy regulation (for example, here), and this is just another example of these costs, although the costs are much greater in Europe than they are here (so far).

A new rule kicks in today requiring airlines to include all taxes and mandatory fees in their advertised fares. The rule, part of a broader “passengers’ bill of rights”-type regulation promulgated by the Department of Transportation, is being sold as a pro-consumer mandate: It purportedly protects consumers from the sticker shock that results when they learn that the true price of a flight, due to taxes and mandatory fees, is much higher than the advertised price.

But how consumer-friendly is this rule?  Won’t it be easier to raise taxes and fees when they aren’t presented as a line item, when consumers aren’t “startled” to see the exorbitant amount they’re paying for government services?  Value-added taxes (VATs), which tax the incremental value added at each stage of production and are generally included in the posted price for an item, have proven easier to raise than sales taxes, which are added at the register.  That’s because the latter are more visible so that increases are more likely to generate political opposition.  While VATs are common throughout Europe, they’re virtually non-existent in the United States, in part because we Americans have recognized the important role “tax sticker shock” plays in creating political accountability.

Consumer advocates, nevertheless, are lauding the new Department of Transportation rule.  They don’t seem to realize that higher taxes are bad for consumers and that taxes are more likely to rise when the government can hide them.  They also seem to care little about consumer sovereignty.  Don’t consumers have a right to know how much they’re paying to have scads of Homeland Security officers bark orders at them and gawk at their privates?

 

By Berin Szoka, Geoffrey Manne & Ryan Radia

As has become customary with just about every new product announcement by Google these days, the company's introduction on Tuesday of its new "Search, plus Your World" (SPYW) program, which aims to incorporate a user's Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate has obscured what should be the two key threshold questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content; and (2) How might antitrust restrain Google in conditioning participation in this program in the future?

The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it’s not even clear Facebook and Twitter really want to be included or what their price and conditions for doing so would be. So in short, it’s hard to see what there is to argue about yet.

Let’s consider both questions in turn.

Should Google Have Included Other Services Prior to SPYW’s Launch?

Google says it’s happy to add non-Google content to SPYW but, as Google fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:

Facebook and Twitter and other services, basically, their terms of service don’t allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service,… Of course, going forward, if others were willing to change, we’d look at designing things to see how it would work.

In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:

“To start with, we would have a conversation with them,” Schmidt said, about settling any differences.

I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google’s regular crawling, allowed by both Twitter and Facebook, was a form of “automated conversation” giving Google material it could use.

“Anything we do with companies like that, it’s always better to have a conversation,” Schmidt said.

MG Siegler calls this “doublespeak” and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn’t need permission to include public data in SPYW:

Both Twitter and Facebook have data that is available to the public. It’s data that Google crawls. It’s data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.

It’s not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the “People and Places” box, for example.

It’s certainly true that if you search Google for “site:twitter.com” or “site:facebook.com,” you’ll get billions of search results from publicly-available Facebook and Twitter pages, and that Google already has some friend connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn’t available in real-time, and the private, social connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a “data firehose” so that Google could display tweets in real-time under the “personalized search” program for public social information that SPYW builds on top of. But Twitter ended the deal last May for reasons neither company has explained.

At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn’t whether Google could have included this data in SPYW, but whether it needed to. If Google’s engineers and executives decided that the incorporation of this limited data would present an inconsistent user experience or otherwise diminish its uniquely new social search experience, it’s hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with issues surrounding the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it’s difficult to see how the exclusion of non-Google+ social media from SPYW could raise the costs of Google’s rivals, result in anticompetitive foreclosure, retard rivals’ incentives for innovation, or otherwise result in anticompetitive effects (as required to establish an antitrust claim).

Further, it’s easy to see why Google’s lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for “scraping” a different type of social content, user reviews, most notably by Yelp’s CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it’s not obvious where to draw the line between content Google has a duty to include without “making excuses” about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of “damned if you do, damned if you don’t.” It seems only natural for Google to be gun-shy about “scraping” other services’ public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a “conversation.”

And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user’s friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.
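To make the coordination point concrete, here is a minimal sketch of what fetching a user's protected feed would involve. The endpoint, token, and function below are hypothetical placeholders, not any actual Twitter or Facebook API; the point is simply that non-public content is reachable only with a credential the platform itself must issue after the user logs in and grants access.

```python
import requests

def fetch_protected_feed(api_base, user_access_token):
    """Fetch a user's private feed from a hypothetical social-network API.

    The access token can only be obtained through an authorization flow the
    platform controls: the user logs into the social network, and the network
    chooses whether to issue the searching party a credential. No amount of
    ordinary public crawling substitutes for that grant.
    """
    resp = requests.get(
        f"{api_base}/me/protected_feed",   # placeholder endpoint, illustration only
        headers={"Authorization": f"Bearer {user_access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Without a platform-issued token, the call below simply fails, which is the
# practical meaning of needing "permission" and an ongoing technical deal:
# feed = fetch_protected_feed("https://api.example-social.com", token_from_oauth_flow)
```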

Now, if Twitter truly wants Google to feature tweets in Google’s personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter’s criticism of its SPYW launch last night with a short Google+ statement:

We are a bit surprised by Twitter’s comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content results from “personalized search” results].

Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar deal allowing Bing users to receive Twitter results. If Twitter really is playing hardball, Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform. Rather, it’s simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there’s nothing illegal about that.

Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users), and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine “to rule them all“? Or would Facebook prefer to continue developing “social search” in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web’s hottest property).

Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using various aggregators and syncing tools (Start Google+, for example) to unify social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, staunching any potential defectors from its network onto Google+ by keeping them engaged with both, with a kind of “Facebook primacy” ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users’ primary “home” as they effectively import their Facebook data into SPYW via their Google+ accounts (such as through Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forgo visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.

Amidst all the antitrust hand-wringing over SPYW and Google’s decision to “go it alone” for now, it’s worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet’s worth about the issue. It’s simply not clear that Google’s rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn’t be Google. If this all sounds speculative, it is—and that’s precisely the point. No one really knows. So, again, what’s to argue about on Day 3 of the new social search paradigm?

The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a “conversation” about greater cooperation. Google hasn’t made clear on what terms it would include content from other social media platforms. So it’s at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:

  1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
  2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
  3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.

Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the “essential facilities” doctrine, under which a firm may have a duty to offer its facilities to competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is “at or near the outer boundary of § 2 liability.” Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the “essential facility” in question and the competitor seeking access. Although the assumption is neither warranted nor sufficient (circumstances change, of course, and merely “profitable” is not the same thing as “best available use of a resource”), the Court in Aspen Skiing seems to have been swayed by the view that the access in question was otherwise profitable for the company that was denying it. Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2009), it seems there is no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom’s free ebook, The Next Digital Decade).

So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would “reasonableness” be defined?

First, this issue parallels claims made more generally about Google’s supposed “search bias.” As Josh Wright has said about those claims, “[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.” Supposing (for the moment) that the second point could be established, it’s hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google’s “organic” search results—would actually “render their efforts to compete for distribution uneconomical,” which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook.

Moreover, by these same measures, it’s clear that Facebook and Twitter don’t need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it’s difficult to know from what even vaguely relevant market they could possibly be foreclosed by their absence from SPYW results. Does SPYW potentially help Google+, to Facebook’s detriment? Yes. Just as Facebook’s deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors’ expense.

After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft’s exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.

Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. “Reasonable” is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define the concept. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.

In my series of three posts (here, here and here) drawn from my empirical study on search bias, I have examined whether search bias exists and, if so, how frequently it occurs.  This, the final post in the series, assesses the results of the study (as well as the Edelman & Lockwood (E&L) study to which it responds) to determine whether the own-content bias I’ve identified is in fact consistent with anticompetitive foreclosure or is otherwise sufficient to warrant antitrust intervention.

As I’ve repeatedly emphasized, while I refer to differences among search engines’ rankings of their own or affiliated content as “bias,” without more these differences do not imply anticompetitive conduct.  It is wholly unsurprising and indeed consistent with vigorous competition among engines that differentiation emerges with respect to algorithms.  However, it is especially important to note that the theories of anticompetitive foreclosure raised by Google’s rivals involve very specific claims about these differences.  Properly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.  Unfortunately for search engine critics, their theories fail on both counts.  The observed own-content bias appears neither to be extensive enough to prevent rivals from gaining access to distribution nor does it appear to target Google’s rivals; rather, it seems to be a natural result of intense competition between search engines and of significant benefit to consumers.

Vertical foreclosure arguments are premised upon the notion that rivals are excluded with sufficient frequency and intensity as to render their efforts to compete for distribution uneconomical.  Yet the empirical results simply do not indicate that market conditions are in fact conducive to the types of harmful exclusion contemplated by application of the antitrust laws.  Rather, the evidence indicates that (1) the absolute level of search engine “bias” is extremely low, and (2) “bias” is not a function of market power, but an effective strategy that has arisen as a result of serious competition and innovation between and by search engines.  The first finding undermines competitive foreclosure arguments on their own terms, that is, even if there were no pro-consumer justifications for the integration of Google content with Google search results.  The second finding, even more importantly, reveals that the evolution of consumer preferences for more sophisticated and useful search results has driven rival search engines to satisfy that demand.  Both Bing and Google have shifted toward these results, rendering the complained-of conduct equivalent to satisfying the standard of care in the industry–not restraining competition.

A significant lack of search bias emerges in the representative sample of queries.  This result is entirely unsurprising, given that bias is relatively infrequent even in E&L’s sample of queries specifically designed to identify maximum bias.  In the representative sample, the total percentage of queries for which Google references its own content when rivals do not is even lower—only about 8%—meaning that Google favors its own content far less often than critics have suggested.  This fact is crucial and highly problematic for search engine critics, as their burden in articulating a cognizable antitrust harm includes not only demonstrating that bias exists, but further that it is actually competitively harmful.  As I’ve discussed, bias alone is simply not sufficient to demonstrate any prima facie anticompetitive harm as it is far more often procompetitive or competitively neutral than actively harmful.  Moreover, given that bias occurs in less than 10% of queries run on Google, anticompetitive exclusion arguments appear unsustainable.
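For concreteness, here is a rough sketch of the tabulation behind a figure like that 8% (the data and code are illustrative only, not the study's): for each sampled query, record whether each engine's results page references that engine's own or affiliated content, then compute the share of queries where Google does and its rival does not.

```python
# Illustrative (made-up) records: for each query, does the engine's first results
# page reference the engine's own or affiliated content?
sample = [
    {"query": "driving directions", "google_own": True,  "bing_own": True},
    {"query": "web email login",    "google_own": True,  "bing_own": False},
    {"query": "cheap flights",      "google_own": False, "bing_own": False},
    # ... a representative sample would contain thousands of real user queries
]

google_only = sum(1 for r in sample if r["google_own"] and not r["bing_own"])
share = google_only / len(sample)
print(f"Share of queries where only Google references its own content: {share:.0%}")
```

On a measure of this kind, a share under 10% is the basis for the claim that own-content references are too infrequent to support a foreclosure story.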

Indeed, theories of vertical foreclosure find virtually zero empirical support in the data.  Moreover, it appears that, rather than being a function of monopolistic abuse of power, search bias has emerged as an efficient competitive strategy, allowing search engines to differentiate their products in ways that benefit consumers.  I find that when search engines do reference their own content on their search results pages, it is generally unlikely that another engine will reference this same content.  However, the fact that both this percentage and the absolute level of own-content inclusion are similar across engines indicates that this practice is not a function of market power (or its abuse), but is rather an industry standard.  In fact, despite conducting a much smaller percentage of total consumer searches, Bing is consistently more biased than Google, illustrating that the benefits search engines enjoy from integrating their own content into results are not necessarily a function of search engine size or volume of queries.  These results are consistent with a business practice that is efficient and in significant tension with arguments that such integration is designed to facilitate competitive foreclosure.

By now everyone is probably aware of the “tracking” of certain cellphones (Sprint, iPhone, T-Mobile, AT&T, and perhaps others) by a company called Carrier IQ.  There are lots of discussions available; a good summary is on one of my favorite websites, Lifehacker; another is here, from CNET. Apparently the program gathers lots of anonymous data, mainly for the purpose of helping carriers improve their service. Nonetheless, there are lawsuits and calls for the FTC to investigate.

Aside from the fact that the data is used only to improve service, it is also useful to ask just what people are afraid of.  Clearly the phone companies already have access to SMS messages if they want it, since these go through the phone system anyway.  Moreover, of course, no person actually sees the data even when it is collected.  The fear is perhaps that “… marketers can use that data to sell you more stuff or send targeted ads…” (from the Lifehacker site), but even if so, so what?  If apps are using data to try to sell you stuff that they think you want, what is the harm? If you do want it, then the app has done you a service.  If you don’t want it, then you don’t buy it.  Ads tailored to your behavior are likely to be more useful than ads randomly assigned.

The Lifehacker story does use phrases like “freak people out” and “scary” and “creepy.”  But except for the possibility of being sold stuff, the story never explains what is harmful about the behavior.  As I have said before, I think the basic problem is that people cannot understand the notion that something is known but no person knows it.  If some server somewhere knows where your phone has been, so what?

The end result of this episode will probably be somewhat worse phone service.