
by Berin Szoka, President, TechFreedom

Josh Wright will doubtless be remembered for transforming how the FTC polices competition. Between finally defining Unfair Methods of Competition (UMC), and his twelve dissents and multiple speeches on competition matters, he re-grounded competition policy in the error-cost framework: weighing not only costs against benefits, but also the likelihood of getting it wrong against the likelihood of getting it right.

Yet Wright may be remembered as much for what he started as for what he finished: reforming the Commission’s Unfair and Deceptive Acts and Practices (UDAP) work. His consumer protection output is relatively slender: four dissents on high-tech matters, plus four brief concurrences and one dissent in more traditional advertising substantiation cases. But together, these offer all the building blocks of an economic, error-cost-based approach to consumer protection. All that remains is for another FTC Commissioner to pick up where Wright left off.

Apple: Unfairness & Cost-Benefit Analysis

In January 2014, Wright issued a blistering, 17-page dissent from the Commission’s decision to bring, and settle, an enforcement action against Apple regarding the design of its app store. Wright dissented not from the conclusion necessarily, but from the methodology by which the Commission arrived there. In essence, he argued for an error-cost approach to unfairness:

The Commission, under the rubric of “unfair acts and practices,” substitutes its own judgment for a private firm’s decisions as to how to design its product to satisfy as many users as possible, and requires a company to revamp an otherwise indisputably legitimate business practice. Given the apparent benefits to some consumers and to competition from Apple’s allegedly unfair practices, I believe the Commission should have conducted a much more robust analysis to determine whether the injury to this small group of consumers justifies the finding of unfairness and the imposition of a remedy.

…. although Apple’s allegedly unfair act or practice has harmed some consumers, I do not believe the Commission has demonstrated the injury is substantial. More importantly, any injury to consumers flowing from Apple’s choice of disclosure and billing practices is outweighed considerably by the benefits to competition and to consumers that flow from the same practice.

The majority insisted that the burden on consumers or Apple from its remedy “is de minimis,” and therefore “it was unnecessary for the Commission to undertake a study of how consumers react to different disclosures before issuing its complaint against Apple, as Commissioner Wright suggests.”

Wright responded: “Apple has apparently determined that most consumers do not want to experience excessive disclosures or to be inconvenienced by having to enter their passwords every time they make a purchase.” In essence, he argued that the FTC should not presume to know better than Apple how to manage the subtle trade-offs between convenience and usability.

Wright was channeling Hayek’s famous quip: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” The last thing the FTC should be doing is designing digital products — even by hovering over Apple’s shoulder.

The Data Broker Report

Wright next took the Commission to task for the lack of economic analysis in its May 2014 report, “Data Brokers: A Call for Transparency and Accountability.” In just four footnotes, Wright extended the analysis of his Apple dissent. For example:

Footnote 85: Commissioner Wright agrees that Congress should consider legislation that would provide for consumer access to the information collected by data brokers. However, he does not believe that at this time there is enough evidence that the benefits to consumers of requiring data brokers to provide them with the ability to opt out of the sharing of all consumer information for marketing purposes outweighs the costs of imposing such a restriction. Finally… he believes that the Commission should engage in a rigorous study of consumer preferences sufficient to establish that consumers would likely benefit from such a portal prior to making such a recommendation.

Footnote 88: Commissioner Wright believes that in enacting statutes such as the Fair Credit Reporting Act, Congress undertook efforts to balance [costs and benefits]. In the instant case, Commissioner Wright is wary of extending FCRA-like coverage to other uses and categories of information without first performing a more robust balancing of the benefits and costs associated with imposing these requirements.

The Internet of Things Report

This January, in a 4-page dissent from the FTC’s staff report on “The Internet of Things: Privacy and Security in a Connected World,” Wright lamented that the report neither represented serious economic analysis of the issues discussed nor synthesized the FTC’s workshop on the topic:

A record that consists of a one-day workshop, its accompanying public comments, and the staff’s impressions of those proceedings, however well-intended, is neither likely to result in a representative sample of viewpoints nor to generate information sufficient to support legislative or policy recommendations.

His attack on the report’s methodology was blistering:

The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs. Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace…. I support the well-established Commission view that companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of “security by design” and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.


Nomi: Deception & Materiality Analysis

In April, Wright turned his analytical artillery from unfairness to deception, long the less controversial half of UDAP. In a five-page dissent, Wright accused the Commission of essentially dispensing with the core limiting principle of the 1983 Deception Policy Statement: materiality. As Wright explained:

The materiality inquiry is critical because the Commission’s construct of “deception” uses materiality as an evidentiary proxy for consumer injury…. Deception causes consumer harm because it influences consumer behavior — that is, the deceptive statement is one that is not merely misleading in the abstract but one that causes consumers to make choices to their detriment that they would not have otherwise made. This essential link between materiality and consumer injury ensures the Commission’s deception authority is employed to deter only conduct that is likely to harm consumers and does not chill business conduct that makes consumers better off.

As in Apple, Wright did not argue that there might not be a role for the FTC; merely that the FTC had failed to justify bringing, let alone settling, an enforcement action without establishing that the key promise at issue — to provide in-store opt-out — was material.

The Chamber Speech: A Call for Economic Analysis

In May, Wright gave a speech to the Chamber of Commerce on “How to Regulate the Internet of Things Without Harming its Future: Some Do’s and Don’ts”:

Perhaps it is because I am an economist who likes to deal with hard data, but when it comes to data and privacy regulation, the tendency to rely upon anecdote to motivate policy is a serious problem. Instead of developing a proper factual record that documents cognizable and actual harms, regulators can sometimes be tempted merely to explore anecdotal and other hypothetical examples and end up just offering speculations about the possibility of harm.

And on privacy in particular:

What I have seen instead is what appears to be a generalized apprehension about the collection and use of data — whether or not the data is actually personally identifiable or sensitive — along with a corresponding, and arguably crippling, fear about the possible misuse of such data.  …. Any sensible approach to regulating the collection and use of data will take into account the risk of abuses that will harm consumers. But those risks must be weighed with as much precision as possible, as is the case with potential consumer benefits, in order to guide sensible policy for data collection and use. The appropriate calibration, of course, turns on our best estimates of how policy changes will actually impact consumers on the margin….

Wright concedes that the “vast majority of work that the Consumer Protection Bureau performs simply does not require significant economic analysis because they involve business practices that create substantial risk of consumer harm but little or nothing in the way of consumer benefits.” Yet he notes that the Internet has made the need for cost-benefit analysis far more acute, at least where conduct is ambiguous as to its effects on consumers, as in Apple, to avoid “squelching innovation and depriving consumers of these benefits.”

The Wrightian Reform Agenda for UDAP Enforcement

Wright left all the building blocks his successor will need to bring “Wrightian” reform to how the Bureau of Consumer Protection works:

  1. Wright’s successor should work to require economic analysis for consent decrees, as Wright proposed in his last major address as a Commissioner. The Bureau of Economics (BE) might not need to issue a statement at all in run-of-the-mill deception cases, but it should certainly have to say something about unfairness cases.
  2. The FTC needs to systematically assess its enforcement process to understand the incentives causing companies to settle UDAP cases nearly every time — resulting in what Chairman Ramirez and Commissioner Brill frequently call the FTC’s “common law of consent decrees.”
  3. As Wright says in his Nomi dissent, “While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint.” This point should be uncontroversial, yet the Commission has never addressed it. Wright’s successor (and the FTC) should, at a minimum, propose a standard for settling cases.
  4. Just as Josh succeeded in getting the FTC to issue a UMC policy statement, his successor should re-assess the FTC’s two UDAP policy statements. Wright’s successor needs to make the case for finally codifying the DPS — and ensuring that the FTC stops bypassing materiality, as in Nomi.
  5. The Commission should develop a rigorous methodology for each of the required elements of unfairness and deception to justify bringing cases (or making report recommendations). This will be a great deal harder than merely attacking the lack of such methodology in dissents.
  6. The FTC has, in recent years, increasingly used reports to make de facto policy — by inventing what Wright calls, in his Chamber speech, “slogans and catchphrases” like “privacy by design,” and then using them as boilerplate requirements for consent decrees; by pressuring companies into adopting the FTC’s best practices; by calling for legislation; and so on. At a minimum, these reports must be grounded in careful economic analysis.
  7. The Commission should apply far greater rigor in setting standards for substantiating claims about health benefits. In two dissents, Genelink et al and HCG Platinum, Wright demolished arguments for a clear, bright line requiring two randomized clinical trials, and made the case for “a more flexible substantiation requirement” instead.

Conclusion: Big Shoes to Fill

It’s a testament to Wright’s analytical clarity that he managed to say so much about consumer protection in so few words. That his UDAP work has received so little attention, relative to his competition work, says just as much about the far greater need for someone to do for consumer protection what Wright did for competition enforcement and policy at the FTC.

Wright’s successor, if she’s going to finish what Wright started, will need something approaching Wright’s sheer intellect, his deep internalization of the error-costs approach, and his knack for brokering bipartisan compromise around major issues — plus the kind of passion for UDAP matters Wright had for competition matters. And, of course, that person needs to be able to continue his legacy on competition matters…

Compared to the difficulty of finding that person, actually implementing these reforms may be the easy part.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are the Good, the Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Council this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not be borne by search engines, social networks, and advertisers, but by consumers, who will ultimately either have to find a different way to pay for the popular online services they want or go without them. Google, for example, has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced with not only economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow the economic growth driven by the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also burden small companies that have no ability to hurt consumers by restricting their users from taking data elsewhere. Moreover, multi-homing is already ubiquitous in the Internet economy. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually ratify a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are now aiming to reduce this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright among the reasons why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere will inevitably fall disproportionately on these important US companies. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.


Near the end of The Good, the Bad and the Ugly, Blondie and Tuco have an exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in doing so are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless such theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.



The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.


If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such metrics would use them only to charge some consumers higher prices, rather than also to offer lower prices to consumers it otherwise could not profitably serve. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to claim that the practice reduces consumer welfare, even if consumer welfare can be divorced from total welfare. Again, the question is one of magnitudes, and it has yet to be considered in detail by privacy advocates.


Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.


Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.

On Wednesday, March 18, our fellow law-and-economics-focused brethren at George Mason’s Law and Economics Center will host a very interesting morning briefing on the intersection of privacy, big data, consumer protection, and antitrust. FTC Commissioner Maureen Ohlhausen will keynote, followed by what looks to be a lively panel discussion. If you are in DC you can join in person, but you can also watch online. More details below.
Please join the LEC in person or online for a morning of lively discussion on this topic. FTC Commissioner Maureen K. Ohlhausen will set the stage by discussing her Antitrust Law Journal article, “Competition, Consumer Protection and The Right [Approach] To Privacy.” A panel discussion on big data and antitrust, which includes some of the leading thinkers on the subject, will follow.
Other featured speakers include:

Allen P. Grunes
Founder, The Konkurrenz Group and Data Competition Institute

Andres Lerner
Executive Vice President, Compass Lexecon

Darren S. Tucker
Partner, Morgan Lewis

Nathan Newman
Director, Economic and Technology Strategies LLC

Moderator: James C. Cooper
Director, Research and Policy, Law & Economics Center

A full agenda is available here.

UPDATE: I’ve been reliably informed that Vint Cerf coined the term “permissionless innovation,” and, thus, that he did so with the sorts of private impediments discussed below in mind rather than government regulation. So consider the title of this post changed to “Permissionless innovation SHOULD not mean ‘no contracts required,'” and I’ll happily accept that my version is the “bastardized” version of the term. Which just means that the original conception was wrong and thank god for disruptive innovation in policy memes!

Can we dispense with the bastardization of the “permissionless innovation” concept (best developed by Adam Thierer) to mean “no contracts required”? I’ve been seeing this more and more, but it’s been around for a while. Some examples from among the innumerable ones out there:

Vint Cerf on net neutrality in 2009:

We believe that the vast numbers of innovative Internet applications over the last decade are a direct consequence of an open and freely accessible Internet. Many now-successful companies have deployed their services on the Internet without the need to negotiate special arrangements with Internet Service Providers, and it’s crucial that future innovators have the same opportunity. We are advocates for “permissionless innovation” that does not impede entrepreneurial enterprise.

Net neutrality is replete with this sort of idea — that any impediment to edge providers (not networks, of course) doing whatever they want to do at a zero price is a threat to innovation.

Chet Kanojia (Aereo CEO) following the Aereo decision:

It is troubling that the Court states in its decision that, ‘to the extent commercial actors or other interested entities may be concerned with the relationship between the development and use of such technologies and the Copyright Act, they are of course free to seek action from Congress.’ (Majority, page 17) That begs the question: Are we moving towards a permission-based system for technology innovation?

At least he puts it in the context of the Court’s suggestion that Congress pass a law, but what he really wants is to not have to ask “permission” of content providers to use their content.

Mike Masnick on copyright in 2010:

But, of course, the problem with all of this is that it goes back to creating permission culture, rather than a culture where people freely create. You won’t be able to use these popular or useful tools to build on the works of others — which, contrary to the claims of today’s copyright defenders, is a key component in almost all creativity you see out there — without first getting permission.

Fair use is, by definition, supposed to be “permissionless.” But the concept is hardly limited to fair use, is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright (see, e.g., Mike Masnick again), which otherwise requires those pernicious licenses (i.e., permission) from others.

The point is, when we talk about permissionless innovation for Tesla, Uber, Airbnb, commercial drones, online data and the like, we’re talking (or should be) about ex ante government restrictions on these things — the “permission” at issue is permission from the government, it’s the “permission” required to get around regulatory roadblocks imposed via rent-seeking and baseless paternalism. As Gordon Crovitz writes, quoting Thierer:

“The central fault line in technology policy debates today can be thought of as ‘the permission question,'” Mr. Thierer writes. “Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations?”

But it isn’t (or shouldn’t be) about private contracts.

Just about all human (commercial) activity requires interaction with others, and that means contracts and licenses. You don’t see anyone complaining about the “permission” required to rent space from a landlord. But that some form of “permission” may be required to use someone else’s creative works or other property (including broadband networks) is no different. And, in fact, it is these sorts of contracts (and, yes, the revenue that may come with them) that facilitate people engaging with other commercial actors to produce things of value in the first place. The same can’t be said of government permission.

Don’t get me wrong – there may be some net welfare-enhancing regulatory limits that might require forms of government permission. But the real concern is the pervasive abuse of these limits, imposed without anything approaching a rigorous welfare determination. There might even be instances where private permission, imposed, say, by a true monopolist, might be problematic.

But this idea that any contractual obligation amounts to a problematic impediment to innovation is absurd, and, in fact, precisely backward. Which is why net neutrality is so misguided. Instead of identifying actual, problematic impediments to innovation, it simply assumes that networks threaten edge innovation, without any corresponding benefit and with such certainty (although no actual evidence) that ex ante common carrier regulations are required.

“Permissionless innovation” is a great phrase and, well developed (as Adam Thierer has done), a useful concept. But its bastardization to justify interference with private contracts is unsupported and pernicious.

The Children’s Online Privacy Protection Act (COPPA) continues to be a hot-button issue for many online businesses and privacy advocates. On November 14, Senator Markey, along with Senator Kirk and Representatives Barton and Rush, introduced the Do Not Track Kids Act of 2013 to amend the statute to include children from 13-15 and add new requirements, like an eraser button. The current COPPA Rule, since the FTC’s recent update went into effect this past summer, requires parental consent before businesses can collect information about children online, including relatively de-identified information like IP addresses and device numbers that allow for targeted advertising.

Often, the debate about COPPA is framed in a way that makes it very difficult to discuss as a policy matter. With the stated purpose of “enhanc[ing] parental involvement in children’s online activities in order to protect children’s privacy,” who can really object? While there is recognition that there are substantial costs to COPPA compliance (including foregone innovation and investment in children’s media), it’s generally taken for granted by all that the Rule is necessary to protect children online. But it has never been clear what COPPA is supposed to help us protect our children from.

Then-Representative Markey’s original speech suggested one possible answer in “protect[ing] children’s safety when they visit and post information on public chat rooms and message boards.” If COPPA is to be understood in this light, the newest COPPA revision from the FTC and the proposed Do Not Track Kids Act of 2013 largely miss the mark. It seems unlikely that proponents worry about children or teens posting their IP address or device numbers online, allowing online predators to look at this information and track them down. Rather, the clear goal animating the updates to COPPA is to “protect” children from online behavioral advertising. Here’s now-Senator Markey’s press statement:

“The speed with which Facebook is pushing teens to share their sensitive, personal information widely and publicly online must spur Congress to act commensurately to put strong privacy protections on the books for teens and parents,” said Senator Markey. “Now is the time to pass the bipartisan Do Not Track Kids Act so that children and teens don’t have their information collected and sold to the highest bidder. Corporations like Facebook should not be profiting from the personal and sensitive information of children and teens, and parents and teens should have the right to control their personal information online.”

The concern about online behavioral advertising could probably be understood in at least three ways, but each of them is flawed.

  1. Creepiness. Some people believe there is something just “creepy” about companies collecting data on consumers, especially when it comes to children and teens. While nearly everyone would agree that surreptitiously collecting data like email addresses or physical addresses without consent is wrong, many would probably prefer to trade data like IP addresses and device numbers for free content (as nearly everyone does every day on the Internet). It is also unclear that COPPA is the answer to this type of problem, even if it could be defined. As Adam Thierer has pointed out, parents are in a much better position than government regulators or even companies to protect their children from privacy practices they don’t like.
  2. Exploitation. Another way to understand the concern is that companies are exploiting consumers by making money off their data without consumers getting any value. But this fundamentally ignores the multi-sided market at play here. Users trade information for a free service, whether it be Facebook, Google, or Twitter. These services then monetize that information by creating profiles and selling that to advertisers. Advertisers then place ads based on that information with the hopes of increasing sales. In the end, though, companies make money only when consumers buy their products. Free content funded by such advertising is likely a win-win-win for everyone involved.
  3. False Consciousness. A third way to understand the concern over behavioral advertising is that corporations can convince consumers to buy things they don’t need or really want through advertising. Much of this is driven by what Jack Calfee called The Fear of Persuasion: many people don’t understand the beneficial effects of advertising in increasing the information available to consumers and, as a result, misdiagnose the role of advertising. Even accepting this false consciousness theory, the difficulty for COPPA is that no one has ever explained why advertising is a harm to children or teens. If anything, online behavioral advertising is less of a harm to teens and children than adults for one simple reason: Children and teens can’t (usually) buy anything! Kids and teens need their parents’ credit cards in order to buy stuff online. This means that parental involvement is already necessary, and has little need of further empowerment by government regulation.

COPPA may have benefits in preserving children’s safety — as Markey once put it — beyond what underlying laws, industry self-regulation and parental involvement can offer. But as we work to update the law, we shouldn’t allow the Rule to be a solution in search of a problem. It is incumbent upon Markey and other supporters of the latest amendment to demonstrate that the amendment will serve to actually protect kids from something they need protecting from. Absent that, the costs very likely outweigh the benefits.

Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But we can’t and shouldn’t pretend like there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something,” without evincing any nuanced understanding of the issues involved.

Frankly, I see this as of a piece with much of the privacy debate that points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think it’s important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies — is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.

I’ll be headed to New Orleans tomorrow to participate in the Federalist Society Faculty Conference and the AALS Annual Meeting.

For those attending and interested, I’ll be speaking at the Fed Soc on privacy and antitrust, and at AALS on Google and antitrust.  Details below.  I hope to see you there!

Federalist Society:

Seven-Minute Presentations of Works in Progress – Part I
Friday, January 4, 5:00 p.m. – 6:00 p.m.
Location: Bacchus Room, Wyndham Riverfront Hotel

  • Prof. Geoffrey Manne, Lewis & Clark School of Law, “Is There a Place for Privacy in Antitrust?”
  • Prof. Zvi Rosen, New York University School of Law, “Discharging Fiduciary Debts in Bankruptcy”
  • Prof. Erin Sheley, George Washington University School of Law, “The Body, the Self, and the Legal Account of Harm”
  • Prof. Scott Shepard, John Marshall Law School, “A Negative Externality by Any Other Name: Using Emissions Caps as Models for Constraining Dead-Weight Costs of Regulation”
  • Moderator: Prof. David Olson, Boston College Law School


Google and Antitrust
Saturday, January 5, 10:30 a.m. – 12:15 p.m.
Location: Newberry, Third Floor, Hilton New Orleans Riverside

  • Moderator: Michael A. Carrier, Rutgers School of Law – Camden
  • Marina L. Lao, Seton Hall University School of Law
  • Geoffrey A. Manne, Lewis & Clark Law School
  • Frank A. Pasquale, Seton Hall University School of Law
  • Mark R. Patterson, Fordham University School of Law
  • Pamela Samuelson, University of California, Berkeley, School of Law

On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases.  Commissioner Ohlhausen issued her first dissent since joining the Commission, and points out the folly and the danger in the Commission’s withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, “legal thinking” apparently amounts to a single 2009 article by Einer Elhauge.  But it turns out Elhauge doesn’t represent the entire current of legal thinking on this issue.  As it happens, Josh Wright and Judge Ginsburg looked at the evidence in 2010 and found no evidence of increased deterrence (of price fixing) from larger fines:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

Commissioner Ohlhausen points out in her dissent that there is no support for the claim that the Policy Statement has led to sub-optimal deterrence and quite sensibly finds no reason for the Commission to withdraw the Policy Statement.  But even more importantly Commissioner Ohlhausen worries about what the Commission’s decision here might portend:

The guidance in the Policy Statement will be replaced by this view: “[T]he Commission withdraws the Policy Statement and will rely instead upon existing law, which provides sufficient guidance on the use of monetary equitable remedies.”  This position could be used to justify a decision to refrain from issuing any guidance whatsoever about how this agency will interpret and exercise its statutory authority on any issue. It also runs counter to the goal of transparency, which is an important factor in ensuring ongoing support for the agency’s mission and activities. In essence, we are moving from clear guidance on disgorgement to virtually no guidance on this important policy issue.

An excellent point.  If the standard for the FTC issuing policy statements is the sufficiency of the guidance provided by existing law, then arguably the FTC need not offer any guidance whatever.

But as we careen toward a more and more active role on the part of the FTC in regulating the collection, use and dissemination of data (i.e., “privacy”), this sets an ominous precedent.  Already the Commission has managed to side-step the courts in establishing its policies on this issue by, well, never going to court.  As Berin Szoka noted in recent Congressional testimony:

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove harm is substantial and outweighs benefits.

This has led Berin and others to suggest — and the chorus will only grow louder — that the FTC clarify the basis for its enforcement decisions and offer clear guidance on its interpretation of the unfairness and deception standards it applies under the rubric of protecting privacy.  Unfortunately, the Commission’s reasoning in this action suggests it might well not see fit to offer any such guidance.