Archives For privacy

UPDATE: I’ve been reliably informed that Vint Cerf coined the term “permissionless innovation” and, thus, that he did so with the sorts of private impediments discussed below in mind, rather than government regulation. So consider the title of this post changed to “Permissionless innovation SHOULD not mean ‘no contracts required,’” and I’ll happily accept that my version is the “bastardized” version of the term. Which just means that the original conception was wrong, and thank god for disruptive innovation in policy memes!

Can we dispense with the bastardization of the “permissionless innovation” concept (best developed by Adam Thierer) to mean “no contracts required”? I’ve been seeing this more and more, but it’s been around for a while. Some examples from among the innumerable ones out there:

Vint Cerf on net neutrality in 2009:

We believe that the vast numbers of innovative Internet applications over the last decade are a direct consequence of an open and freely accessible Internet. Many now-successful companies have deployed their services on the Internet without the need to negotiate special arrangements with Internet Service Providers, and it’s crucial that future innovators have the same opportunity. We are advocates for “permissionless innovation” that does not impede entrepreneurial enterprise.

Net neutrality is replete with this sort of idea — that any impediment to edge providers (not networks, of course) doing whatever they want to do at a zero price is a threat to innovation.

Chet Kanojia (Aereo CEO) following the Aereo decision:

It is troubling that the Court states in its decision that, ‘to the extent commercial actors or other interested entities may be concerned with the relationship between the development and use of such technologies and the Copyright Act, they are of course free to seek action from Congress.’ (Majority, page 17) That begs the question: Are we moving towards a permission-based system for technology innovation?

At least he puts it in the context of the Court’s suggestion that Congress pass a law, but what he really wants is to not have to ask “permission” of content providers to use their content.

Mike Masnick on copyright in 2010:

But, of course, the problem with all of this is that it goes back to creating permission culture, rather than a culture where people freely create. You won’t be able to use these popular or useful tools to build on the works of others — which, contrary to the claims of today’s copyright defenders, is a key component in almost all creativity you see out there — without first getting permission.

Fair use is, by definition, supposed to be “permissionless.” But the concept is hardly limited to fair use: it is used to justify unlimited expansion of fair use, and advocates extend it to nearly all of copyright (see, e.g., Mike Masnick again), which otherwise requires those pernicious licenses (i.e., permission) from others.

The point is, when we talk about permissionless innovation for Tesla, Uber, Airbnb, commercial drones, online data and the like, we’re talking (or should be) about ex ante government restrictions on these things — the “permission” at issue is permission from the government; it’s the “permission” required to get around regulatory roadblocks imposed via rent-seeking and baseless paternalism. As Gordon Crovitz writes, quoting Thierer:

“The central fault line in technology policy debates today can be thought of as ‘the permission question,’” Mr. Thierer writes. “Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations?”

But it isn’t (or shouldn’t be) about private contracts.

Just about all human (commercial) activity requires interaction with others, and that means contracts and licenses. You don’t see anyone complaining about the “permission” required to rent space from a landlord. But that some form of “permission” may be required to use someone else’s creative works or other property (including broadband networks) is no different. And, in fact, it is these sorts of contracts (and, yes, the revenue that may come with them) that facilitate people engaging with other commercial actors to produce things of value in the first place. The same can’t be said of government permission.

Don’t get me wrong – there may be some net welfare-enhancing regulatory limits that might require forms of government permission. But the real concern is the pervasive abuse of these limits, imposed without anything approaching a rigorous welfare determination. There might even be instances where private permission, imposed, say, by a true monopolist, might be problematic.

But this idea that any contractual obligation amounts to a problematic impediment to innovation is absurd and, in fact, precisely backward. Which is why net neutrality is so misguided. Instead of identifying actual, problematic impediments to innovation, it simply assumes, with certainty (although no actual evidence), that networks threaten edge innovation without any corresponding benefit, and thus that ex ante common carrier regulations are required.

“Permissionless innovation” is a great phrase and, well developed (as Adam Thierer has done), a useful concept. But its bastardization to justify interference with private contracts is unsupported and pernicious.

The Children’s Online Privacy Protection Act (COPPA) continues to be a hot-button issue for many online businesses and privacy advocates. On November 14, Senator Markey, along with Senator Kirk and Representatives Barton and Rush, introduced the Do Not Track Kids Act of 2013 to amend the statute to cover children from 13 to 15 and add new requirements, like an eraser button. Since the FTC’s recent update went into effect this past summer, the COPPA Rule requires parental consent before businesses can collect information about children online, including relatively de-identified information, like IP addresses and device numbers, that allows for targeted advertising.

Often, the debate about COPPA is framed in a way that makes it very difficult to discuss as a policy matter. With the stated purpose of “enhanc[ing] parental involvement in children’s online activities in order to protect children’s privacy,” who can really object? While there is recognition that there are substantial costs to COPPA compliance (including foregone innovation and investment in children’s media), it’s generally taken for granted by all that the Rule is necessary to protect children online. But it has never been clear what COPPA is supposed to help us protect our children from.

Then-Representative Markey’s original speech suggested one possible answer in “protect[ing] children’s safety when they visit and post information on public chat rooms and message boards.” If COPPA is to be understood in this light, the newest COPPA revision from the FTC and the proposed Do Not Track Kids Act of 2013 largely miss the mark. It seems unlikely that proponents worry about children or teens posting their IP addresses or device numbers online, allowing online predators to look at this information and track them down. Rather, the clear goal animating the updates to COPPA is to “protect” children from online behavioral advertising. Here’s now-Senator Markey’s press statement:

“The speed with which Facebook is pushing teens to share their sensitive, personal information widely and publicly online must spur Congress to act commensurately to put strong privacy protections on the books for teens and parents,” said Senator Markey. “Now is the time to pass the bipartisan Do Not Track Kids Act so that children and teens don’t have their information collected and sold to the highest bidder. Corporations like Facebook should not be profiting from the personal and sensitive information of children and teens, and parents and teens should have the right to control their personal information online.”

The concern about online behavioral advertising could probably be understood in at least three ways, but each of them is flawed.

  1. Creepiness. Some people believe there is something just “creepy” about companies collecting data on consumers, especially when it comes to children and teens. While nearly everyone would agree that surreptitiously collecting data like email addresses or physical addresses without consent is wrong, many would probably prefer to trade data like IP addresses and device numbers for free content (as nearly everyone does every day on the Internet). It is also unclear that COPPA is the answer to this type of problem, even if it could be defined. As Adam Thierer has pointed out, parents are in a much better position than government regulators or even companies to protect their children from privacy practices they don’t like.
  2. Exploitation. Another way to understand the concern is that companies are exploiting consumers by making money off their data without consumers getting any value. But this fundamentally ignores the multi-sided market at play here. Users trade information for a free service, whether it be Facebook, Google, or Twitter. These services then monetize that information by creating profiles and selling them to advertisers. Advertisers then place ads based on that information with the hopes of increasing sales. In the end, though, companies make money only when consumers buy their products. Free content funded by such advertising is likely a win-win-win for everyone involved.
  3. False Consciousness. A third way to understand the concern over behavioral advertising is that corporations can convince consumers to buy things they don’t need or really want through advertising. Much of this is driven by what Jack Calfee called The Fear of Persuasion: many people don’t understand the beneficial effects of advertising in increasing the information available to consumers and, as a result, misdiagnose the role of advertising. Even accepting this false consciousness theory, the difficulty for COPPA is that no one has ever explained why advertising is a harm to children or teens. If anything, online behavioral advertising is less of a harm to teens and children than adults for one simple reason: Children and teens can’t (usually) buy anything! Kids and teens need their parents’ credit cards in order to buy stuff online. This means that parental involvement is already necessary, and has little need of further empowerment by government regulation.

COPPA may have benefits in preserving children’s safety — as Markey once put it — beyond what underlying laws, industry self-regulation and parental involvement can offer. But as we work to update the law, we shouldn’t allow the Rule to be a solution in search of a problem. It is incumbent upon Markey and other supporters of the latest amendment to demonstrate that the amendment will serve to actually protect kids from something they need protecting from. Absent that, the costs very likely outweigh the benefits.

Like most libertarians, I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information-gathering programs is worrying. But we can’t and shouldn’t pretend that there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something” without evincing any nuanced understanding of the issues involved.

Frankly, I see this as of a piece with much of the privacy debate, which points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically, most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think it’s important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies — is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.

I’ll be headed to New Orleans tomorrow to participate in the Federalist Society Faculty Conference and the AALS Annual Meeting.

For those attending and interested, I’ll be speaking at the Fed Soc on privacy and antitrust, and at AALS on Google and antitrust.  Details below.  I hope to see you there!

Federalist Society:

Seven-Minute Presentations of Works in Progress – Part I
Friday, January 4, 5:00 p.m. – 6:00 p.m.
Location: Bacchus Room, Wyndham Riverfront Hotel

  • Prof. Geoffrey Manne, Lewis & Clark School of Law, “Is There a Place for Privacy in Antitrust?”
  • Prof. Zvi Rosen, New York University School of Law, “Discharging Fiduciary Debts in Bankruptcy”
  • Prof. Erin Sheley, George Washington University School of Law, “The Body, the Self, and the Legal Account of Harm”
  • Prof. Scott Shepard, John Marshall Law School, “A Negative Externality by Any Other Name: Using Emissions Caps as Models for Constraining Dead-Weight Costs of Regulation”
  • Moderator: Prof. David Olson, Boston College Law School

AALS:

Google and Antitrust
Saturday, January 5, 10:30 a.m. – 12:15 p.m.
Location: Newberry, Third Floor, Hilton New Orleans Riverside

  • Moderator: Michael A. Carrier, Rutgers School of Law – Camden
  • Marina L. Lao, Seton Hall University School of Law
  • Geoffrey A. Manne, Lewis & Clark Law School
  • Frank A. Pasquale, Seton Hall University School of Law
  • Mark R. Patterson, Fordham University School of Law
  • Pamela Samuelson, University of California, Berkeley, School of Law

On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases.  Commissioner Ohlhausen issued her first dissent since joining the Commission, pointing out the folly and the danger in the Commission’s withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, “legal thinking” apparently amounts to a single 2009 article by Einer Elhauge.  But it turns out Einer doesn’t represent the entire current of legal thinking on this issue.  As it happens, Josh Wright and Judge Ginsburg looked at the evidence in 2010 and found no evidence of increased deterrence (of price fixing) from larger fines:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

Commissioner Ohlhausen points out in her dissent that there is no support for the claim that the Policy Statement has led to sub-optimal deterrence, and she quite sensibly finds no reason for the Commission to withdraw the Policy Statement.  But, even more importantly, Commissioner Ohlhausen worries about what the Commission’s decision here might portend:

The guidance in the Policy Statement will be replaced by this view: “[T]he Commission withdraws the Policy Statement and will rely instead upon existing law, which provides sufficient guidance on the use of monetary equitable remedies.”  This position could be used to justify a decision to refrain from issuing any guidance whatsoever about how this agency will interpret and exercise its statutory authority on any issue. It also runs counter to the goal of transparency, which is an important factor in ensuring ongoing support for the agency’s mission and activities. In essence, we are moving from clear guidance on disgorgement to virtually no guidance on this important policy issue.

An excellent point.  If the standard for the FTC issuing policy statements is the sufficiency of the guidance provided by existing law, then arguably the FTC need not offer any guidance whatever.

But as we careen toward a more and more active role on the part of the FTC in regulating the collection, use and dissemination of data (i.e., “privacy”), this sets an ominous precedent.  Already the Commission has managed to side-step the courts in establishing its policies on this issue by, well, never going to court.  As Berin Szoka noted in recent Congressional testimony:

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove harm is substantial and outweighs benefits.

This has led Berin and others to suggest — and the chorus will only grow louder — that the FTC clarify the basis for its enforcement decisions and offer clear guidance on its interpretation of the unfairness and deception standards it applies under the rubric of protecting privacy.  Unfortunately, the Commission’s reasoning in this action suggests it might well not see fit to offer any such guidance.

Last week the New York Times ran an article, “Building the Next Facebook a Tough Task in Europe,” by Eric Pfanner, discussing the lack of major high-tech innovation in Europe.  Pfanner discusses the importance of such investment and then speculates on the reason for the lack of such innovation.  The ultimate conclusion is that there is a lack of venture capital in Europe for various cultural and historical reasons.  This explanation, of course, makes no sense.  Capital is geographically mobile, and if European tech startups were a profitable investment that Europeans were afraid to bankroll, American investors would be on the next plane.

Here is a better explanation.  In the name of “privacy,” the EU greatly restricts the use of consumers’ online information.  Josh Lerner has a recent paper, “The Impact of Privacy Policy Changes on Venture Capital Investment in Online Advertising Companies” (based in part on the work of Avi Goldfarb and Catherine E. Tucker, “Privacy Regulation and Online Advertising”), finding that this restriction on the use of information is a large part of the explanation for the lack of tech investment in Europe.  Tom Lenard and I have written extensively about the costs of privacy regulation (for example, here), and this is just another example of these costs, although the costs are much greater in Europe than they are here (so far).

Today at 11AM PT I will be participating in the live webcast “This Week in Law” along with TechFreedom Senior Adjunct Fellow Larry Downes. Denise Howell will be hosting, and we will also be joined by fellow participant Evan Brown. This week we will be discussing various topics in tech policy, including Senator Al Franken’s lambasting of Facebook and Google, the newly opened antitrust investigation of Motorola Mobility by the European Commission, and the continued problem of spectrum crunch.

This Week in Law is recorded live every Friday at 11:00am PT/2:00pm ET and covers topics primarily in law, technology, and public policy. You do not have to register; just follow this link at 11:00am PT/2:00pm ET to watch.

Privacy Interview

Paul H. Rubin —  27 January 2012

I was recently interviewed about privacy for the BBC News Magazine by Kate Dailey.  Here is the interview:


Could Google’s data hoarding be good for you?

By Kate Dailey, BBC News Magazine

Google’s announcement that it is now tracking users’ web movements has upset privacy advocates. But consider what you get in return for the information.

With the news that Google is to merge data collected from its many platforms – including YouTube, Gmail and Blogger – privacy advocates say the company will have more information than it should. Even before this change, web users had too little control over their online information, they say.

“Your data is out there,” says Jeff Blevins, an associate professor of communications law and policy at Iowa State University.

“It’s really blind to us. We don’t know what information they have and how they’re using it, and we have no right to access it.”

Web companies use browsing behaviour to paint consumers into boxes, making assumptions about their identities and targeting ads at them. Sometimes users can opt out. But often they are tracked without even knowing it.

Risk and reward

But one economist says concerns about privacy are misguided – and that having more information online is better than having less.

Users are richly compensated for their personal information, says Paul Rubin, a professor of economics at Emory University in Atlanta. In exchange for it, he says, they receive a free and useful internet.

“It makes the internet work much better, in many dimensions.

“If you and I search on the same topic, we may have different interests; if the results are tailored to me and tailored to you, that’s a better experience.”

When the data is used to sell ads, the ads we get are tailored to things we might like, and the profits can work in our favour.

“Sure, Google makes some money, but they use that money to give away all kinds of stuff, like Gmail,” says Mr Rubin.

“My life is on Google,” he says, referring to the calendars, documents and other services Google provides. “It needs to be funded somehow.”

Avoiding fraud

Counterintuitively, having more information available online could better protect consumers from fraud, Mr Rubin says.

A consumer seeking a new credit agreement, for example, currently has to provide information found in the public record, such as current and previous addresses.

Thieves with only an incomplete set of information – say, your name and social security number – can often access those answers.

But with more information online, a clearer picture of who that social security number really belongs to emerges, making it easier for online verification systems to ask more relevant questions, such as recent purchase history.

“The other thing people worry about is ID theft and fraud, but with more information that’s available, it’s easy to verify someone’s identity,” he says.

The information companies collect does not form a personal dossier so much as a collection of data points and assumptions about each user based on their web history. It is kept separate from a name, face, or address.

And as Business Insider pointed out, those Google assumptions can often miss the mark – incorrectly classifying users based on the data available.

That is in part because only computers are handling the sensitive information collected online, Mr Rubin notes.

“People have a notion that if something is known about them somebody knows it,” he says. “In fact, there’s a huge amount of stuff that’s only known by computers.”

He says reputable companies do a good job of making sure that data stays on the servers and out of human reach.

A data stereotype of an individual’s online shopping behaviour can make it easier to flag when that behaviour is out of the ordinary, for instance.

‘No protections’

Privacy experts worry that the risks of having too much personal information online far exceed the potential rewards.

“At the moment in the US, there are almost no protections,” says Lorrie Cranor, associate professor of computer science and engineering and public policy at Carnegie Mellon University.

“It would be good to have some baselines established – certain types of data uses that can’t be done. To really make it illegal for companies to go and sell this info to your employer or your insurance company, for instance,” she says.

Social media records can be subpoenaed in legal cases, she said. In 2010, Google sacked an engineer accused of inappropriately accessing Gmail accounts to spy on people.

Currently, it is difficult to determine whether Europe’s strong privacy laws are being enforced, says Jonathan Mayer, fellow at the Center for Internet and Society at Stanford University.

He is part of the World Wide Web Consortium Tracking Protection Working Group, which is drafting rules for what data can be collected, and how, across the web.

“The harm for the moment does not seem to be some particular economic injury that people are out in the wild suffering, but the principle of ‘would you hand your web browsing to a stranger’,” he says.

When it comes to privacy protection, he says he would prefer to err on the side of caution.

“It doesn’t seem to me that we should have to wait for the very bad things that could happen before we let users take control of their data,” he says.

Privacy in Europe

Paul H. Rubin —  24 January 2012

The EU is apparently thinking of adopting common and highly restrictive privacy standards, which would make the use of information by firms much more difficult and would require, for example, that data be retained only as long as necessary.  This is touted as pro-consumer legislation.  However, the effects would be profoundly anti-consumer.  For one thing, ads would be much less targeted, and so consumers would get less valuable ads and would not learn as much about valuable products and services aimed at their interests.  For another, fraud and identity theft would become more common, as sellers could not use stored information to verify identity.  Finally, the costs of doing business would increase, and so we would expect to see fewer innovations aimed at the European market, and some sellers might avoid that market entirely.

By now everyone is probably aware of the “tracking” of certain cellphones (Sprint, iPhone, T-Mobile, AT&T, and perhaps others) by a company called Carrier IQ.  There are lots of discussions available; a good summary is on one of my favorite websites, Lifehacker; another is here, from CNET.  Apparently the program gathers lots of anonymous data, mainly for the purpose of helping carriers improve their service.  Nonetheless, there are lawsuits and calls for the FTC to investigate.

Aside from the fact that the data is used only to improve service, it is also useful to ask just what people are afraid of.  Clearly the phone companies already have access to SMS messages if they want it, since these go through the phone system anyway.  Moreover, of course, no person would see the data even if it were somehow collected.  The fear is perhaps that “… marketers can use that data to sell you more stuff or send targeted ads…” (from the Lifehacker site), but even if so, so what?  If apps are using data to try to sell you stuff that they think you want, what is the harm?  If you do want it, then the app has done you a service.  If you don’t want it, then you don’t buy it.  Ads tailored to your behavior are likely to be more useful than ads randomly assigned.

The Lifehacker story does use phrases like “freak people out” and “scary” and “creepy.”  But except for the possibility of being sold stuff, the story never explains what is harmful about the behavior.  As I have said before, I think the basic problem is that people cannot grasp the notion that something can be known without any person knowing it.  If some server somewhere knows where your phone has been, so what?

The end result of this episode will probably be somewhat worse phone service.