
On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases. Commissioner Ohlhausen issued her first dissent since joining the Commission, pointing out the folly and the danger of the Commission's withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, "legal thinking" apparently amounts to a single 2009 article by Einer Elhauge. But it turns out Einer doesn't represent the entire current of legal thinking on this issue. As it happens, Josh Wright and Judge Ginsburg looked at the evidence in 2010 and found no evidence of increased deterrence (of price fixing) from larger fines:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

Commissioner Ohlhausen points out in her dissent that there is no support for the claim that the Policy Statement has led to sub-optimal deterrence, and quite sensibly finds no reason for the Commission to withdraw the Policy Statement. But even more importantly, Commissioner Ohlhausen worries about what the Commission's decision here might portend:

The guidance in the Policy Statement will be replaced by this view: “[T]he Commission withdraws the Policy Statement and will rely instead upon existing law, which provides sufficient guidance on the use of monetary equitable remedies.”  This position could be used to justify a decision to refrain from issuing any guidance whatsoever about how this agency will interpret and exercise its statutory authority on any issue. It also runs counter to the goal of transparency, which is an important factor in ensuring ongoing support for the agency’s mission and activities. In essence, we are moving from clear guidance on disgorgement to virtually no guidance on this important policy issue.

An excellent point.  If the standard for the FTC issuing policy statements is the sufficiency of the guidance provided by existing law, then arguably the FTC need not offer any guidance whatever.

But as we careen toward an ever more active role for the FTC in regulating the collection, use, and dissemination of data (i.e., "privacy"), this sets an ominous precedent. Already the Commission has managed to sidestep the courts in establishing its policies on this issue by, well, never going to court. As Berin Szoka noted in recent Congressional testimony:

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove harm is substantial and outweighs benefits.

This has led Berin and others to suggest — and the chorus will only grow louder — that the FTC clarify the basis for its enforcement decisions and offer clear guidance on its interpretation of the unfairness and deception standards it applies under the rubric of protecting privacy. Unfortunately, the Commission's reasoning in this action suggests it might well not see fit to offer any such guidance.

As I mentioned in my previous post, there is a strong effort to regulate the use of information on the web in the name of "privacy." The basic tradeoff that drives the web is that firms use information for advertising and other purposes, and in return consumers get lots of things for free. Google alone offers about 40 free services, including the original search engine, Gmail, Maps, and the increasingly popular Android operating system for mobile devices. Facebook is another set of free services. There are hundreds of others, all ultimately funded by advertising and the use of information. Any effort to regulate information is going to change the terms on which these services are offered.

To justify regulation, two conditions must be met. First, there must be some market failure. Second, there must be at least an expectation that the benefits of the proposed regulation will outweigh the costs. In a market economy, we generally put the burden of proof on those proposing regulation, since the default assumption is that markets provide net benefits. Proponents of regulating the use of information on the internet have met neither of these burdens.

One main justification for regulation is that people do not want to be tracked. I discussed this issue in my previous post. Let me just add that, while people express a desire not to be tracked, in practice they seem quite willing to trade information for other services. The other issue is identity theft — the possibility that information will be misused for illegitimate purposes. Tom Lenard and I have written extensively about this issue. The bottom line, however, is that consumers are not liable for much, if any, of the costs of identity theft, and since firms must bear these costs, there is no obvious market failure.

With respect to the second condition, there has been virtually no effort to undertake any cost-benefit analysis of the proposed regulations. If there were such an analysis, however, it is unlikely that the regulations would prove cost-justified, since the benefits of the free services are huge and the costs are at most small. While it is conceivable that some tweaking would pass a cost-benefit test, it is very unlikely that any regulation that could get through the political process and then be administered by an agency such as the FTC would in fact pass this test. Moreover, the proposed regulations, such as a "do not track" list or a shift from opt-out to opt-in, are well beyond "tweaking" and might fundamentally change the terms of the tradeoff.

The bottom line is this:  Privacy advocates act as if privacy is free.  But increased privacy means reduced use of information, and no one has shown that altering the terms of this tradeoff would be beneficial to consumers.

Privacy and Tracking

Paul H. Rubin — 12 March 2011

First I would like to thank Geoff Manne for inviting me to join this blog.  I know most of my fellow bloggers and it is a group I am proud to be associated with.

For my first few posts I am going to write about privacy.  This is a hot topic.  Senators McCain and Kerry are floating a privacy bill, and the FTC is also looking at privacy. I have written a lot about privacy (mostly with Tom Lenard of the Technology Policy Institute, where I am a senior fellow).

The issue of the day is “tracking.”  There are several proposals for “do not track” legislation and polls show that consumers do not want to be tracked.

The entire fear of being tracked is based on an illusion. It is a deep illusion, difficult or impossible to eliminate, but still an illusion. People are uncomfortable with the idea that someone knows what they are doing. (It is "creepy.") But in fact no person knows what you are doing, even if you are being tracked. Only a machine knows.

As humans, we have difficulty understanding that something can be "known" but nonetheless not known by anyone. We do not understand that we can be "tracked" but that no one is tracking us. That is, data on our searches may exist on a server somewhere so that the server "knows" it, but no human knows it. We don't intuitively grasp this concept because it is entirely alien to our evolved intelligence.

In my most recent paper (with Michael Hammock, coming out in Competition Policy International) we cite two books by Clifford Nass (C. Nass & C. Yen, The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships (2010), and B. Reeves & C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (1996, 2002)). Nass and his coauthors show that people automatically treat intelligent machines like other people. For example, people asked to fill out a questionnaire about the quality of a computer rate the machine higher if they fill out the form on the computer being rated than if they do so on another computer — they don't want to hurt the computer's feelings. Privacy is like that — people can't adapt to the notion that a machine knows something. They assume (probably unconsciously) that if something is known then a person knows it, and this is why they do not like being tracked.

One final point about tracking.  Even if you are tracked, the purpose is to find out what you want and sell it to you.  Selling people things they want is the essence of the market economy, and if tracking does a better job of this, then it is helping the market function better, and also helping consumers get products that are a better fit.  Why should this make anyone mad?

Chris Hoofnagle, writing at the TAP blog about Facebook's comprehensive privacy options ("To opt out of full disclosure of most information, it is necessary to click through more than 50 privacy buttons, which then require choosing among a total of more than 170 options."), claims that:

This approach is brilliant. The company can appease regulators with this approach (e.g. Facebook’s Elliot Schrage is quoted as saying, “We have tried to offer the most comprehensive and detailed controls and comprehensive and detailed information about them.”), and at the same time appear to be giving consumers the maximum number of options.

But this approach is manipulative and is based upon a well-known problem in behavioral economics known as the “paradox of choice.”

Too much choice can make decisions more difficult, and once made, those choices tend to be regretted.

But most importantly, too much choice causes paralysis. This is the genius of the Facebook approach: give consumer too much choice, and they will 1) take poor choices, thereby increasing revelation of personal information and higher ROI or 2) take no choice, with the same result. In any case, the fault is the consumer’s, because they were given a choice!

Of all the policy claims made on behalf of behavioral economics, the one that says there is value in suppressing available choices is one of the most pernicious — and absurd. First, the problem may be "well-known," but it is not, in fact, well-established. Citing one (famous) study purporting to find that decisions become more difficult when decision-makers are confronted with a wider range of choices is not compelling when the full range of studies demonstrates a "mean effect size of virtually zero." In other words, on average, more choice has no discernible effect on decision-making.

But there is more — and it is what proponents of this canard opportunistically (and disingenuously, I believe) leave out: There is evidence (hardly surprising) that more choice leads to greater satisfaction with the decisions that are made. And of course this is the case: People have heterogeneous preferences. The availability of a wider range of choices is not necessarily optimal for any given decision-maker, particularly one with already-well-formed preferences. But a wider range of choices is more likely to include the optimal choice for the greatest number of heterogeneous decision-makers selecting from the same set of options. Even if it is true (and it appears not to be) that more choice impairs decision-making, there is a trade-off that advocates like Hoofnagle (not himself a behavioral economist, so I don't necessarily want to tar the discipline with the irresponsible use of its output by outsiders with policy agendas and no expertise in the field) typically ignore. Confronting each individual decision-maker with more choices is a by-product of offering a greater range of choices to accommodate variation across decision-makers. Of course we can offer everyone cars only in black. And some people will be quite happy with the outcome, and delighted also that they have avoided the terrible pain of being forced to decide among a wealth of options that they didn't even want. But many other people, still perhaps benefiting from avoiding the onerous decision-making process, will nevertheless be disappointed that there was no option they really preferred.