Privacy and Tracking

Paul H. Rubin —  12 March 2011

First I would like to thank Geoff Manne for inviting me to join this blog.  I know most of my fellow bloggers and it is a group I am proud to be associated with.

For my first few posts I am going to write about privacy.  This is a hot topic.  Senators McCain and Kerry are floating a privacy bill, and the FTC is also looking at privacy. I have written a lot about privacy (mostly with Tom Lenard of the Technology Policy Institute, where I am a senior fellow).

The issue of the day is “tracking.”  There are several proposals for “do not track” legislation and polls show that consumers do not want to be tracked.

The entire fear of being tracked is based on an illusion.  It is a deep illusion, and difficult or impossible to eliminate, but still an illusion.   People are uncomfortable with the idea that someone knows what they are doing.  (It is “creepy.”)  But in fact no person knows what you are doing, even if you are being tracked. Only a machine knows.

As humans, we have difficulty understanding that something can be “known” but nonetheless not known by anyone.  We do not understand that we can be “tracked” but that no one is tracking us.  That is, data on our searches may exist on a server somewhere so that the server “knows” it, but no human knows it.  We don’t intuitively grasp this concept because it is entirely alien to our evolved intelligence.

In my most recent paper (with Michael Hammock, coming out in Competition Policy International) we cite two books by Clifford Nass (C. Nass & C. Yen, The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships (2010), and B. Reeves & C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (1996, 2002)).  Nass and his coauthors show that people automatically treat intelligent machines like other people.  For example, if asked to fill out a questionnaire about the quality of a computer, they rate the machine higher when they fill out the form on the computer being rated than when they fill it out on another computer — they don’t want to hurt the computer’s feelings.  Privacy is like that — people can’t adapt to the notion that a machine knows something.  They assume (probably unconsciously) that if something is known then a person knows it, and this is why they do not like being tracked.

One final point about tracking.  Even if you are tracked, the purpose is to find out what you want and sell it to you.  Selling people things they want is the essence of the market economy, and if tracking does a better job of this, then it is helping the market function better, and also helping consumers get products that are a better fit.  Why should this make anyone mad?

Paul H. Rubin


PAUL H. RUBIN is Samuel Candler Dobbs Professor of Economics at Emory University in Atlanta and formerly editor in chief of Managerial and Decision Economics. He blogs at Truth on the Market. He was President of the Southern Economic Association in 2013. He is a Fellow of the Public Choice Society and is associated with the Technology Policy Institute, the American Enterprise Institute, and the Independent Institute. Dr. Rubin has been a Senior Economist at President Reagan's Council of Economic Advisers, Chief Economist at the U.S. Consumer Product Safety Commission, Director of Advertising Economics at the Federal Trade Commission, and vice-president of Glassman-Oliver Economic Consultants, Inc., a litigation consulting firm in Washington. He has taught economics at the University of Georgia, City University of New York, VPI, and George Washington University Law School. Dr. Rubin has written or edited eleven books and published over two hundred and fifty articles and chapters on economics, law, regulation, and evolution in journals including the American Economic Review, Journal of Political Economy, Quarterly Journal of Economics, Journal of Legal Studies, and the Journal of Law and Economics, and he frequently contributes to the Wall Street Journal and other leading newspapers. His work has been cited in the professional literature over 8,000 times. His books include Managing Business Transactions (Free Press, 1990); Tort Reform by Contract (AEI, 1993); Privacy and the Commercial Use of Personal Information (Kluwer, 2001, with Thomas Lenard); Darwinian Politics: The Evolutionary Origin of Freedom (Rutgers University Press, 2002); and Economics, Law and Individual Rights (Routledge, 2008, edited with Hugo Mialon). He has consulted widely on litigation-related matters and has been an adviser to the Congressional Budget Office on tort reform. He has addressed numerous business, professional, policy, government, and academic audiences. Dr. Rubin received his B.A. from the University of Cincinnati in 1963 and his Ph.D. from Purdue University in 1970.

14 responses to Privacy and Tracking

  1. 

    I think we agree: There are many unknowns. The point of my second post was that we should not undertake a major legislative or regulatory interference in this market until we do know the values of some of these variables. There is no evidence that such interference would pay, and I think we should have such evidence before intervening.

  2. 

    Steve:
    The issue is cost-benefit analysis. For this analysis, of course probabilities are relevant. So “usually” counts. A preference based on anthropomorphizing machines should count, but what really counts is behavior, not expressed opinions. For example, people say life is “priceless” but in cost-benefit analysis we use market prices for risk based on observed behavior. We observe that people in practice trade privacy for small benefits (as you point out), so that is the price that should be used. The fact that their expressed opinions are probably based on erroneous assumptions is only a stronger argument for using market prices.
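    To put that revealed-preference logic in symbols (a minimal sketch; the quantities are illustrative placeholders, not estimates from any study): if a consumer accepts being tracked in exchange for a discount worth $D$, her valuation $V$ of not being tracked is revealed to satisfy

    $$ V \le D. $$

    A do-not-track mandate covering $N$ such consumers at total cost $C$ (implementation plus foregone benefits of targeting) passes a cost-benefit test only if

    $$ \sum_{i=1}^{N} V_i \ge C, $$

    which cannot hold whenever $N \cdot D < C$. That is the sense in which observed behavior, rather than expressed opinion, supplies the price.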

    • 

      That’s what I’d like to know. What is the evidence on consumer behavior with respect to this type of privacy protection? It looks like search engines are competing on the basis of privacy protection. And, I read that Microsoft is working on a version of IE that allows users to opt out of tracking. The FTC brought a case against a firm that apparently falsely advertised that it would not track. My casual empiricism is that almost all consumers are willing to use supermarket and drug store ID cards to obtain discounts. But, my wife pointed out that Whole Foods does not have such ID cards. Maybe WF is responding to customer preferences?

      What does the evidence show?

      By the way, I agree probabilities matter in cost/benefit, as do magnitudes. For example, nuclear reactors usually don’t melt down.

  3. 

    Paul —

    On the first point, “usually” is not “always” or “never in the future.” So doesn’t that mean concerns may be warranted? For example, consider the FTC’s recent case involving a deceptive “do not track” promise. Was the FTC right or wrong on that one?

    On the second point, are you saying that a preference based on anthropomorphizing machines should not carry any weight in policy? And, how did the author distinguish the anthropomorphizing effect from a concern that there is a man behind the machine?

    Steve

  4. 

    I think everyone here misses the point.

    In the era of computers, a human (or automatic service) can activate a sophisticated program which will then go into the database of “no interest to any human being,” decide you are a risk to “someone,” and issue an automatically signed warrant . . .

    Am I the only one here who reads science fiction?

  5. 

    As to the first point, I think the information is generally used with most personal information stripped out. An employee of Google could probably find out what Steve Salop is doing online, but the information is generally not sorted that way.

    As to the second: That is where I started. The two books by Nass document lots of cases where people do anthropomorphize machines. My initial point was exactly that people fear tracking because they don’t realize that it is being done by a machine and because they do anthropomorphize machines.

  6. 

    I think you missed my point. I was making the practical observation that there is usually a man behind the machine. It’s very difficult to guarantee that a human will not access the information. And, it’s difficult to prevent a human from programming the machine. That was also my point about the TSA full body scans. Therefore, I would think that most people would be pretty upset if they received the “Dear Professor Rubin” letter, even if there wasn’t an extortion threat attached.

    For that reason alone, my guess is that many people would not like the idea that even machines “allegedly” not overseen by humans are monitoring them.

    However, at the academic/theoretical level, you raise an interesting question. I haven’t read the material you referenced, but I have a question and an observation.

    First, companies collect all sorts of information on prescription purchases and usage by individual consumers. My understanding is that under HIPAA, they strip out all the identifying information so that it cannot be reverse engineered. So this would be a good example of monitoring solely by machine. Is this the case for internet tracking information too? Are you proposing that a HIPAA-type law be applied to internet tracking information?

    Second, I would think that many people would anthropomorphize the machines. For example, the elderly get comfort from robot animal pets. And, of course, children love their dolls. Remember the craze (whose name I can’t recall) where kids had virtual babies that needed to be fed. (That hasn’t disappeared. For example, see http://www.cyberinfants.com/) So, maybe people would be upset if they were ogled by a virtual peeping tom or their purchases were being monitored by a virtual policeman or priest. Interesting experiment. Has it been done?

  7. 

    I think extortion is illegal, whether by man or by machine.

  8. 

    Is this what you mean??

    Dear Professor Rubin,
    I am HAL, a computer in the network of Information Worldwide. As you may know, IW tracks a variety of information available on the Internet, including on-street video cameras owned by the city in which you live. In the last week, our various tracking machines have gleaned the following information about you. You frequent several websites that specialize in gay pornography and recently purchased a type of pipe that is commonly used for smoking crack. You also were seen several times with a female graduate student in one of your classes entering a hotel near campus in the early afternoon. Our machines have surmised from tracking your and your wife’s posts to marriage advice and legal information sites that you may have a troubled marriage and your wife may be contemplating divorce. IW is in the business of selling such information on an exclusive basis. IW is willing to sell you the information about the gay porn site and the grad student for $10,000. If you do not purchase exclusive access to this information, IW will offer it to your wife and your dean. Please let us know. Finally, we want to reassure you that no humans at IW have seen this information. Our system is totally automatic.

  9. 

    Steve:
    I agree that people’s preferences should be honored, even if we don’t understand them. But I wonder if people would have the same preferences if they knew and fully understood that they were being tracked by a machine, not a person.

  10. 

    Paul – Your framework sets out a set of very interesting issues. I’d like you to spell out your analysis in more detail. In particular, what are the answers to the following questions?

    1. Why is it unreasonable for a person to think that if a machine has the information, then a human being could access the information on the machine and use it? For example, might a person reasonably fear that his/her surfing habits might be somehow publicized? Might the person fear that the tracking firm might accidentally (even if not negligently) publicize the information or that a malevolent hacker might obtain it? After all, social security numbers and other identity information have gotten out in the past.

    2. Why is it unreasonable for a person to think that the human might use the information for reasons other than simply selling products? Or, might a person even reasonably fear that a malevolent government official might misuse the information in some way?

    3. With respect to the use of the tracking information to formulate targeted advertising, might a reasonable (but boundedly rational) person fear that the ads could be cleverly formulated to “manipulate” the person’s bounded rationality (or bounded willpower or bounded self-interest) to buy something that is not in the person’s rational self-interest to acquire?

    • 

      Steve

      Good questions.
      1. There is, to my knowledge, no evidence (even anecdotal) of commercial information being misused by the firms that lawfully collect it. That is, your hypotheticals are just that. The only real danger is the theft of information to be used for identity theft. But in fact, identity theft is more difficult if sellers have access to information that can be used for verification of identity. The other point is that those who are afraid of tracking do not make any further points; they think that being tracked is itself harmful. I think this indicates the misunderstanding that I identify.

      2. What would a seller do besides try to sell? Again, I am aware of no evidence of a commercial entity misusing this information. The government might, but government entities themselves seem to provide fewer safeguards for information than do commercial sellers.

      3. I think your third point proves too much. One could make the same point about any advertising, but I don’t think you would want to ban all advertising. Moreover, if sellers can better target ads to individual characteristics, they might have less need to manipulate — but I haven’t thought about that point.

      Thanks for your comments.

      • 

        1. I agree that some people (or at least some “advocates”) appear to have a desire not to be tracked, even aside from the commercial implications. As economists, we may find it hard to capture such behavior in our standard model of rational decision-makers. But, it is nonetheless real. Many people close their blinds even though they would not be able to see Peeping Toms or even know if any are watching. There also has been popular dismay at the TSA scanners that show body images. So, why do you find it so incomprehensible that people would not want their intimate web surfing and online purchasing behavior to be tracked?

        Having said that, I do recognize that most people are willing to use their supermarket and drug store scanner cards for a small price discount. So the willingness to pay for privacy apparently is not very high. I would predict, however, that behavior would change somewhat after a tracking scandal occurs.

        2. With respect to misuse of tracking information, I fear that you may be suffering from a failure of imagination. As an economist, I would predict that the benefits of obtaining the information will be sufficiently high, at least in some circumstances, to lead to misuse of the information. Moreover, the “hackers” might not have purely “commercial interests” in mind. For example, imagine how much an opponent of Clarence Thomas would have been willing to pay to gain access to his web surfing or video rental history. In our polarized political atmosphere, similar efforts could be applied to thousands of political figures. Wouldn’t someone want to know whether Obama surfed on anti-colonialist sites or what Representative Issa looks at?

        But, commercial interests also might want to have such information. The CPCs on search engines for keywords involving expensive jewelry and wealth planning are quite high. Knowing someone’s purchase history would make those keywords even more valuable. And, scammers might be willing to pay the most for this information. If you were a scammer, wouldn’t you be willing to pay a lot to know the names of the people who responded to the Nigerian financial spams, or the people who are enamored with gold as a store of value?

        3. With respect to advertising, no, I would not want to ban all advertising. But, I might want to ban advertising that is “overly effective” at manipulating me. For example, there are those experiments that show that people become more trusting when they are given oxytocin. I don’t think that I would want to allow advertisers to administer oxytocin with their ads.
