Truth on the Market

Facile claims of behavioral economics: too much choice; not enough privacy

Chris Hoofnagle, writing at the TAP blog about Facebook’s comprehensive privacy options (“To opt out of full disclosure of most information, it is necessary to click through more than 50 privacy buttons, which then require choosing among a total of more than 170 options.”), claims that:

This approach is brilliant. The company can appease regulators with this approach (e.g. Facebook’s Elliot Schrage is quoted as saying, “We have tried to offer the most comprehensive and detailed controls and comprehensive and detailed information about them.”), and at the same time appear to be giving consumers the maximum number of options.

But this approach is manipulative and is based upon a well-known problem in behavioral economics known as the “paradox of choice.”

Too much choice can make decisions more difficult, and once made, those choices tend to be regretted.

But most importantly, too much choice causes paralysis. This is the genius of the Facebook approach: give consumers too much choice, and they will 1) take poor choices, thereby increasing revelation of personal information and higher ROI, or 2) take no choice, with the same result. In any case, the fault is the consumer’s, because they were given a choice!

Of all the policy claims made on behalf of behavioral economics, the one that says there is value in suppressing available choices is one of the most pernicious–and absurd.  First, the problem may be “well-known,” but it is not, in fact, well-established.  Citing to one (famous) study purporting to find that decisions are made more difficult when decision-makers are confronted with a wider range of choices is not compelling when the full range of studies demonstrates a “mean effect size of virtually zero.”  In other words, on average, more choice has no discernible effect on decision-making.
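
The phrase “mean effect size of virtually zero” refers to pooling results across many studies, as a meta-analysis does. For readers unfamiliar with the mechanics, here is a minimal sketch of a fixed-effect, inverse-variance-weighted pooled mean; the study numbers are invented purely for illustration and are not taken from the underlying meta-analysis:

```python
# Minimal sketch of how a meta-analytic "mean effect size" is computed
# (fixed-effect model, inverse-variance weighting). All numbers below are
# invented for illustration only.

def pooled_effect(effects, variances):
    """Inverse-variance-weighted mean of per-study effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Hypothetical studies: some find "choice overload" (d > 0), some find the
# opposite (d < 0); pooled together, they can wash out to roughly zero.
effects = [0.45, -0.30, 0.10, -0.25, 0.05]    # Cohen's d per study
variances = [0.02, 0.03, 0.01, 0.02, 0.015]   # sampling variance per study

print(f"pooled mean effect size: {pooled_effect(effects, variances):+.3f}")
# -> pooled mean effect size: +0.044  (i.e., "virtually zero")
```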

But there is more–and it is what proponents of this canard opportunistically (and disingenuously, I believe) leave out:  There is evidence (hardly surprising) that more choice leads to greater satisfaction with the decisions that are made.  And of course this is the case:  People have heterogeneous preferences.  The availability of a wider range of choices is not necessarily optimal for any given decision-maker, particularly one with already-well-formed preferences.  But a wider range of choices is more likely to include the optimal choice for the greatest number of heterogeneous decision-makers selecting from the same set of options.  Even if it is true (and it appears not to be true) that more choice impairs decision-making, there is a trade-off that advocates like Hoofnagle (not himself a behavioral economist, so I don’t necessarily want to tar the discipline with the irresponsible use of its output by outsiders with policy agendas and no expertise in the field) typically ignore.  Confronting each individual decision-maker with more choices is a by-product of offering a greater range of choices to accommodate variation across decision-makers.  Of course we can offer everyone cars only in black.  And some people will be quite happy with the outcome, and delighted also that they have avoided the terrible pain of being forced to decide among a wealth of options that they didn’t even want.  But many other people, still perhaps benefiting from avoiding the onerous decision-making process, will nevertheless be disappointed that there was no option they really preferred.
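
A toy simulation (my own illustration, not anything from Hoofnagle’s post or the choice literature) makes the trade-off concrete: when users’ ideal points differ, bigger menus are more likely to contain an option close to any given user’s ideal.

```python
# Toy simulation: heterogeneous users pick the closest option from randomly
# drawn menus of different sizes. Larger menus are more likely to contain
# something near any given user's ideal point. Illustration only.
import random

random.seed(0)

def mean_satisfaction(menu_size, n_users=10_000):
    """Average closeness (1 - distance) between each user's ideal point on
    [0, 1] and the nearest option on a random menu of `menu_size` options."""
    total = 0.0
    for _ in range(n_users):
        ideal = random.random()
        menu = [random.random() for _ in range(menu_size)]
        nearest = min(menu, key=lambda opt: abs(opt - ideal))
        total += 1.0 - abs(nearest - ideal)  # closer option -> happier user
    return total / n_users

for size in (1, 2, 5, 20, 170):
    print(f"menu of {size:>3} options: mean satisfaction {mean_satisfaction(size):.3f}")
```

Satisfaction rises with menu size here because every added option can only bring someone’s nearest match closer; whatever per-decision cognitive cost exists has to be weighed against that gain.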

The trade-off is just as relevant to Facebook’s privacy settings as it is to cars.  Hard as it may be for self-appointed consumer advocates to believe, not everybody wants the same, maximal level of privacy on Facebook that these information teetotalers think they should prefer.  Every limitation on the disclosure or use of information on Facebook entails a trade-off.  With hundreds of millions of users, it is a sure bet that the range of privacy preferences held by Facebook’s users runs the gamut from “none” to “all.”  No doubt, because making choices requires information and information is not free (this is, by the way, neither a behavioral insight nor an indication of human irrationality, although it is one that almost certainly explains a great deal of the behavioralists’ claimed irrationalities, including this one), the rational decision-maker will sometimes “choose not to decide.”  The claim that this torpor stems from the psychological effects of being faced with a wide range of allegedly onerous options and thus (without consideration of the costs) that the range should be narrowed or simplified is unsupported and ill-conceived.
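
A back-of-the-envelope check shows why “choosing not to decide” can be perfectly rational; every number below is hypothetical except the 170-option figure quoted above:

```python
# Rational ignorance, back of the envelope. Ignoring the settings is the
# rational move whenever the time cost of configuring them exceeds the
# expected benefit. Every number except the option count is hypothetical.
settings = 170               # options to evaluate (per the quote above)
seconds_each = 20            # assumed time to understand one option
wage_per_second = 15 / 3600  # assumed value of time: $15/hour
expected_benefit = 2.00      # assumed dollar value of better-tuned settings

cost = settings * seconds_each * wage_per_second
verdict = "rational to skip" if cost > expected_benefit else "worth configuring"
print(f"cost ${cost:.2f} vs. benefit ${expected_benefit:.2f}: {verdict}")
# -> cost $14.17 vs. benefit $2.00: rational to skip
```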

And of course the claim follows the precise contours of the libertarian paternalism debate recently discussed here and at the Cato Unbound site:  People are induced by irrational, psychological quirks to make bad decisions or no decision at all.  The default should be structured to minimize the cost of these failings (according to the regulator, of course), but we can structure our regulations to permit those who would opt out to do so, taking advantage, in this case, of the more complex options that suit their preferences.

Except actual opt-outs are not so simple.

As Jon Klick noted, when the Consumer Financial Protection Agency was being developed, impelled forward by the intellectual engine of behavioral economics, it explicitly did not include opt-outs for fear that people would, well, opt out.  In the case of online privacy, I have no doubt that this would happen.  It has been proposed, for example, that mergers be challenged because the merged company would have access to “too much” information.  This is not a restrained form of regulation that permits companies and their customers to contract out of a “light-touch” regulatory overlay; it is outright prohibition of even the possibility of disfavored privacy practices.

Meanwhile, as Josh pointed out, regulating the default can affect the remaining available options, rendering opt-out a mere theoretical curiosity.  In this case, does anyone think that Facebook will maintain support for (or its advertisers pay for access to) a dizzying array of privacy options that few people will ever use (and that will be of little value to whoever might mine the resulting paucity of information)?  The default option becomes the only option–and it is difficult to escape the sense that this is what was intended in the first place.

Hoofnagle’s specific recommendations here seem innocuous enough:

How could Facebook improve this situation? The paradox of choice would suggest that a simple slider bar that controlled a wide array of individual settings, like Internet Explorer’s privacy settings, would be an improvement. But even better than that would be a “preview” mode; a feature that allowed one to see what their profile actually looks like to friends, friends of friends, the internet, and advertisers.

Perhaps somewhere Hoofnagle has support for his claims about the value of these proposals, but I doubt it.  It seems to me that one predictable consequence of the proposed “slider” option–a much more blunt form of decision-making–could be a systematic shift away from privacy protection, as users decide that the cost of losing control over information x is worth it in order to ensure that information y, now bundled in the slider with information x, is widely available.  No doubt Hoofnagle and his friends will be there looking over Facebook’s proverbial shoulder to see if it is gaming the system or forcing its preferred outcomes (heaven forbid a private company engaged in voluntary transactions with its patrons should be permitted to choose its contract terms) through the design of the proposed slider.  And when the outcome is not the desired one, who can doubt that limited, restrained regulation will turn into the more heavy-handed variety?
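
To make the bundling concern concrete, here is a minimal toy model (my own, not a description of any actual or proposed Facebook interface): two independent settings are collapsed into one slider, and users whose preferences differ across the two settings must compromise. If ties resolve toward disclosure, bundling shifts the aggregate outcome away from privacy.

```python
# Toy model: a single "slider" position bundles two independent privacy
# settings. Users wanting one private and one public must compromise;
# with ties broken toward disclosure, bundling reveals more overall.
# Illustration only; not any real interface.
from itertools import product

# All preference profiles over (x, y): 1 = disclose, 0 = keep private.
profiles = list(product([0, 1], repeat=2))

# Granular controls: everyone gets exactly what they want.
granular_disclosed = sum(x + y for x, y in profiles)

# Slider: a single setting s applies to both x and y. Each user picks the
# s closest to their wishes, breaking ties toward disclosure (s = 1).
slider_disclosed = 0
for x, y in profiles:
    best = max([0, 1], key=lambda s: (-(abs(s - x) + abs(s - y)), s))
    slider_disclosed += 2 * best

print("items disclosed with granular controls:", granular_disclosed)  # 4 of 8
print("items disclosed with bundled slider:  ", slider_disclosed)     # 6 of 8
```

Flip the tie-break toward privacy and the skew reverses, which is precisely why the design of the slider, and who controls it, matters.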

Meanwhile, the value of the proposed “preview” mode is independent of the “choice architecture,” and anyway it largely already exists (you can preview how your profile looks to any of your friends from within the “privacy settings” page).  I’m not sure why it’s supposed to help, though, especially if the problem is that most people don’t care enough to adjust their settings (or presumably to check out all the previews) anyway.

Discussions surrounding the regulation of online privacy are woefully bad.  There is seemingly no effort to inject rigor, definition, clarity and thoughtfulness into the debate, which instead starts from a non-rebuttable presumption of consumer irrationality, corporate venality, and market failure (and, of course, regulatory omniscience), and only goes downhill from there.  I’m glad Facebook may have enlisted the services of someone as thoughtful as Tim Muris–maybe he can shape things up.
