The Internet ecosphere relies on data. Information about browsing, purchases and Internet history (among other things) can be very useful for companies that want to reach consumers efficiently. In exchange for giving up some information about their online behavior, consumers enjoy many websites, apps, and other content available on the Internet for free. They also get tailored recommendations when using shopping services like Amazon and eBay, better results when using search engines like Google and Bing, and more relevant advertisements from nearly all websites that rely on ads for revenue.
Things like search, email, cloud services, social networks, blogs, video, and an enormous range of other content aren’t produced and maintained at zero cost. But Internet users can access almost all of them for free because much of the Internet ecosphere is set up as a two-sided market: Advertisers are brought together with consumers, who get to use online services at no direct cost to them, financed by advertising.
Additionally, data from connected devices are now powering whole new industries of innovative smart products for consumers. The data from these devices, as well as consumers’ interactions with mobile and traditional Internet applications, are also powering incredible new data-driven insights that benefit not just companies and consumers, but also society at large with new potential answers for some of society’s most difficult problems.
Despite the manifest benefits of this free flow of data, some critics have reasonable concerns about the possible misuse of data, while others see tracking itself as a violation of an asserted right to privacy.
To the extent that they exist, many privacy harms online are currently dealt with by the marketplace itself, bolstered by the Federal Trade Commission under its Section 5 authority as well as state oversight. But some privacy advocates don’t think the FTC or the marketplace has gone far enough, and have pressured Congress to do more. Unfortunately, most (if not all) of these proposals refuse to recognize the successes of the current regime, misunderstand (or perhaps misconstrue) what is involved in data analysis and tracking, overstate the importance of privacy to the average Internet user, and ignore the trade-offs inherent in expanding data regulation.
The Obama Administration’s recently released proposed privacy bill is firmly rooted in this camp. At its core it perpetuates the fantasy that the few consumers who evidence significant concerns about privacy are the norm, and that they irrationally fail to demand it in the marketplace — to such an extent and with such damage to themselves that government must step in (more so than it already does).
But the sorts of alleged problems most directly targeted by the proposed bill simply aren’t substantial problems — or even “problems” at all. Data used by researchers, advertisers and other online entities is already mostly anonymous, and risks of “re-identification” of anonymized data are systematically overstated. In fact, advertisers (to say nothing of health-care and social-science researchers) care less about individual identities than they do consumption patterns and aggregated, broad-based profiles.
Meanwhile the benefits of data analysis are systematically under-appreciated — particularly online, where most consumers likely benefit far more from the current opt-out regime for data tracking than they would from the dramatically expanded control regime outlined in the White House’s proposed bill.
In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….
The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.
Absence of economic or cost-benefit analysis
First, and most fundamentally, the Administration’s proposed bill lacks any meaningful cost-benefit analysis, focusing myopically on the alleged costs of data collection and use without considering the business benefits. Even this framing is overly generous to the bill because the alleged “costs” of big data analytics are in reality benefits to both businesses and consumers. The findings section of the proposal obliquely references these benefits by saying the rules are aimed at
supporting flexibility and the free flow of information, [and] will promote continued innovation and economic growth in the networked economy.
But nowhere do the proposed rules connect even these benefits to consumers.
The lack of a rigorous cost-benefit analysis has become all too common, even at the FTC, the agency that would be charged with enforcing the proposed rules. FTC Commissioner Josh Wright’s dissent in the Commission’s Section 5 “unfairness” action against Apple emphasized this lack of cost-benefit analysis:
The harm from Apple’s disclosure policy is limited to users that actually make unauthorized purchases. However, the potential benefits from Apple’s disclosure choices are available to the entire set of iDevice users because these are the consumers capable of purchasing apps and making in-app purchases. The disparity in the relative magnitudes of these universes of potential harms and benefits suggests, at a minimum, that further analysis is required before the Commission can conclude that it has satisfied its burden of demonstrating that any consumer injury arising from Apple’s allegedly unfair acts or practices exceeds the countervailing benefits to consumers and competition.
Similarly, the proposed bill fails to compare the magnitude of supposed harm befalling a small cadre of privacy-sensitive consumers (who have not otherwise protected themselves by use of marketplace tools like track-blockers or by use of opt-out options provided by major ad networks and data brokers), to the benefits received by the majority who are less privacy-sensitive.
Failure to consider consumer benefits
One of the hallmarks of the Internet ecosphere has been the diversity of business models designed to enable users to obtain information and services for free once they purchase access from an ISP. This access will likely diminish if content providers are less able to rely on data analytics to help finance and improve their products.
Similarly, because the proposed bill ignores business reality in its largely opt-in approach to privacy (as discussed below), it is insensitive to the deterrent effect on innovation and experimentation. Moreover, the proposed bill does not require the FTC to conduct any such weighing of benefits against harms in implementing the proposed rules.
If companies must seek affirmative consent from users for every new service or for every new use of data that the FTC might deem “unreasonable in light of context” (which is vaguely defined in the proposed bill and, if current practice is any guide, will remain largely undefined by the FTC), the experimentation with new business models (and new uses of data) that lies at the heart of today’s Internet will be imperiled. Denying these benefits — essentially, curtailing the ongoing evolution of online products and, now, connected devices — to consumers would cost them dearly. And yet nothing in the proposed language suggests any meaningful recognition that such lost consumer benefits should be accounted for in assessing the propriety of data-use practices.
It’s possible that the privacy-sensitive among us might be willing to pay for ad-free (and other non-tracking) versions of today’s apps online, and/or bear the cost of finding and using ad- and cookie-blockers. But most people prefer to access apps and content for free, and don’t care much about privacy so long as the personal data they provide is secure and they get something of value in return.
But through its definitions of “personal data” and “de-identified data,” the proposed legislation would likely raise the price (or lower the amount) of content available — typically for free — in the online marketplace. In addition, innovation in the nascent Internet of Things space surely would be stifled, as the proposed bill’s personal data restrictions apply to devices as well. Persistent identifiers like IP addresses or device numbers, or any other ID that is connected to a device — even if not to the identity of an actual human being — count as personal data.
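To see why that definition sweeps so broadly, consider a minimal sketch (in Python, purely illustrative — the identifiers and the hashing scheme are our assumptions, not drawn from the bill or any real ad-tech system). Even a hashed device identifier that reveals nothing about a person’s identity still behaves as a persistent identifier, because it deterministically links the same device across sessions — and so it would count as “personal data” under the proposed rules:

```python
import hashlib

def pseudonymous_id(device_identifier: str) -> str:
    """Derive a pseudonymous ID by hashing a device identifier.

    Hypothetical example: the hash reveals nothing about the user's
    real-world identity, but it is deterministic -- the same device
    always maps to the same ID, so it remains a *persistent*
    identifier of the kind the proposed bill treats as personal data.
    """
    return hashlib.sha256(device_identifier.encode()).hexdigest()[:16]

# The same device yields the same ID across sessions...
assert pseudonymous_id("192.0.2.44") == pseudonymous_id("192.0.2.44")

# ...while different devices yield different IDs, so behavior can be
# tracked per-device without ever knowing who the user is.
assert pseudonymous_id("192.0.2.44") != pseudonymous_id("198.51.100.7")
```

The point of the sketch is simply that pseudonymization does not break linkability — which is exactly why the bill’s definition captures device-level identifiers even when no human identity is attached.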
In a world without transaction costs, it wouldn’t matter if we chose an opt-out or opt-in regime for online advertising: In either situation, the bargain struck between advertisers, content providers and users would result in the “right” level of sharing and using of behavioral data. But, in reality, there are transaction costs.
For example, consumers will face more pervasive notice screens that degrade their experience. Even more significantly, consumers who fail to realize that they must now “opt in” to data use would find themselves excluded from the benefits of personalization and free content. Changing the default to opt-in (or its equivalent via heightened control and transparency requirements) will have real costs for the vast majority of consumers who are less privacy-sensitive than the hypothetical consumer conjured by the proposed bill.
Without any economic analysis to determine whether the harm to consumers from such a change outweighs the benefit to those it helps, it makes no sense to tout the legislation as unambiguously pro-consumer. And if it is true (as the weight of evidence strongly suggests) that most consumers are not as privacy-sensitive as they are hungry for data-enabled access to Internet offerings, the legislation can only be harmful on net.
Inconsistency with business realities
Until now, the default assumption of privacy protection enshrined in law has been that most restrictions should be on the use of information, rather than its collection. In part this stems from the ubiquity of online tracking, the high costs of opt-in and the many benefits that flow from the vast majority of data uses.
Most current law has been crafted to deal directly with the few specific harms that could arise. But the White House’s new proposed rules may shift that balance by restricting the unauthorized collection of data regardless of use (with a few trivial exceptions), thereby prohibiting beneficial as well as detrimental uses. And one thing the bill will clearly do is deter some beneficial uses by increasing the costs of data use across the board.
Further, in completely ignoring algorithms and innovative combinations of data, the bill disregards critical business realities. It has never been the mere collection of data that mattered, nor even the simple agglomeration of lots of data; it’s always been the way data collections are put together and analyzed that has yielded valuable insights. But the focus of the proposed White House bill remains steadfastly on consent for the collection and use of data writ large, without nuanced consideration of the way the market actually employs data.
In other words, the bill fails to recognize the world as it is, and instead brings a blunt “solution” to bear on a complex and nuanced market — all in the name of reducing what it sees as privacy harms, even where they may not exist.
Among other things, the bill relies heavily on regulation through Privacy Review Boards (PRBs) — or, as we like to call them, “innovation death panels.” These PRBs would operate under authority of the FTC and would be subject to the bill’s prescriptions regarding the FTC process for granting PRB approval (and ongoing authorization). The bill provides that sign-off by these boards, once they are given the FTC’s imprimatur, will permit a company to avoid regulation under the bill’s “heightened” standards for data privacy practices that are “not reasonable in light of context.”
There are several problems with the way the proposed bill handles these rules, but we want to point out just the most salient here: While multi-stakeholder processes could be a good way to build bottom-up law on privacy, the bill’s proposed approach effectively ensures that the PRBs approved by the FTC will operate with review standards that squelch innovation.
The proposed bill requires the FTC to consider a lengthy set of factors in determining whether a PRB is good enough, including:
- the range of evaluation processes suitable for the privacy risks posed by various types of personal data;
- the costs and benefits of levels of independence and expertise [of the PRB];
- the importance of mitigating privacy risks;
- the importance of expedient determinations; and
- whether differing requirements are appropriate for Boards that are internal or external to covered entities.
While these parameters may ensure that the approved PRBs demonstrate a strong regard for protecting privacy, only two of the enumerated factors even arguably direct the FTC to consider the cost to businesses or consumers:
- the range of evaluation processes suitable for covered entities of various sizes, experiences, and resources; and
- the costs and benefits of levels of transparency and confidentiality.
In other words, the bill’s short-sighted focus on protecting privacy requires the FTC to condition PRB approval on how well the PRBs take account of alleged privacy concerns, not on how well the PRBs tailor their reviews to relevant businesses and markets — and without regard to whether they engender efficient or appropriate privacy practices.
True, there is some marginal concern for cost-benefit tradeoffs built into the proposed legislation — but even what little there is would almost certainly have limited effectiveness.
One section of the proposed bill, Section 103(c), does seem to encourage PRBs to use cost-benefit analysis and perhaps even to forbear from applying heightened transparency and control requirements to certain uses of data:
[A] covered entity [need not] provide heightened transparency and individual control when [it] analyzes personal data in a manner that is not reasonable in light of context if such analysis is supervised by a [PRB] approved by the [FTC] and… [t]he [PRB] determines that the likely benefits of the analysis outweigh the likely privacy risks.
But the proposed bill’s primary opt-in requirement is triggered regardless of PRB review whenever a covered entity offers a different service or employs new modes of data analysis. Under this provision, such changes obligate the company to
provide individuals with compensating controls designed to mitigate privacy risks that may arise from the material changes, which may include seeking express affirmative consent from individuals.
Meanwhile, of course, data analysis that is “unreasonable in light of context” must be undertaken under direct supervision of a PRB that is beholden to the FTC and the proposed bill’s stilted criteria for FTC approval.
In short, the cost-benefit provision is deeply flawed, and the proposed language doesn’t seem likely to allow PRBs to approve any conduct that would deviate from the bill’s prescriptions for enhanced consumer control (as interpreted by the FTC).
There is a clear difference between data brokers, major advertising networks, major content providers and your cousin’s blog. And the evolution of any of these with respect to data analysis and use may confer great and unexpected benefits — and do so in widely divergent ways. And yet it is not clear that any of the limited business-related or cost-benefit provisions in the proposed bill actually direct the FTC to consider the characteristics that really affect business uses — and consumer benefits — in enforcing the bill or in enacting rules under it.
Unintended — and lamentable — consequences
Ironically, the White House bill may actually reduce privacy. Insofar as online businesses do not currently link “real” identifying information with more-anonymous device and IP numbers, the bill’s rules appear to require companies to do so in order that customers will have the access and accuracy rights that the bill creates. Further, the databases built to hold such linked information may become the proverbial “honey pot” for identity thieves, increasing data security risks as a result.
And, as noted above, the proposed bill would also harm innovation. The proposed rules subject new uses of personal data and new business models to enhanced consumer control, up to and including mandatory opt-in. In some cases the rules would further subject them to supervision and approval by a PRB (or else the threat of FTC enforcement) — even if such uses would actually or presumptively benefit consumers. This can only deter innovation, both by chilling it in the first place and by forcing innovations to fit the PRBs’ prescriptive mold. Meanwhile, of course, the proposed bill will lead to any number of regulatory-driven innovations that do less to serve the desires of consumers than those of bureaucrats.
The biggest harm to innovation will arise not from the “seen” problems (like erroneous rejection of consumer-benefitting uses of data), but rather from the unseen. Perhaps it will be easy enough for consumers to deal with fewer free apps and content, but the real cost to society will be the apps and content that never come into existence because the bill’s provisions deter their creation in the first place.
So much for the permissionless innovation supposedly at the heart of the net neutrality debate into which the White House interjected itself.
The Administration saw fit to promote rules constraining ISPs in order to ensure that tried-and-true content-provider business models didn’t face impediments from ISPs — but may now force content providers to devise new ways to fund themselves, substantially transforming how the Internet works.
Bastiat could have been talking about this very bill when he said:
There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen… Yet this difference is tremendous; for it almost always happens that when the immediate consequence is favorable, the later consequences are disastrous, and vice versa. Whence it follows that the bad economist pursues a small present good that will be followed by a great evil to come, while the good economist pursues a great good to come, at the risk of a small present evil.
In short, in a (misguided) attempt to increase privacy in the short run, the White House’s proposed privacy bill ignores the costs to innovation and consumer welfare down the road. And it does so without ever effectively weighing the relative economic costs and benefits of either, or demanding the same from the bill’s enforcers. The bill is simply not a responsible approach to lawmaking.
Poorly-conceived privacy laws are worse than none at all, as they lull the public into complacency.
I haven’t noticed the White House proposing any restriction on the surveillance and monitoring of minors, specifically K-12 students enduring the morass of testing associated with Common Core. Politico has comprehensive coverage here: http://www.politico.com/story/2015/03/cyber-snoops-track-students-116276.html. Legislation (FERPA? SHERPA?) that is supposed to protect children from government data mining from cradle to adulthood (and from aggressive sharing with crony-selected private enterprises) exists, but it doesn’t seem very effective.
This is what we should be concerned about.
Of course, there is a clear conflict of interest that makes regulatory or other reform unlikely under the current administration: Common Core and marathon testing were pushed through under Obama and his Secretary of Education appointee, Arne Duncan.