Archives For privacy

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild West, as some have suggested? Let’s break it down. Here are the Good, the Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Council’s Justice Ministers and to be taken up in trilogue negotiations with the Parliament and the Commission this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will impose considerable burdens on companies built on providing information to the world. There are real costs to administering such a rule, and those costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers, who will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only with economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary of operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually be ratifying a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials now aim to reduce this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright in explaining why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to affect these important US companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.


Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to take greater consideration of privacy concerns during merger review and encouraged even to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.



The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.
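Both measurement problems can be made concrete with a deliberately stylized sketch. Every number below (prices, quality dimensions, consumer weights) is hypothetical, invented purely for illustration: the same product change leaves the nominal price untouched, yet raises the quality-adjusted price for one type of consumer while lowering it for another.

```python
# Stylized illustration -- all numbers are hypothetical.
# Suppose a product change leaves the nominal price unchanged but
# degrades one quality dimension (reliability) while improving
# another (features).
before = {"price": 100.0, "reliability": 1.00, "features": 1.00}
after = {"price": 100.0, "reliability": 0.90, "features": 1.15}

def quality_index(product, weights):
    """Composite quality score; the weights encode how much a given
    consumer values each quality dimension."""
    return sum(w * product[dim] for dim, w in weights.items())

def qa_price(product, weights):
    """Quality-adjusted price: nominal price per unit of composite quality."""
    return product["price"] / quality_index(product, weights)

# Two consumer types with opposite priorities.
reliability_fan = {"reliability": 0.8, "features": 0.2}
feature_fan = {"reliability": 0.2, "features": 0.8}

for label, w in [("reliability-focused", reliability_fan),
                 ("feature-focused", feature_fan)]:
    change = qa_price(after, w) - qa_price(before, w)
    print(f"{label}: quality-adjusted price changed by {change:+.2f}")
```

In this toy example the reliability-focused consumer experiences a quality-adjusted price increase (about +5.26) while the feature-focused consumer experiences a decrease (about −9.09). The same change is a price hike to one group and a price cut to another, which is exactly why netting these effects out in an enforcement action is so imprecise.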


If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on data would serve only those able to pay more while pricing out those who cannot afford its products. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
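The output-expansion logic is easy to see in a toy model. All numbers below are hypothetical, chosen only to illustrate the mechanics: when a single uniform price is most profitable only if it serves high-willingness-to-pay customers, data-enabled segmented pricing can bring a previously unserved segment into the market and raise total consumer surplus.

```python
# Toy model -- all numbers are hypothetical illustrations.
segments = {
    "high_wtp": {"size": 100, "wtp": 10.0},  # willingness to pay per unit
    "low_wtp": {"size": 200, "wtp": 4.0},
}
UNIT_COST = 3.0

def outcome(prices):
    """Total units sold, seller profit, and consumer surplus, given the
    price quoted to each segment."""
    units = profit = surplus = 0.0
    for name, seg in segments.items():
        p = prices[name]
        if p <= seg["wtp"]:  # a segment buys only if price <= its WTP
            units += seg["size"]
            profit += seg["size"] * (p - UNIT_COST)
            surplus += seg["size"] * (seg["wtp"] - p)
    return units, profit, surplus

# Uniform pricing: of the two candidate prices, 10 earns 700 in profit
# and 4 earns only 300, so the seller charges 10 and the low-WTP
# segment goes entirely unserved.
uniform = outcome({"high_wtp": 10.0, "low_wtp": 10.0})

# Data-enabled segmentation: each group is quoted a price just under
# its WTP, so the low-WTP segment is now served at a price it can afford.
segmented = outcome({"high_wtp": 9.0, "low_wtp": 3.5})

print("uniform:   units=%.0f profit=%.0f consumer_surplus=%.0f" % uniform)
print("segmented: units=%.0f profit=%.0f consumer_surplus=%.0f" % segmented)
```

In this sketch, output triples (100 to 300 units) and total consumer surplus rises (0 to 200) once the seller can price to each segment. Real markets are messier, of course, and some consumers may pay more under segmentation; the point is simply that the welfare question turns on comparing magnitudes, not on assuming harm.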


Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.


Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

Last week, the FTC announced its complaint and consent decree with Nomi Technologies for failing to allow consumers to opt out of cell phone tracking while shopping in retail stores. Whatever one thinks about Nomi itself, the FTC’s enforcement action represents another step in the dubious application of its enforcement authority against deceptive statements.

In response, Geoffrey Manne, Ben Sperry, and Berin Szoka have written a new ICLE White Paper, titled In the Matter of Nomi Technologies, Inc.: The Dark Side of the FTC’s Latest Feel-Good Case.

Nomi Technologies offers retailers an innovative way to observe how customers move through their stores, how often they return, what products they browse and for how long (among other things) by tracking the Wi-Fi addresses broadcast by customers’ mobile phones. This allows stores to do what websites do all the time: tweak their configuration, pricing, purchasing and the like in response to real-time analytics — instead of just eyeballing what works. Nomi anonymized the data it collected so that retailers couldn’t track specific individuals. Recognizing that some customers might still object, even to “anonymized” tracking, Nomi allowed anyone to opt out of all Nomi tracking on its website.

The FTC, though, seized upon a promise made within Nomi’s privacy policy to provide an additional, in-store opt-out and argued that Nomi’s failure to make good on this promise — and/or to notify customers of which stores used the technology — made its privacy policy deceptive. Commissioner Wright dissented, noting that the majority failed to consider evidence showing the promise was not material — that is, that the inaccurate statement was not important enough to actually affect consumers’ behavior, because they could opt out on the website anyway. Both Commissioner Wright’s and Commissioner Ohlhausen’s dissents argued that the FTC majority’s enforcement decision in Nomi amounted to prosecutorial overreach, imposing an overly stringent standard of review without any actual indication of consumer harm.

The FTC’s deception authority is supposed to provide the agency with the authority to remedy consumer harms not effectively handled by common law torts and contracts — but it’s not a blank check. The 1983 Deception Policy Statement requires the FTC to demonstrate:

  1. There is a representation, omission or practice that is likely to mislead the consumer;
  2. A consumer’s interpretation of the representation, omission, or practice is considered reasonable under the circumstances; and
  3. The misleading representation, omission, or practice is material (meaning the inaccurate statement was important enough to actually affect consumers’ behavior).

Under the DPS, certain types of claims are treated as presumptively material, although the FTC is always supposed to “consider relevant and competent evidence offered to rebut presumptions of materiality.” The Nomi majority failed to do exactly that in its analysis of the company’s claims, as Commissioner Wright noted in his dissent:

the Commission failed to discharge its commitment to duly consider relevant and competent evidence that squarely rebuts the presumption that Nomi’s failure to implement an additional, retail-level opt out was material to consumers. In other words, the Commission neglects to take into account evidence demonstrating consumers would not “have chosen differently” but for the allegedly deceptive representation.

As we discuss in detail in the white paper, we believe that the Commission committed several additional legal errors in its application of the Deception Policy Statement in Nomi, over and above its failure to adequately weigh exculpatory evidence. Exceeding the legal constraints of the DPS isn’t just a legal problem: in this case, it’s led the FTC to bring an enforcement action that will likely have the very opposite of its intended result, discouraging rather than encouraging further disclosure.

Moreover, as we write in the white paper:

Nomi is the latest in a long string of recent cases in which the FTC has pushed back against both legislative and self-imposed constraints on its discretion. By small increments (unadjudicated consent decrees), but consistently and with apparent purpose, the FTC seems to be reverting to the sweeping conception of its power to police deception and unfairness that led the FTC to a titanic clash with Congress back in 1980.

The Nomi case presents yet another example of the need for FTC process reforms. Those reforms could ensure the FTC focuses on cases that actually make consumers better off. But given the FTC majority’s unwavering dedication to maximizing its discretion, such reforms will likely have to come from Congress.

Find the full white paper here.

Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.

A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.

According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.

In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.

Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.

FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.

In fact, there is general agreement with Commissioner Ohlhausen on her basic points, even among critics like Newman and Grunes. But, as mentioned above, views diverge over whether there are some privacy harms that should nevertheless factor into competition analysis, and on whether there is in fact a data barrier to entry that makes these harms possible.

As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And, because all of the theories of “privacy as an antitrust harm” are essentially predicated on this, they are meritless.

First, data is useful to all industries — this is not some new phenomenon particular to online companies

It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:

  • Macy’s analyzes tens of millions of terabytes of data every day to gain insights from social media and store transactions. Over the past three years, the use of big data analytics alone has helped Macy’s boost its revenue growth by 4 percent annually.
  • Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
  • Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.

And of course there is a host of other uses for data, as well, including security, fraud prevention, product optimization, risk reduction to the insured, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook and Google as having a monopoly on data is silly.

Second, it’s not the amount of data that leads to success but building a better mousetrap

The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.

Moreover, data itself is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such a short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.

Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little (or no) data, and they have had a lot of success. Meanwhile, despite its supposed data advantages, Google’s attempts at social networking — Google+ — have never caught up to Facebook in popularity with users (and thus not with advertisers either). And scrappy social network Ello is starting to build a significant base without any data collection for advertising at all.

At the same time, it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to compete effectively because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.

In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.

Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low

Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.

Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, powerful competitors exist in the markets in which they compete:

  • If consumers want to make a purchase, they are more likely to do their research on Amazon than Google.
  • Google flight search has failed to seriously challenge — let alone displace —  its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
  • People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
  • Pinterest, one of the most highly valued startups today, is now a serious challenger to traditional search engines when people want to discover new products.
  • With its recent acquisition of the shopping search engine, TheFind, and test-run of a “buy” button, Facebook is also gearing up to become a major competitor in the realm of e-commerce, challenging Amazon.
  • Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.

Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, as with social networking, history shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines on top of, or instead of, Google. And don't forget that Google itself was once an upstart new entrant that displaced once-household names like Yahoo and AltaVista.

Fourth, access to data is not exclusive

Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google's control over data. But the analogy between data and oil is mistaken. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way. To use an earlier example, Google knowing my birthday doesn't limit the ability of Facebook to know my birthday as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.

This is especially important when discussing data online, where multi-homing is ubiquitous, meaning many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I'm following on Twitter who also use the site for blogging. Using this feature allows WordPress to access my contact lists on these major online platforms.


Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.

If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.

Fifth, the data barrier to entry argument does not have workable antitrust remedies

The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?

It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.

On the one hand, as we’ve noted, it’s not clear this would actually accomplish much. If competitors can’t actually make good use of data, simply having more of it isn’t going to change things. At the same time, such a result would reduce the incentive to build data networks to begin with. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had to share their insights? In fact, it may well be that access to these free insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.

Other remedies limiting collection and use of data are not only outside the normal scope of antitrust remedies; they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.

It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.

If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.


On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases.  Commissioner Ohlhausen issued her first dissent since joining the Commission, and points out the folly and the danger in the Commission’s withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, “legal thinking” apparently amounts to a single 2009 article by Einer Elhauge.  But it turns out Elhauge doesn’t represent the entire current of legal thinking on this issue.  As it happens, Josh Wright and Judge Ginsburg looked at the evidence in 2010 and found no evidence of increased deterrence (of price fixing) from larger fines:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

Commissioner Ohlhausen points out in her dissent that there is no support for the claim that the Policy Statement has led to sub-optimal deterrence and quite sensibly finds no reason for the Commission to withdraw the Policy Statement.  But even more importantly Commissioner Ohlhausen worries about what the Commission’s decision here might portend:

The guidance in the Policy Statement will be replaced by this view: “[T]he Commission withdraws the Policy Statement and will rely instead upon existing law, which provides sufficient guidance on the use of monetary equitable remedies.”  This position could be used to justify a decision to refrain from issuing any guidance whatsoever about how this agency will interpret and exercise its statutory authority on any issue. It also runs counter to the goal of transparency, which is an important factor in ensuring ongoing support for the agency’s mission and activities. In essence, we are moving from clear guidance on disgorgement to virtually no guidance on this important policy issue.

An excellent point.  If the standard for the FTC issuing policy statements is the sufficiency of the guidance provided by existing law, then arguably the FTC need not offer any guidance whatever.

But as we careen toward a more and more active role on the part of the FTC in regulating the collection, use and dissemination of data (i.e., “privacy”), this sets an ominous precedent.  Already the Commission has managed to side-step the courts in establishing its policies on this issue by, well, never going to court.  As Berin Szoka noted in recent Congressional testimony:

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove harm is substantial and outweighs benefits.

This has led Berin and others to suggest — and the chorus will only grow louder — that the FTC clarify the basis for its enforcement decisions and offer clear guidance on its interpretation of the unfairness and deception standards it applies under the rubric of protecting privacy.  Unfortunately, the Commission’s reasoning in this action suggests it might well not see fit to offer any such guidance.

As I mentioned in my previous post, there is a strong effort to regulate the use of information on the web in the name of “privacy.” The basic tradeoff that drives the web is that firms use information for advertising and other purposes, and in return consumers get lots of things free.  Google alone offers about 40 free services, including the original search engine, gmail, maps, and the increasingly popular android operating system for mobile devices. Facebook is another set of free services. There are hundreds of others, all ultimately funded by advertising and the use of information.  Any effort to regulate information is going to change the terms on which these services are offered.

To justify regulation, two conditions must be met.  First there must be some market failure.  Second, there must be at least an expectation that the benefits of the proposed regulation will outweigh the costs.  In a market economy, we generally put the burden of proof on those proposing regulation, since the default assumption is that markets provide net benefits.  Proponents of regulating the use of information on the internet have met neither of these burdens.
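The two-part test above can be sketched as a trivial screen. The function name and the numbers below are purely illustrative placeholders, not real estimates from any actual cost-benefit study:

```python
def regulation_justified(market_failure: bool, benefits: float, costs: float) -> bool:
    """A regulation passes the screen only if (1) there is a market failure
    and (2) the expected benefits exceed the expected costs.
    Failing either condition means the default (no regulation) stands."""
    return market_failure and benefits > costs

# Even granting a market failure, a rule whose compliance costs exceed
# its benefits fails the second condition:
print(regulation_justified(market_failure=True, benefits=1.0, costs=5.0))   # False

# And net benefits alone are not enough without a market failure to correct:
print(regulation_justified(market_failure=False, benefits=10.0, costs=1.0)) # False
```

The point of the conjunction is that the burden of proof sits with the regulation's proponents on both prongs; the argument in this post is that neither has been met.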

One main justification for regulation is that people do not want to be tracked. I discussed this issue in my previous post.  Let me just add that, while people express a desire not to be tracked, in practice they seem quite willing to trade information for other services.  The other issue is identity theft — the possibility that information will be misused for illegitimate purposes.  Tom Lenard and I have written extensively about this issue. The bottom line, however, is that consumers are not liable for much if any of the costs of identity theft, and since firms must bear these costs there is no obvious market failure.

With respect to the second issue, there has been virtually no effort to undertake any cost benefit analysis of the proposed regulations.  However, if there were such an analysis, it is unlikely that regulations would be cost justified since the benefits of the free stuff are huge and the costs are small at best.  While it is conceivable that some tweaking would pass a cost-benefit test, it is very unlikely that any regulation which could get through the political process and then be administered by an agency such as the FTC would in fact pass this test.  Moreover, the proposed regulations, such as a “do not track” list or shifting from opt out to opt in are well beyond “tweaking” and might fundamentally change the terms of the tradeoff.

The bottom line is this:  Privacy advocates act as if privacy is free.  But increased privacy means reduced use of information, and no one has shown that altering the terms of this tradeoff would be beneficial to consumers.

Privacy and Tracking

Paul H. Rubin —  12 March 2011

First I would like to thank Geoff Manne for inviting me to join this blog.  I know most of my fellow bloggers and it is a group I am proud to be associated with.

For my first few posts I am going to write about privacy.  This is a hot topic.  Senators McCain and Kerry are floating a privacy bill, and the FTC is also looking at privacy. I have written a lot about privacy (mostly with Tom Lenard of the Technology Policy Institute, where I am a senior fellow).

The issue of the day is “tracking.”  There are several proposals for “do not track” legislation and polls show that consumers do not want to be tracked.

The entire fear of being tracked is based on an illusion.  It is a deep illusion, and difficult or impossible to eliminate, but still an illusion.   People are uncomfortable with the idea that someone knows what they are doing.  (It is “creepy.”)  But in fact no person knows what you are doing, even if you are being tracked. Only a machine knows.

As humans, we have difficulty understanding that something can be “known” but nonetheless not known by anyone.   We do not understand that we can be “tracked” but that no one is tracking us.  That is, data on our searches may exist on a server somewhere so that the server “knows” it, but no human knows it.  We don’t intuitively grasp this concept because it is entirely alien to our evolved intelligence.

In my most recent paper (with Michael Hammock, coming out in Competition Policy International) we cite two books by Clifford Nass (C. Nass & C. Yen, The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships (2010), and B. Reeves & C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (1996, 2002)).  Nass and his coauthors show that people automatically treat intelligent machines like other people.  For example, if asked to fill out a questionnaire about the quality of a computer, they rate the machine higher if they are filling out the form on the computer being rated than if it is on another computer — they don’t want to hurt the computer’s feelings.  Privacy is like that — people can’t adapt to the notion that a machine knows something. They assume (probably unconsciously) that if something is known then a person knows it, and this is why they do not like being tracked.

One final point about tracking.  Even if you are tracked, the purpose is to find out what you want and sell it to you.  Selling people things they want is the essence of the market economy, and if tracking does a better job of this, then it is helping the market function better, and also helping consumers get products that are a better fit.  Why should this make anyone mad?

Chris Hoofnagle, writing at the TAP blog about Facebook’s comprehensive privacy options (“To opt out of full disclosure of most information, it is necessary to click through more than 50 privacy buttons, which then require choosing among a total of more than 170 options.”), claims that:

This approach is brilliant. The company can appease regulators with this approach (e.g. Facebook’s Elliot Schrage is quoted as saying, “We have tried to offer the most comprehensive and detailed controls and comprehensive and detailed information about them.”), and at the same time appear to be giving consumers the maximum number of options.

But this approach is manipulative and is based upon a well-known problem in behavioral economics known as the “paradox of choice.”

Too much choice can make decisions more difficult, and once made, those choices tend to be regretted.

But most importantly, too much choice causes paralysis. This is the genius of the Facebook approach: give consumer too much choice, and they will 1) take poor choices, thereby increasing revelation of personal information and higher ROI or 2) take no choice, with the same result. In any case, the fault is the consumer’s, because they were given a choice!

Of all the policy claims made on behalf of behavioral economics, the claim that there is value in suppressing available choices is one of the most pernicious–and absurd.  First, the problem may be “well-known,” but it is not, in fact, well-established.  Citing one (famous) study purporting to find that decisions become more difficult when decision-makers are confronted with a wider range of choices is not compelling when the full range of studies demonstrates a “mean effect size of virtually zero.”  In other words, on average, more choice has no discernible effect on decision-making.

But there is more–and it is what proponents of this canard opportunistically (and disingenuously, I believe) leave out:  There is evidence (hardly surprising) that more choices lead to greater satisfaction with the decisions that are made.  And of course this is the case:  People have heterogeneous preferences.  The availability of a wider range of choices is not necessarily optimal for any given decision-maker, particularly one with already-well-formed preferences.  But a wider range of choices is more likely to include the optimal choice for the greatest number of heterogeneous decision-makers selecting from the same set of options.  Even if it is true (and it appears not to be true) that more choice impairs decision-making, there is a trade-off that advocates like Hoofnagle (not himself a behavioral economist, so I don’t necessarily want to tar the discipline with the irresponsible use of its output by outsiders with policy agendas and no expertise in the field) typically ignore.  Confronting each individual decision-maker with more choices is a by-product of offering a greater range of choices to accommodate variation across decision-makers.  Of course we can offer everyone cars only in black.  And some people will be quite happy with the outcome, and delighted also that they have avoided the terrible pain of being forced to decide among a wealth of options that they didn’t even want.  But many other people, still perhaps benefiting from avoiding the onerous decision-making process, will nevertheless be disappointed that there was no option they really preferred.