The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).
In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.
The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to grow in importance as the Internet marketplace continues to evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless the underlying theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.
PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION
The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.
First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.
Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.
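To make the measurement problem concrete, here is a toy calculation in Python. All prices, quality indices, and preference weights below are hypothetical assumptions for illustration, not figures from our paper; the point is only that the same simultaneous price cut and quality change can register as a quality-adjusted price increase for one consumer and a decrease for another.

```python
# Stylized illustration (assumed numbers): nominal price and quality move at
# the same time, and quality itself has multiple dimensions that different
# consumers weight differently.

before = {"price": 100.0, "function": 1.00, "aesthetics": 1.00}
after  = {"price":  95.0, "function": 0.85, "aesthetics": 1.10}  # smaller battery

def quality_adjusted_price(state, weights):
    """Nominal price divided by a weighted quality index."""
    quality = sum(w * state[dim] for dim, w in weights.items())
    return state["price"] / quality

# A consumer who mostly values reliable timekeeping...
function_lover = {"function": 0.8, "aesthetics": 0.2}
# ...versus one who mostly values looks.
looks_lover = {"function": 0.2, "aesthetics": 0.8}

for weights in (function_lover, looks_lover):
    print(round(quality_adjusted_price(before, weights), 2),
          round(quality_adjusted_price(after, weights), 2))
```

Under these assumed weights, the quality-adjusted price rises from 100 to about 105.56 for the function-oriented consumer but falls to about 90.48 for the aesthetics-oriented one. An enforcer looking at this single product change must decide whose welfare counts, and by how much, before it can even say whether "price" went up.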
PRICE DISCRIMINATION AS A PRIVACY HARM
If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.
The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook collect a great deal of data about their users, businesses could segment consumers into groups based on certain characteristics and offer each group different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.
This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.
While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a data-driven business would charge lower prices to those able to pay more while charging higher prices to those who cannot afford them. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.
If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to claim that the practice reduces consumer welfare, even setting aside its effects on total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
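The magnitudes question can be made concrete with a toy model in Python. All willingness-to-pay numbers below are assumptions for illustration and come from neither our paper nor the White House report; the exercise simply compares a uniform monopoly price against data-enabled personalized pricing.

```python
# Toy model (assumed numbers): ten consumers with declining valuations for a
# service; the seller's marginal cost is assumed to be zero for simplicity.
willingness_to_pay = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]

def outcome(prices):
    """Units sold and total consumer surplus when consumer i faces prices[i]."""
    sales = [(wtp, p) for wtp, p in zip(willingness_to_pay, prices) if wtp >= p]
    units = len(sales)
    surplus = sum(wtp - p for wtp, p in sales)
    return units, surplus

# Uniform pricing: the seller picks the single profit-maximizing price.
best_uniform = max(sorted(set(willingness_to_pay)),
                   key=lambda p: p * sum(wtp >= p for wtp in willingness_to_pay))

uniform_units, uniform_surplus = outcome([best_uniform] * len(willingness_to_pay))

# "Personalized" pricing: data lets the seller charge each consumer just below
# his or her valuation, so previously excluded low-value consumers are served.
personal_units, personal_surplus = outcome([wtp - 1 for wtp in willingness_to_pay])

print(best_uniform, uniform_units, uniform_surplus)  # 5 6 15
print(personal_units, personal_surplus)              # 10 10
```

Under these assumed numbers, output expands from six units to ten: the four lowest-valuation consumers go from being excluded entirely to enjoying positive surplus, while higher-valuation consumers pay more. Whether the practice helps or harms "consumers" as a whole is exactly the empirical question of magnitudes noted above.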
Both of these theories of harm are predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn use their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:
Data is useful to all industries, not just online companies;
It’s not the amount of data, but how you use it;
Competition online is one click or swipe away; and
Access to data is not exclusive.
Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.
Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.
A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.
According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.
In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.
Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.
FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.
In fact, even critics like Newman and Grunes generally agree with Commissioner Ohlhausen on these basic points. But, as mentioned above, views diverge over whether some privacy harms should nevertheless factor into competition analysis, and over whether there is in fact a data barrier to entry that makes those harms possible.
As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And because all of the theories of “privacy as an antitrust harm” are essentially predicated on this premise, they are meritless.
First, data is useful to all industries — this is not some new phenomenon particular to online companies
It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:
Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.
And of course there is a host of other uses for data as well, including security, fraud prevention, product optimization, risk reduction in insurance, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook and Google as having a monopoly on data is silly.
Second, it’s not the amount of data that leads to success but building a better mousetrap
The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.
Moreover, data itself is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.
Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little or no data, and all have been highly successful. Meanwhile, despite its supposed data advantages, Google’s attempt at social networking — Google+ — has never caught up to Facebook in popularity among users (and thus among advertisers, either). And the scrappy social network Ello is starting to build a significant base without collecting data for advertising at all.
At the same time, it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, cross-market profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar, which had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to compete effectively because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.
In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.
Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low
Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.
Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, powerful competitors exist in each of the markets in which they operate:
Google flight search has failed to seriously challenge — let alone displace — its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
With its recent acquisition of the shopping search engine TheFind, and its test run of a “buy” button, Facebook is also gearing up to become a major competitor in e-commerce, challenging Amazon.
Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.
Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, as with social networking, history shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines alongside, or instead of, Google. And don’t forget that Google itself was once an upstart new entrant that displaced once-household names like Yahoo and AltaVista.
Fourth, access to data is not exclusive
Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google’s control over data. But the analogy between data and oil is deeply flawed. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way. To use an earlier example, Google knowing my birthday doesn’t limit Facebook’s ability to know my birthday as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.
This is especially important online, where multi-homing is ubiquitous and many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I follow on Twitter who also use the site for blogging; the feature works precisely because WordPress is given access to my contact lists on each of those services.
Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.
If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.
Fifth, the data barrier to entry argument does not have workable antitrust remedies
The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?
It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.
For one thing, as we’ve noted, it’s not clear that forced sharing would actually accomplish much: if competitors can’t make good use of data, simply having more of it isn’t going to change things. For another, such a remedy would reduce the incentive to build data networks in the first place. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had been forced to share their insights? In fact, it may well be that free access to these insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.
Other remedies limiting the collection and use of data are not only outside the normal scope of antitrust remedies; they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.
It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.
If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.
The International Center for Law & Economics (ICLE) and TechFreedom filed two joint comments with the FCC today, explaining why the FCC has no sound legal basis for micromanaging the Internet and why “net neutrality” regulation would actually prove counter-productive for consumers.
New regulation is unnecessary. “An open Internet and the idea that companies can make special deals for faster access are not mutually exclusive,” said Geoffrey Manne, Executive Director of ICLE. “If the Internet really is ‘open,’ shouldn’t all companies be free to experiment with new technologies, business models and partnerships?”
“The media frenzy around this issue assumes that no one, apart from broadband companies, could possibly question the need for more regulation,” said Berin Szoka, President of TechFreedom. “In fact, increased regulation of the Internet will incite endless litigation, which will slow both investment and innovation, thus harming consumers and edge providers.”
Title II would be a disaster. The FCC has proposed re-interpreting the Communications Act to classify broadband ISPs under Title II as common carriers. But reinterpretation might unintentionally ensnare edge providers, weighing them down with onerous regulations. “So-called reclassification risks catching other Internet services in the crossfire,” explained Szoka. “The FCC can’t easily forbear from Title II’s most onerous rules because the agency has set a high bar for justifying forbearance. Rationalizing a changed approach would be legally and politically difficult. The FCC would have to simultaneously find the broadband market competitive enough to forbear, yet fragile enough to require net neutrality rules. It would take years to sort out this mess — essentially hitting the pause button on better broadband.”
Section 706 is not a viable option. In 2010, the FCC claimed Section 706 as an independent grant of authority to regulate any form of “communications” not directly barred by the Act, provided only that the Commission assert that regulation would somehow promote broadband. “This is an absurd interpretation,” said Szoka. “This could allow the FCC to essentially invent a new Communications Act as it goes, regulating not just broadband, but edge companies like Google and Facebook, too, and not just neutrality but copyright, cybersecurity and more. The courts will eventually strike down this theory.”
A better approach. “The best policy would be to maintain the ‘Hands off the Net’ approach that has otherwise prevailed for 20 years,” said Manne. “That means a general presumption that innovative business models and other forms of ‘prioritization’ are legal. Innovation could thrive, and regulators could still keep a watchful eye, intervening only where there is clear evidence of actual harm, not just abstract fears.” “If the FCC thinks it can justify regulating the Internet, it should ask Congress to grant such authority through legislation,” added Szoka. “A new communications act is long overdue anyway. The FCC could also convene a multistakeholder process to produce a code enforceable by the Federal Trade Commission,” he continued, noting that the White House has endorsed such processes for setting Internet policy in general.
Manne concluded: “The FCC should focus on doing what Section 706 actually commands: clearing barriers to broadband deployment. Unleashing more investment and competition, not writing more regulation, is the best way to keep the Internet open, innovative and free.”
For some of our other work on net neutrality, see:
“Understanding Net(flix) Neutrality,” an op-ed by Geoffrey Manne in the Detroit News on Netflix’s strategy to confuse interconnection costs with neutrality issues.
Drip pricing is a pricing technique in which firms advertise only part of a product’s price and reveal other charges later as the customer goes through the buying process. The additional charges can be mandatory charges, such as hotel resort fees, or fees for optional upgrades and add-ons. Drip pricing is used by many types of firms, including internet sellers, automobile dealers, financial institutions, and rental car companies.
Economists and marketing academics will be brought together to examine the theoretical motivation for drip pricing and its impact on consumers, empirical studies, and policy issues pertaining to drip pricing. The sessions will address the following questions: Why do firms engage in drip pricing? How does drip pricing affect consumer search? Where does drip pricing occur? When is drip pricing harmful? Are there efficiency justifications for the practice in some situations? Can competition prevent firms from harming consumers through drip pricing? Can consumer experience or firm reputation limit harm from drip pricing? What types of policies could lead to improved consumer decision making and under what circumstances should such policies be applied?
The workshop, which will be free and open to the public, will be held at the FTC’s Conference Center, located at 601 New Jersey Avenue, N.W., Washington, DC. A government-issued photo ID is required for entry. Pre-registration for this workshop is not necessary, but is encouraged, so that we may better plan for the event.
Here is the conference agenda:
Welcome and Opening Remarks Jon Leibowitz, Chairman, Federal Trade Commission
Overview of Drip Pricing Mary Sullivan, Federal Trade Commission
Consumer and Competitive Effects of Obscure Pricing Joseph Farrell, Director, Bureau of Economics, Federal Trade Commission
Theories of Drip Pricing Chair, Doug Smith, Federal Trade Commission
David Laibson, Harvard University
Michael Baye, Indiana University
Michael Waldman, Cornell University
Michael Salinger, Boston University
Keynote Address Amelia Fletcher, Chief Economist, Office of Fair Trading, UK
Empirical Analysis of Drip Pricing Chair, Erez Yoeli, Federal Trade Commission
Vicki Morwitz, New York University
Meghan Busse, Northwestern University
Sara Fisher Ellison, Massachusetts Institute of Technology
Jonathan Zinman, Dartmouth College
Public Policy Roundtable
Moderator, Mary Sullivan, Federal Trade Commission
Michael Baye, Indiana University
Sara Fisher Ellison, Massachusetts Institute of Technology
The Federal Communications Commission’s Network Neutrality Order regulates how broadband networks explain their services to customers, mandates that subscribers be permitted to deploy whatever computers, mobile devices, or applications they like with the network access service they purchase, and prohibits unreasonable discrimination in network management, such that Internet Service Provider efforts to maintain service quality (e.g., mitigating congestion) or to price and package their services must not burden rival applications.
This paper offers a legal and economic critique of the new Network Neutrality policy, and particularly of the no-blocking and no-discrimination rules. While we argue that the FCC’s rules are likely to be declared beyond the scope of the agency’s charter, we focus upon the economic impact of net neutrality regulations. It is beyond paradoxical that the FCC argues that it is imposing new regulations so as to preserve the Internet’s current economic structure; that structure has developed in an unregulated environment where firms are free to experiment with business models – and vertical integration – at will. We demonstrate that Network Neutrality goes far further than existing law, categorically prohibiting various forms of economic integration in a manner equivalent to antitrust’s per se rule, properly reserved for conduct that is so likely to cause competitive harm that the marginal benefit of a fact-intensive analysis cannot be justified. Economic analysis demonstrates that Network Neutrality cannot be justified on consumer welfare grounds. Further, the Commission’s attempt to justify its new policy simply ignores compelling evidence that “open access” regulations have distorted broadband build-out in the United States, visibly reducing subscriber growth when imposed and visibly increasing subscriber growth when repealed. On the other hand, the FCC manages to cite just one study – not of the broadband market – to support its claims of widespread foreclosure threats. That empirical study, upon closer scrutiny than the Commission appears to have given it, actually shows no evidence of anticompetitive foreclosure. This fatal analytical flaw constitutes a smoking gun in the FCC’s economic analysis of net neutrality.
Read the whole thing. Under review at a law review near you …
Have you ever had to get on your hands and knees at Office Depot to find precisely the right printer cartridge? It’s maddening, no? Why can’t the printer manufacturers just settle on a single design configuration, the way lamp manufacturers use common light bulbs?
You might think the printer manufacturer is trying to enhance its profits simply by forcing you to buy two of its products (the printer + the manufacturer’s own ink cartridge) rather than one (just the printer). But that story is wrong (or, at best, incomplete). Printers tend to be sufficiently brand-differentiated to enable manufacturers to charge a price above their marginal cost. Ink, by contrast, is more like a commodity, so competition among ink manufacturers should drive price down near the level of marginal cost. A printer manufacturer could fully exercise its market power over its printer — i.e., its ability to profitably charge a printer price that exceeds the printer’s cost — by raising the price of its printer alone. It could not enhance its profits by charging that price and then requiring purchasers to buy its ink cartridge at some above-cost price. Consumers would view the requirement to purchase the manufacturer’s “supracompetitively priced” ink cartridge as tantamount to an increase in the price of the printer itself, so the manufacturer’s tie-in would effectively raise the printer price above profit-maximizing levels (i.e., profits would fall, despite the higher effective price, because too many “marginal” consumers — those who value the manufacturer’s printer the least — would curtail their purchases).
If printer buyers consume multiple ink cartridges, though, a printer manufacturer may enhance its profits by tying its printer and its ink cartridges in an attempt to price discriminate among consumers. The manufacturer would lower its printer price from the profit-maximizing level to some level closer to (but still at or above) its cost, raise the price of its ink cartridge above the competitive level (which should approximate its marginal cost), and require purchasers of its printer to use the manufacturer’s (supracompetitively priced) ink cartridges. This tack enables the manufacturer to charge higher effective prices to high-intensity users, who are likely to value the printer the most, and lower (but still above-cost) prices to low-intensity users, who likely value the printer the least. Economists call this sort of tying arrangement a “metering tie-in” because it aims to meter demand for the seller’s tying product (the printer) and charge an effective price that corresponds to a buyer’s likely willingness to pay.
When a seller imposes a metering tie-in, higher-intensity consumers get less “surplus” from their purchases (the difference between the value they place on what they’re buying and their outlays), but total market output tends to increase, as the manufacturer sells printers to some buyers who value the printer below the amount the manufacturer would charge for the printer alone (i.e., the profit-maximizing, single-product price).
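A rough numeric sketch of the metering story, with hypothetical numbers of my own (three consumer types whose printer valuations scale with ink use; none of these figures come from the post): under a single printer price, the seller does best selling only to the heaviest user, but a metering tie both raises profit and expands output.

```python
# Assumed consumer types (hypothetical): willingness to pay for the
# printer (with ink competitively priced) rises with ink consumption.
consumers = [
    {"value": 30,  "cartridges": 1},   # low-intensity user
    {"value": 60,  "cartridges": 3},   # mid-intensity user
    {"value": 120, "cartridges": 8},   # high-intensity user
]
PRINTER_COST = 20  # marginal cost of a printer (assumed)

def uniform_outcome():
    """Best single printer price; ink sells at its competitive price."""
    best = (0, 0)  # (profit, printers sold)
    for p in {c["value"] for c in consumers}:  # optimum lies at a valuation
        buyers = [c for c in consumers if c["value"] >= p]
        best = max(best, (len(buyers) * (p - PRINTER_COST), len(buyers)))
    return best

def metering_outcome():
    """Printer sold at cost, ink marked up by r per cartridge, so a
    buyer's effective price is PRINTER_COST + r * cartridges."""
    best = (0, 0)
    for r in range(0, 51):  # coarse integer grid over ink markups
        buyers = [c for c in consumers
                  if c["value"] >= PRINTER_COST + r * c["cartridges"]]
        best = max(best, (sum(r * c["cartridges"] for c in buyers),
                          len(buyers)))
    return best

print(uniform_outcome())   # → (100, 1): only the heavy user buys
print(metering_outcome())  # → (132, 2): more profit AND more printers sold
```

The metering tie leaves the heavy user with less surplus but brings the mid-intensity buyer (who values the printer below the $120 single-product price) into the market — the output effect described above.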
In his recent high-profile article, Tying, Bundled Discounts, and the Death of the Single Monopoly Profit Theory, Professor Einer Elhauge contends that metering tie-ins like the one described above tend to reduce total and consumer welfare. He maintains that tie-ins of the type described are a form of welfare-reducing “third-degree” price discrimination. He illustrates his point using a stylized example involving a printer manufacturer who sells consumers up to three ink cartridges.
In a response to Professor Elhauge’s interesting article, I attempted to show that his welfare analysis turns on his assumption that printer buyers use only 1, 2, or 3 ink cartridges. I demonstrated that Professor Elhauge’s hypo generates a different outcome — even assuming that this sort of metering tie-in is “third-degree” price discrimination — if ink cartridges are smaller, so that high-intensity consumers purchase 4 or more ink cartridges.
In some very helpful comments on my forthcoming response article, Professor Herbert Hovenkamp observed that there is a bigger problem with Elhauge’s analysis: It assumes that the price discrimination here is third-degree price discrimination, when in fact it is second-degree price discrimination.
Below the fold, I discuss Elhauge’s analysis, my initial response (which remains valid), and the more fundamental problem Hovenkamp observed. (And for those interested, please download my revised response article, which now contains both my original and Hovenkamp’s arguments.)
The Ninth Circuit recently issued a decision that pushes the doctrine governing tying in the right direction. If appealed, the decision could provide the Roberts Court with an opportunity to do for tying what its Leegin decision did for resale price maintenance: reduce error costs by bringing an overly prohibitory liability rule in line with economic learning. First, some background on the law and economics of tying. Then, a little about the Ninth Circuit’s decision.
Some Background on the Law and Economics of Tying
Tying (or a “tie-in”) occurs when a monopolist sells its monopoly “tying” product on the condition that the buyer also purchase some “tied” product. Under prevailing doctrine, tying is per se illegal if: (1) the tie-in involves two truly separate products (e.g., a patented printer and unpatented ink, not a left shoe and a right shoe), (2) the seller possesses monopoly power over the tying product, and (3) the tie-in affects a “not insubstantial” dollar volume (not share) of commerce in the tied product market (e.g., $50,000 or so will suffice).
Scholars from both the Chicago and Harvard Schools of antitrust analysis (including yours truly) have argued that this rule is too prohibitory and that tie-ins should be condemned only when they foreclose a substantial percentage of sales opportunities in the tied product market. This sort of rule of reason approach, we maintain, would prevent liability for tie-ins that could not possibly be anticompetitive and would align tying doctrine with the liability rule governing tying’s close cousin, exclusive dealing. The governing per se rule, we contend, is a relic of the days when courts believed that a monopolist could immediately earn two monopoly profits by tying in a separate product and charging both a supracompetitive price for that tied product and the monopoly price for its monopoly product. This so-called leverage theory has been debunked. (Consumers will view the supracompetitive tied product price as an increase in the price of the tying product, which will push the tying product price above the profit-maximizing level and cause the seller to lose profits. In short, there is only one monopoly profit to exploit, and the seller can do so by charging its profit-maximizing monopoly price for the monopoly product alone.)
A couple of years ago, Harvard Law’s Einer Elhauge published a much-discussed article arguing that we critics of current tying doctrine are wrong. Prevailing doctrine, Elhauge argued, is appropriate because tie-ins can cause anticompetitive effects even if they do not occasion substantial tied market foreclosure. In particular, a tie-in can permit a seller to price discriminate among consumers and thereby extract a greater proportion of the trade surplus for itself. For example, in a variable proportion tie-in (one where there is no fixed ratio between the number of tying and tied units purchased, as when a buyer of a printer is required to purchase all his ink requirements from the printer seller), the seller can price discriminate by tying in a complement (ink) whose consumption corresponds to the degree to which consumers value the tying product (e.g., consumers who most value the printer likely buy lots of ink). By lowering the price of the tying product (the printer) from monopoly levels and charging a supracompetitive price for the tied product (the ink), the seller can effectively charge higher prices to consumers who value the tying product more, thereby capturing more surplus for itself. Elhauge argues (incorrectly, as I show in this article) that this is an anticompetitive effect.
A second form of “anticompetitive” price discrimination, Elhauge contends, may result from fixed proportion tie-ins of products for which demand is not positively correlated. George Stigler provided the classic example of this dynamic in his discussion of the Loew’s case, which involved the block booking of feature films (i.e., selling the films only in packages).
Suppose, for example, that a firm has two customers, A and B; that A values product X at $8,000 and product Y at $2,500; and that B values product X at $7,000 and product Y at $3,000. (For simplicity’s sake, assume that the marginal cost of both products is zero.) If the firm were to sell the products separately, it would charge $7,000 for X and $2,500 for Y, and it would earn profits of $19,000 ($9,500 x 2). By tying the products together and selling them as a bundle, the seller can charge a total of $10,000 per customer, an amount less than or equal to each customer’s reservation price for the package, thereby earning profits of $20,000. While each consumer is charged the same amount for the package, the pricing is in some sense discriminatory, for the seller effectively discriminates against A, the low-elasticity X buyer, on A’s purchase of X and against B, the low-elasticity Y buyer, on B’s purchase of Y. (This is because, absent the tying of X and Y, A would have enjoyed surplus of $1,000 on X but no surplus on Y, and B would have enjoyed surplus of $500 on Y but no surplus on X.) By engaging in this sort of price discrimination, the seller may enhance its profits for, as Judge Posner explains, “When the products are priced separately, the price is depressed by the buyer who values each one less than the other buyer does; the bundling eliminates this effect.”
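The arithmetic in Stigler’s example can be checked directly. This sketch uses exactly the reservation prices from the paragraph above (marginal costs of zero, as assumed there):

```python
# Reservation prices from the block-booking example in the post.
values = {
    "A": {"X": 8000, "Y": 2500},
    "B": {"X": 7000, "Y": 3000},
}

# Separate pricing: each product sells at the lower of the two
# reservation prices, so both customers buy both goods. (Selling X to
# both at $7,000 beats selling only to A at $8,000.)
price_X = min(v["X"] for v in values.values())  # 7,000
price_Y = min(v["Y"] for v in values.values())  # 2,500
separate_profit = 2 * (price_X + price_Y)       # $19,000

# Bundling: charge the smaller of the two package valuations, so both
# customers still buy.
bundle_price = min(v["X"] + v["Y"] for v in values.values())  # 10,000
bundle_profit = 2 * bundle_price                              # $20,000

# Surplus each buyer keeps under separate pricing -- exactly what the
# bundle extracts (A's $1,000 on X; B's $500 on Y).
surplus_separate = {k: (v["X"] - price_X) + (v["Y"] - price_Y)
                    for k, v in values.items()}
print(separate_profit, bundle_profit, surplus_separate)
```

The $1,000 gain in profit from bundling is precisely the surplus the low-elasticity buyers kept under separate pricing, minus the $500 the bundle leaves with A — Posner’s point that bundling eliminates the price-depressing effect of the lower-value buyer.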
According to Elhauge, the sort of price discrimination/surplus extraction occasioned by “Stigler-type” tying, like the price discrimination resulting from a variable proportion “metering” tie-in, is anticompetitive and justifies the prevailing liability rule against tying. In a subsequent post, I will explain why Elhauge is wrong and why Stigler-type price discrimination is output-enhancing and thus procompetitive. For now, though, let’s consider the Ninth Circuit’s recent case, which rejected Elhauge’s view.
The Ninth Circuit’s Recent Brantley Decision
Brantley, et al. v. NBC Universal, Inc., et al., involved a challenge by cable television subscribers to T.V. programmers’ practice of selling cable channels only in packages. The plaintiffs, who preferred to purchase individual channels a la carte, maintained that the programmers’ policy violated Sherman Act Section 1. As the Ninth Circuit correctly recognized, the arrangement really amounted to tying, for the programmers would sell their “must have” channels only if subscribers would also take other, less desirable channels. (Indeed, the practice is closely analogous to the block booking at issue in Loew’s, where the distributor required that licensees of popular films also license flops.)
The district court dismissed plaintiffs’ first complaint without prejudice on the ground that plaintiffs failed to allege that their injuries (purportedly higher prices) were caused by an injury to competition. Plaintiffs then amended their complaint to include an allegation “that Programmers’ practice of selling bundled cable channels foreclosed independent programmers from entering and competing in the upstream market for programming channels.” In other words, plaintiffs alleged, the tying at issue occasioned substantial tied market foreclosure.
After conducting some discovery, plaintiffs decided to abandon that theory of harm. They prepared a new complaint that omitted all market foreclosure allegations and asked the court to rule “that plaintiffs did not have to allege that potential competitors were foreclosed from the market in order to defeat a motion to dismiss.” Defendants again sought to dismiss the complaint. The district court, reasoning that the plaintiffs had failed to allege any cognizable injury to competition, granted defendants’ motion to dismiss, and plaintiffs appealed.
Given this procedural posture, the Ninth Circuit squarely confronted the question whether, as Elhauge maintains, the price discrimination/surplus extraction inherent in Stigler-type bundling is an “anticompetitive” effect that warrants liability. In affirming the district court and holding that plaintiffs’ claims of higher prices were not enough to establish anticompetitive harm, it effectively held, as I and a number of others have urged, that there should be no tying liability absent substantial tied market foreclosure.
This holding, while correct as a policy matter, seems to conflict with the Supreme Court’s quasi-per se rule. That rule assigns automatic liability if the tie-in involves multiple products (it did here), the seller has monopoly power over the tying product (it did here), and the tie-in involves a not insubstantial dollar volume of commerce in the tied product market (it did here). Thus, the Ninth Circuit has provided the Supreme Court with a perfect opportunity to revisit the liability rule governing tying.
I for one am hoping that the Brantley plaintiffs appeal and that the Supreme Court agrees to take the case and reconsider the prerequisites to tying liability. If it does so, I predict that it will overrule its Jefferson Parish decision, jettison the quasi-per se rule against tying, and hold that there can be no tying liability absent substantial foreclosure of marketing opportunities in the tied product market.
In recent years, antitrust scholars have largely agreed on a couple of propositions involving tying and bundled discounting. With respect to tying (selling one’s monopoly “tying” product only on the condition that buyers also purchase another “tied” product), scholars from both the Chicago and Harvard Schools of antitrust analysis have generally concluded that there should be no antitrust liability unless the tie-in results in substantial foreclosure of marketing opportunities in the tied product market. Absent such foreclosure, scholars have reasoned, truly anticompetitive harm is unlikely to occur. The prevailing liability rule, however, condemns tie-ins without regard to whether they occasion substantial tied market foreclosure.
With respect to bundled discounting (selling a package of products for less than the aggregate price of the products if purchased separately), scholars have generally concluded that there should be no antitrust liability if the discount at issue could be matched by an equally efficient single-product rival of the discounter. That will be the case if each product in the bundle is priced above cost after the entire bundled discount is attributed to that product. Antitrust scholars have therefore generally endorsed a safe harbor for bundled discounts that are “above cost” under a “discount attribution test.”
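The discount attribution test described above is mechanical enough to sketch in a few lines. All the numbers here are hypothetical (a $100 monopoly product, a $50 competitive product, a $130 bundle; nothing is drawn from an actual case): attribute the entire bundled discount to each product in turn and ask whether its effective price still covers the seller’s cost.

```python
# Hypothetical prices and costs (assumptions, not from the post).
standalone = {"monopoly": 100, "competitive": 50}  # unbundled prices
cost = {"monopoly": 60, "competitive": 40}         # seller's per-unit costs
bundle_price = 130                                 # package price

# Total discount off the sum of the standalone prices.
discount = sum(standalone.values()) - bundle_price  # 20

# Discount attribution test: attribute the ENTIRE discount to each
# product in turn; the bundle sits in the safe harbor only if every
# resulting effective price is at or above that product's cost.
passes_safe_harbor = all(standalone[p] - discount >= cost[p]
                         for p in standalone)
print(discount, passes_safe_harbor)  # → 20 False
```

Here the competitive product’s effective price ($50 − $20 = $30) falls below its $40 cost, so an equally efficient single-product rival could not match the discount, and the bundle falls outside the safe harbor — the kind of price-cost comparison Elhauge would have courts abandon.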
In an article appearing in the December 2009 Harvard Law Review, Harvard law professor Einer Elhauge challenged each of these near-consensus propositions. According to Elhauge, the conclusion that significant tied market foreclosure should be a prerequisite to tying liability stems from scholars’ naïve acceptance of the Chicago School’s “single monopoly profit” theory. Elhauge insists that the theory is infirm and that instances of tying may occasion anticompetitive “power” (i.e., price discrimination) effects even if they do not involve substantial tied market foreclosure. He maintains that the Supreme Court has deemed such effects to be anticompetitive and that it was right to do so.
With respect to bundled discounting, Elhauge calls for courts to forgo price-cost comparisons in favor of a rule that asks whether the defendant seller has “coerced” consumers into buying the bundle by first raising its unbundled monopoly (“linking”) product price above the “but-for” level that would prevail absent the bundled discounting scheme and then offering a discount from that inflated level.
I have just posted to SSRN an article criticizing Elhauge’s conclusions on both tying and bundled discounting. On tying, the article argues, Elhauge makes both descriptive and normative mistakes. As a descriptive matter, Supreme Court precedent does not deem the so-called power effects (each of which was well-known to Chicago School scholars) to be anticompetitive. As a normative matter, such effects should not be regulated because they tend to enhance total social welfare, especially when one accounts for dynamic efficiency effects. Because tying can create truly anticompetitive effect only when it involves substantial tied market foreclosure, such foreclosure should be a prerequisite to liability.
On bundled discounting, I argue, Elhauge’s proposed rule would be a disaster. The rule fails to account for the fact that bundled discounts may create immediate consumer benefit even if the seller has increased unbundled linking prices above but-for levels. It is utterly inadministrable and would chill procompetitive instances of bundled discounting. It is motivated by a desire to prevent “power” effects that are not anticompetitive under governing Supreme Court precedent (and should not be deemed so). Accordingly, courts should reject Elhauge’s proposed rule in favor of an approach that first focuses on the genuine prerequisite to discount-induced anticompetitive harm—“linked” market foreclosure—and then asks whether any such foreclosure is anticompetitive in that it could not be avoided by a determined competitive rival. To implement such a rule, courts would need to apply the discount attribution test.
The paper is a work-in-progress. Herbert Hovenkamp has already given me a number of helpful comments, which I plan to incorporate shortly. In the meantime, I’d love to hear what TOTM readers think.