Archives For antitrust

The precise details underlying the European Commission’s (EC) April 15 Statement of Objections (SO), the EC’s equivalent of an antitrust complaint, against Google, which centers on the company’s promotion of its comparison shopping service (CSS), “Google Shopping,” have not yet been made public.  Nevertheless, the EC’s fact sheet describing the theory of the case is most discouraging to anyone who believes in economically sound, consumer welfare-oriented antitrust enforcement.  Put simply, the SO alleges that Google is “abusing its dominant position” in online search services throughout Europe by systematically positioning and prominently displaying its CSS in its general search result pages, “irrespective of its merits,” causing the Google CSS to achieve higher rates of growth than CSSs promoted by rivals.  According to the EC, this behavior “has a negative impact on consumers and innovation”.  Why so?  Because this “means that users do not necessarily see the most relevant shopping results in response to their queries, and that incentives to innovate from rivals are lowered as they know that however good their product, they will not benefit from the same prominence as Google’s product.”  (Emphasis added.)  The EC’s proposed solution?  “Google should treat its own comparison shopping services and those of rivals in the same way.”

The EC’s latest action may represent only “the tip of a Google EC antitrust iceberg,” since the EC has stated that it is continuing to investigate other aspects of Google’s behavior, including Google agreements with respect to the Android operating system, plus “the favourable treatment by Google in its general search results of other specialised search services, and concerns with regard to copying of rivals’ web content (known as ‘scraping’), advertising exclusivity and undue restrictions on advertisers.”  For today, I focus on the tip, leaving consideration of the bulk of the iceberg to future commentaries, as warranted.  (Truth on the Market has addressed Google-related antitrust issues previously — see, for example, here, here, and here.)

The EC’s April 15 Google SO is troublesome in multiple ways.

First, the claim that Google does not “necessarily” array the most relevant search results in a manner desired by consumers appears to be in tension with the findings of an exhaustive U.S. antitrust investigation of the company.  As U.S. Federal Trade Commissioner Josh Wright pointed out in a recent speech, the FTC’s 2013 “closing statement [in its Google investigation] indicates that Google’s so-called search bias did not, in fact, harm consumers; to the contrary, the evidence suggested that ‘Google likely benefited consumers by prominently displaying its vertical content on its search results page.’  The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior – so-called ‘click through’ data – which showed how consumers reacted to Google’s promotion of its vertical properties.”

Second, even assuming that Google’s search engine practices have weakened competing CSSs, that would not justify EC enforcement action against Google.  As Commissioner Wright also explained, the FTC “accepted arguments made by competing websites that Google’s practices injured them and strengthened Google’s market position, but correctly found that these were not relevant considerations in a proper antitrust analysis focused upon consumer welfare rather than harm to competitors.”  The EC should keep this in mind, given that, as former EC Competition Commissioner Joaquin Almunia emphasized, “[c]onsumer welfare is not just a catchy phrase.  It is the cornerstone, the guiding principle of EU competition policy.”

Third, and perhaps most fundamentally, although the EC disclaims an interest in “interfer[ing] with” Google’s search engine algorithm, dictating an “equal treatment of competitors” result implicitly would require intrusive micromanagement of Google’s search engine – a search engine which is at the heart of the company’s success and has bestowed enormous welfare benefits on consumers and producers alike.  There is no reason to believe that EC policing of Google CSS listings to promote an “equal treatment of competitors” mandate would result in a search experience that better serves consumers than the current Google policy.  Consistent with this point, in its 2013 Google closing statement, the FTC observed that it lacked the ability to “second-guess” product improvements that plausibly benefit consumers, and it stressed that “condemning legitimate product improvements risks harming consumers.”

Fourth, competing CSSs have every incentive to inform consumers if they believe that Google search results are somehow “inferior” to their offerings.  They are free to advertise and publicize the merits of their services, and third-party intermediaries that rate browsers may be expected to report if Google Shopping consistently offers suboptimal consumer services.  In short, “the word will get out.”  Even in the absence of perfect information, consumers can readily, and at low cost, browse alternative CSSs to determine whether they prefer their services to Google’s – “help is only a click away.”

Fifth, the most likely outcome of an EC “victory” in this case would be a reduced incentive for Google to invest in improving its search engine, knowing that its ability to monetize search engine improvements could be compromised by future EC decisions to prevent an improved search engine from harming rivals.  What’s worse, other developers of service platforms and other innovative business improvements would similarly “get the message” that it would not be worth their while to innovate to the point of dominance, because their returns to such innovation would be constrained.  In sum, companies in a wide variety of sectors would have less of an incentive to innovate, and this in turn would lead to reduced welfare gains and benefits to consumers.  This would yield (as the EC’s fact sheet put it) “a negative impact on consumers and innovation”, because companies across industries operating in Europe would know that if their product were too good, they would attract the EC’s attention and be put in their place.  In other words, a successful EC intervention here could spawn the very welfare losses (magnified across sectors) that the Commission cited as justification for reining in Google in the first place!

Finally, it should come as no surprise that a coalition of purveyors of competing search engines and online shopping sites lobbied hard for EC antitrust action against Google.  When government intervenes heavily and often in markets to “correct” perceived “abuses,” private actors have a strong incentive to expend resources on achieving government actions that disadvantage their rivals – resources that could otherwise have been used to compete more vigorously and effectively.  In short, the very existence of expansive regulatory schemes disincentivizes competition on the merits, and in that regard tends to undermine welfare.  Government officials should keep that firmly in mind when private actors urge them to act decisively to “cure” marketplace imperfections by limiting a rival’s freedom of action.

Let us hope that the EC takes these concerns to heart before taking further action against Google.

By a 3-2 vote, the Federal Communications Commission (FCC) decided on February 26 to preempt state laws in North Carolina and Tennessee that bar municipally owned broadband providers from providing services beyond their geographic boundaries.  This decision raises substantial legal issues and threatens economic harm to state taxpayers and consumers.

The narrow FCC majority rested its decision on its authority to remove broadband investment barriers, citing Section 706 of the Telecommunications Act of 1996.  Section 706 requires the FCC to encourage the deployment of broadband to all Americans by using “measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment.”  As dissenting Commissioner Ajit Pai pointed out, however, Section 706 contains no specific language empowering the FCC to preempt state laws, and the FCC’s action trenches upon the sovereign power of the states to control their subordinate governmental entities.  Moreover, it is far from clear that authorizing government-owned broadband companies to expand into new territories promotes competition or eliminates broadband investment barriers.  Indeed, the opposite is more likely to be the case.

Simply put, government-owned networks artificially displace market forces and are an affront to a reliance on free competition to provide the goods and services consumers demand – including broadband communications.  Government-owned networks use local taxpayer monies and federal grants (also taxpayer funded, of course) to compete unfairly with existing private sector providers.  Those taxpayer subsidies put privately funded networks at a competitive disadvantage, creating barriers to new private sector entry or expansion, as private businesses decide they cannot fairly compete against government-backed enterprises.  In turn, reduced private sector investment tends to diminish quality and effective consumer choice.

These conclusions are based on hard facts, not mere theory.  There is no evidence that municipal broadband is needed because “market failure” has deterred private sector provision of broadband – indeed, firms such as Verizon, AT&T, and Comcast spend many billions of dollars annually to maintain, upgrade, and expand their broadband networks.  Far more serious is the risk of “government failure.”  Municipal corporations, free from market discipline and accountability due to their public funding, may be expected to be bureaucratic, inefficient, and slow to react to changing market conditions.  Consistent with this observation, an economic study of government-operated municipal broadband networks reveals failures to achieve universal service in areas that they serve; lack of cost-benefit analysis that has caused costs to outweigh benefits; the inefficient use of scarce resources; the inability to cover costs; anticompetitive behavior fueled by unfair competitive advantages; the inefficient allocation of limited tax revenues that are denied to more essential public services; and the stifling of private firm innovation.  In a time of tight budget constraints, the waste of taxpayer funds and the competitive harm stemming from municipal broadband activities are particularly unfortunate.  In short, real world evidence demonstrates that “[i]n a dynamic market such as broadband services, government ownership has proven to be an abject failure.”  What is required is not more government involvement, but, rather, fewer governmental constraints on private sector broadband activities.

Finally, the FCC’s decision has harmful constitutional overtones.  The Chattanooga, Tennessee and Wilson, North Carolina municipal broadband networks that requested FCC preemption impose troublesome speech limitations as conditions of service.  The utility that operates the Chattanooga network may “reject or remove any material residing on or transmitted to or through” the network that violates its “Accepted Use Policy.”  That Policy, among other things, prohibits using the network to send materials that are “threatening, abusive or hateful” or that offend “the privacy, publicity, or other personal rights of others.”  It also bars the posting of messages that are “intended to annoy or harass others.”  In a similar vein, the Wilson network bars transmission of materials that are “harassing, abusive, libelous or obscene” and “activities or actions intended to withhold or cloak any user’s identity or contact information.”  Content-based prohibitions of this type broadly restrict carriage of constitutionally protected speech and, thus, raise serious First Amendment questions.  Other municipal broadband systems may, of course, elect to adopt similarly questionable censorship-based policies.

In short, the FCC’s broadband preemption decision is likely to harm economic welfare and is highly problematic on legal grounds to boot.  The FCC should rescind that decision.  If it fails to do so, and if the courts do not strike the decision down, Congress should consider legislation to bar the FCC from meddling in state oversight of municipal broadband.

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s FTC, et al. v. St. Luke’s case.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that any theoretically possible alternative to a merger would discount those claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.

Much ink has been spilled (and with good reason) about the excessive and totally unnecessary regulatory burdens associated with the Federal Communications Commission’s (FCC) February 26 “Open Internet Order” (OIO), which imposes public utility regulation on Internet traffic.  For example, as Heritage Foundation Senior Research Fellow James Gattuso recently explained, “[d]evised for the static monopolies, public-utility regulation will be corrosive to today’s dynamic Internet. There’s a reason the phrase ‘innovative public utility’ doesn’t flow easily from the tongue. The hundreds of rules that come with public utility status are geared to keeping monopolies in line, not encouraging new or innovative ways of doing things. . . .  Even worse, by imposing burdens on big and small carriers alike, the new rules may actually stifle chances of increasing competition among broadband providers.”

Apart from its excessive and unjustifiable economic costs, the OIO has another unfortunate feature which has not yet been widely commented upon – it is an invitation to cronyism, which is an affront to the neutral application of the laws.  As Heritage Foundation President Jim DeMint and Heritage Action President Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts and complex regulations. Those special preferences undermine competition on the merits by firms that lack insider status, harming the public.

But what scope is there for cronyism in the FCC’s application of its OIO?  A lot.  As I explain in a March 30 Heritage Foundation Daily Signal blog posting, the FCC will provide OIO guidance through “enforcement advisories” and “advisory opinions,” and the Commission’s Enforcement Bureau can request written opinions from outside organizations.  Translating this bureaucratese into English, the FCC is saying that the inherently open-ended language that determines whether an Internet business practice is given a thumbs up or thumbs down will turn on “opinions” that will require the input of high-priced lawyers and advisers.  Smaller and emerging firms that cannot afford to pay for influence may be out of luck.  Moreover, large established companies that are experts at the “Washington game” and engage in administration-approved activities or expenditures (such as politically correct green projects or the right campaign contributions) may be given special consideration when the FCC’s sages decide whether an Internet business practice is “unreasonable” or not.  This means, for example, that firms that are willing to pay more for better Internet access to challenge such powerful firms as Netflix in video services or Google in search activities or Facebook in social networking may be out of luck, if they are less effective at playing the Washington influence game than at competing on the merits.  Those who downplay this risk should recall that the FCC has a long and sad record of using regulations to advantage powerful incumbents (for decades the FCC shielded AT&T from cellular telephony competition and the over-the-air television broadcasters from cable competition).

In short, the benefits to American consumers and the overall American economy generated by a regulation-free Internet—not to mention the ability of entrepreneurs to thrive, free from cronyism—may soon become a thing of the past, unless action is taken by Congress or the courts.  American citizens deserve better than that from their government.

In its February 25 North Carolina Dental v. Federal Trade Commission decision, the U.S. Supreme Court held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state. Will this decision discourage harmful protectionist regulation, such as the prohibition on tooth whitening by non-dentists at issue in this case? Will it also interfere with the ability of states to shape their regulatory programs as they see fit? U.S. Federal Trade Commissioner Maureen Ohlhausen will address this important set of questions in a March 31 luncheon presentation at the Heritage Foundation, with Clark Neily of the Institute for Justice and Misha Tseytlin of the West Virginia State Attorney General’s Office providing expert commentary. (You may view this event online or register to attend it in person here).

Just in time for this event, the Heritage Foundation has released a legal memorandum on “North Carolina Dental Board and the Reform of State-Sponsored Protectionism.”  The memorandum explains that North Carolina Dental “has far-reaching ramifications for the reform of ill-conceived protectionist state regulations that limit entry into myriad professions and thereby harm consumers. In holding that a state regulatory board controlled by market participants in the industry being regulated cannot cloak its anticompetitive rules in ‘state action’ antitrust immunity unless it is ‘actively supervised’ by the state, the Court struck a significant blow against protectionist rent-seeking legislation and for economic liberty. The states may re-examine their licensing statutes in light of the Court’s decision, but if they decline to revise their regulatory schemes to eliminate their unjustifiable exclusionary effect, there may well be yet another round of challenges to those programs—this time based on the federal Constitution.”

Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.

A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.

According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.

In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.

Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.

FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.

In fact, there is general agreement with Commissioner Ohlhausen on her basic points, even among critics like Newman and Grunes. But, as mentioned above, views diverge over whether there are some privacy harms that should nevertheless factor into competition analysis, and on whether there is in fact a data barrier to entry that makes these harms possible.

As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And, because all of the theories of “privacy as an antitrust harm” are essentially predicated on this, they are meritless.

First, data is useful to all industries — this is not some new phenomenon particular to online companies

It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:

  • Macy’s analyzes tens of millions of terabytes of data every day to gain insights from social media and store transactions. Over the past three years, the use of big data analytics alone has helped Macy’s boost its revenue growth by 4 percent annually.
  • Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
  • Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.

And of course there is a host of other uses for data, as well, including security, fraud prevention, product optimization, risk reduction in insurance, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook and Google as having a monopoly on data is silly.

Second, it’s not the amount of data that leads to success but building a better mousetrap

The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.

Moreover, data itself is not only less important than what can be drawn from it; data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such a short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.

Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little (or no) data, and all have had a great deal of success. Meanwhile, despite its supposed data advantages, Google’s attempt at social networking — Google+ — has never caught up to Facebook in popularity among users (and thus among advertisers, either). And the scrappy social network Ello is building a significant user base without collecting data for advertising at all.

At the same time, it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar, which had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to compete effectively because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.

In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.

Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low

Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.

Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, there exist powerful competitors in the marketplaces they compete in:

  • If consumers want to make a purchase, they are more likely to do their research on Amazon than Google.
  • Google flight search has failed to seriously challenge — let alone displace — its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
  • People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
  • Pinterest, one of the most highly valued startups today, is now a serious challenger to traditional search engines when people want to discover new products.
  • With its recent acquisition of the shopping search engine, TheFind, and test-run of a “buy” button, Facebook is also gearing up to become a major competitor in the realm of e-commerce, challenging Amazon.
  • Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.

Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, as with social networking, history shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines on top of, and instead of, Google. And don’t forget that Google itself was once an upstart new entrant that displaced once-household names like Yahoo and AltaVista.

Fourth, access to data is not exclusive

Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google’s control over data. But the analogy between data and oil is deeply flawed. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way. To use an earlier example, Google knowing my birthday doesn’t limit Facebook’s ability to know my birthday as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.

This is especially important when discussing data online, where multi-homing is ubiquitous, meaning many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I’m following on Twitter who also use the site for blogging. Using this feature allows WordPress to access your contact list on these major online players.

[Screenshot: the WordPress friend-finder feature]

Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.

If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.

Fifth, the data barrier to entry argument does not have workable antitrust remedies

The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?

It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.

For one thing, as we’ve noted, it’s not clear this would actually accomplish much. If competitors can’t actually make good use of data, simply having more of it isn’t going to change things. At the same time, such a remedy would reduce the incentive to build data networks in the first place. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had been required to share their insights? In fact, it may well be that access to these free insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.

Other remedies limiting the collection and use of data are not only outside the normal scope of antitrust remedies; they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.

It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.

If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.

As I explained in a recent Heritage Foundation Legal Memorandum, the Institute of Electrical and Electronics Engineers’ (IEEE) New Patent Policy (NPP) threatens to devalue patents that cover standards; discourage involvement by innovative companies in IEEE standard setting; and undermine support for strong patents, which are critical to economic growth and innovation.  The Legal Memorandum focused on how the NPP undermines patentees’ rights and reduces returns to patents that “read on” standards (“standard essential patents” or “SEPs”).  It did not, however, address the merits of the Justice Department Antitrust Division’s (DOJ) February 2 Business Review Letter (BRL), which found no antitrust problems with the NPP.

Unfortunately, the BRL does little more than opine on patent policy questions, such as the risk of patent “hold-up” that the NPP allegedly is designed to counteract.  The BRL is virtually bereft of antitrust analysis.  It states in conclusory fashion that the NPP is on the whole procompetitive, without coming to grips with the serious risks of monopsony and collusion, and reduced investment in standards-related innovation, inherent in the behavior that it analyzes.  (FTC Commissioner Wright and prominent economic consultant Greg Sidak expressed similar concerns about the BRL in a March 12 program on standard setting and patents hosted by the Heritage Foundation.)

Let’s examine the BRL in a bit more detail, drawing on a recent scholarly commentary by Stuart Chemtob. The BRL eschews analyzing the risk that, by sharply constraining expected returns to SEPs, the NPP’s requirements may disincentivize technology contributions to standards, harming innovation. Instead, the BRL focuses on how the NPP may reduce patentee “hold-up” by effectively banning injunctions and by highlighting three factors that limit royalties: basing royalties on the value of the smallest saleable unit; the value contributed to that unit in light of all the SEPs practiced by that unit; and existing licenses covering the unit that were not obtained under threat of injunction. The BRL essentially ignores, however, the very real problem of licensee “hold-out” by technology implementers, who may gain artificial bargaining leverage over patentees. Thus there is no weighing of the NPP’s anticompetitive risks against its purported procompetitive benefits. This is particularly unfortunate given the absence of hard evidence of hold-up. (Very recently, the Federal Circuit in Ericsson v. D-Link rejected proposed jury instructions on the possibility of hold-up, given D-Link’s failure to provide any evidence of it.) Also, by forbidding injunctive actions prior to first-level appellate review, the NPP effectively precludes SEP holders from seeking exclusion orders under Section 337 of the Tariff Act against imports that infringe their patents. This eliminates a core statutory protection that helps shield American patentees from foreign anticompetitive harm, further debasing SEPs. Furthermore, the BRL fails to assess the possible competitive harm firms may face if they fail to accede to the IEEE’s NPP.

Finally, and most disturbingly, the BRL totally ignores the overall thrust of the NPP – which is to encourage potential licensees to insist on anticompetitive terms that reduce returns to SEP holders below the competitive level.  Such terms, if jointly agreed to by potential licensees, could well be deemed a monopsony buyers’ cartel (with the potential licensees buying license rights), subject to summary antitrust condemnation in line with such precedents as Mandeville Island Farms and Todd v. Exxon.

In sum, the BRL is an embarrassingly one-sided document that would merit a failing grade as an antitrust exam essay.  DOJ would be wise to withdraw the letter or, at the very least, rewrite it from scratch, explaining that the NPP raises serious antitrust questions that merit close examination.  If it fails to do so, one can only conclude that DOJ has decided that it is suitable to use business review letters as vehicles for unsupported statements of patent policy preferences, rather than as serious, meticulously crafted memoranda of guidance on difficult antitrust questions.

The Wall Street Journal reported yesterday that the FTC Bureau of Competition staff report to the commissioners in the Google antitrust investigation recommended that the Commission approve an antitrust suit against the company.

While this is excellent fodder for a few hours of Twitter hysteria, it takes more than 140 characters to delve into the nuances of a 20-month federal investigation. And the bottom line is, frankly, pretty ho-hum.

As I said recently,

One of life’s unfortunate certainties, as predictable as death and taxes, is this: regulators regulate.

The Bureau of Competition staff is made up of professional lawyers — many of them litigators, whose existence is predicated on there being actual, you know, litigation. If you believe in human fallibility at all, you have to expect that, when they err, FTC staff errs on the side of too much, rather than too little, enforcement.

So is it shocking that the FTC staff might recommend that the Commission undertake what would undoubtedly have been one of the agency’s most significant antitrust cases? Hardly.

Nor is it surprising that the commissioners might not always agree with staff. In fact, staff recommendations are ignored all the time, for better or worse. Here are just a few examples: the R.J. Reynolds/Brown & Williamson merger, POM Wonderful, the Home Shopping Network/QVC merger, cigarette advertising. No doubt there are many, many more.

Regardless, it also bears pointing out that the staff did not recommend the FTC bring suit on the central issue of search bias “because of the strong procompetitive justifications Google has set forth”:

Complainants allege that Google’s conduct is anticompetitive because it forecloses alternative search platforms that might operate to constrain Google’s dominance in search and search advertising. Although it is a close call, we do not recommend that the Commission issue a complaint against Google for this conduct.

But this caveat is enormous. To report this as the FTC staff recommending a case is seriously misleading. Here they are forbearing from bringing 99% of the case against Google, and recommending suit on the marginal 1% issues. It would be more accurate to say, “FTC staff recommends no case against Google, except on a couple of minor issues which will be immediately settled.”

And in fact it was on just these minor issues that Google agreed to voluntary commitments to curtail some conduct when the FTC announced it was not bringing suit against the company.

The Wall Street Journal quotes some other language from the staff report bolstering the conclusion that this is a complex market, the conduct at issue was ambiguous (at worst), and supporting the central recommendation not to sue:

We are faced with a set of facts that can most plausibly be accounted for by a narrative of mixed motives: one in which Google’s course of conduct was premised on its desire to innovate and to produce a high quality search product in the face of competition, blended with the desire to direct users to its own vertical offerings (instead of those of rivals) so as to increase its own revenues. Indeed, the evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.

On a global level, the record will permit Google to show substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.

This is exactly when you want antitrust enforcers to forbear. Predicting anticompetitive effects is difficult, and conduct that looks problematic may simultaneously be vigorous competition.

That the staff concluded that some of what Google was doing “harmed competitors” isn’t surprising — there were lots of competitors parading through the FTC on a daily basis claiming Google harmed them. But antitrust is about protecting consumers, not competitors. Far more important is the staff finding of “substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.”

Indeed, the combination of “substantial innovation,” “intense competition from Microsoft and others,” and “Google’s strong procompetitive justifications” suggests a well-functioning market. It similarly suggests an antitrust case that the FTC would likely have lost. The FTC’s litigators should probably be grateful that the commissioners had the good sense to vote to close the investigation.

Meanwhile, the Wall Street Journal also reports that the FTC’s Bureau of Economics simultaneously recommended that the Commission not bring suit at all against Google. It is not uncommon for the lawyers and the economists at the Commission to disagree. And as a general (though not inviolable) rule, we should be happy when the Commissioners side with the economists.

While the press, professional Google critics, and the company’s competitors may want to make this sound like a big deal, the actual facts of the case and a pretty simple error-cost analysis suggest that not bringing a case was the correct course.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.


In a recent post, I explained how the U.S. Supreme Court’s February 25 opinion in North Carolina Dental Board v. FTC (holding that a state regulatory board controlled by market participants must be “actively supervised” by the state to receive antitrust immunity) struck a significant blow against protectionist rent-seeking and for economic liberty.  Maureen Ohlhausen, who has spoken out against special interest government regulation as an FTC Commissioner (and formerly as Director of the FTC’s Office of Policy Planning), will discuss the ramifications of the Court’s North Carolina Dental decision in a March 31 luncheon speech at the Heritage Foundation.  Senior Attorney Clark Neily of the Institute for Justice and Misha Tseytlin, General Counsel in the West Virginia Attorney General’s Office, will provide expert commentary on the Commissioner’s speech.  You can register for this event here.