
Early last month, the Italian competition authority issued a record 1.128 billion euro fine against Amazon for abuse of dominance under Article 102 of the Treaty on the Functioning of the European Union (TFEU). In its order, the Autorità Garante della Concorrenza e del Mercato (AGCM) essentially argues that Amazon has combined its Amazon.it marketplace and Fulfillment by Amazon (FBA) services to exclude logistics rivals such as FedEx, DHL, UPS, and Poste Italiane.

The sanctions came exactly one month after the EU General Court seconded the European Commission’s “discovery,” in the Google Shopping case, of a new antitrust infringement known as “self-preferencing,” likewise grounded in Article 102 TFEU. Perhaps not entirely coincidentally, legislation was introduced in the United States earlier this year to prohibit the practice. Meanwhile, the EU’s legislative bodies have been busy taking steps to approve the Digital Markets Act (DMA), which would regulate so-called digital “gatekeepers.”

Italy thus joins a wave of policymakers that have either imposed heavy-handed decisions to “rein in” online platforms, or are seeking to implement ex ante regulations toward that end. The decision is reminiscent of the self-preferencing prohibition contained in Article 6a of the current draft of the DMA and reflects much of what is wrong with the current approach to regulating tech. It presages the potential problems of punishing efficient behavior for the sake of protecting competitors through “common carrier antitrust.” If this decision is anything to go by, these efforts will end up hurting the very consumers authorities purport to protect, while lending color to more general fears over the DMA.

In this post, we discuss how the AGCM’s reasoning departs from sound legal and economic thinking to reach a conclusion at odds with the traditional goal of competition law—i.e., the protection of consumer welfare. Neo-Brandeisians and other competition scholars who dispute the centrality of the consumer welfare standard and would use antitrust to curb “bigness” may find this result acceptable, in principle. But even they must admit that the AGCM decision ultimately serves to benefit large (if less successful) competitors, and not the “small dealers and worthy men” of progressive lore.

Relevant Market Definition

Market definition constitutes a preliminary step in any finding of abuse under Article 102 TFEU. An excessively narrow market definition can result in false positives by treating neutral or efficient conduct as anticompetitive, while an overly broad market definition might allow anticompetitive conduct to slip through the cracks, leading to false negatives. 

Amazon Italy may be an example of the former. Here, the AGCM identified two relevant markets: the leveraging market, defined as the Italian market for online marketplace intermediation, and the leveraged market, defined as the market for e-commerce logistics. The AGCM charges that Amazon is dominant in the former and that it gained an illegal advantage in the latter. In this vein, it found that online marketplaces constitute a distinct relevant market that is not substitutable with other offline or online sales channels, such as brick-and-mortar shops, price-comparison websites (e.g., Google Shopping), or dedicated sales websites (e.g., Nike.com/it). Similarly, it concluded that e-commerce logistics are sufficiently different from other forms of logistics as to comprise a separate market.

The AGCM’s findings combine qualitative and quantitative evidence, including retailer surveys and “small but significant and non-transitory increase in price” (SSNIP) tests. They also include a large dose of speculative reasoning.

For instance, the AGCM asserts that online marketplaces are fundamentally different from price-comparison sites because, in the latter case, purchase transactions do not take place on the platform. It asserts that e-commerce logistics are different from traditional logistics because the former require a higher degree of automation for transportation and storage. And in what can only be seen as a normative claim, rather than an objective assessment of substitutability, the Italian watchdog found that marketplaces are simply better than dedicated websites because, e.g., they offer greater visibility and allow retailers to save on marketing costs. While it is unclear what weights the AGCM assigned to each of these considerations when defining the relevant markets, it is reasonable to assume they played some part in defining the nature and scope of Amazon’s market presence in Italy.

In all of these instances, however, while the AGCM carefully delineated superficial distinctions between these markets, it did not actually establish that those differences are relevant to competition. Fetishizing granular but ultimately irrelevant differences between products and services—such as between marketplaces and shopping comparison sites—is a sure way to incur false positives, a misstep tantamount to punishing innocuous or efficient business conduct.
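As an aside on methodology: the SSNIP test mentioned above asks whether a hypothetical monopolist over the candidate market could profitably impose a small but significant (typically 5-10%) non-transitory price increase. A minimal sketch of that logic, with all numbers invented purely for illustration (nothing here is drawn from the decision), might look like this:

```python
# Stylized SSNIP ("hypothetical monopolist") test. All figures are invented
# for illustration; real implementations estimate lost sales from survey or
# transaction data.

price = 10.0          # current price in the candidate market
quantity = 1_000      # units sold at the current price
marginal_cost = 6.0   # per-unit cost
ssnip = 0.05          # a 5% "small but significant" price increase
lost_share = 0.20     # share of buyers who would switch to outside channels

profit_before = (price - marginal_cost) * quantity
profit_after = (price * (1 + ssnip) - marginal_cost) * quantity * (1 - lost_share)

# If the increase is unprofitable, enough buyers treat other sales channels
# as substitutes, and the candidate market must be drawn more broadly.
if profit_after < profit_before:
    print("SSNIP unprofitable: the candidate market is too narrow")
else:
    print("SSNIP profitable: the candidate market can stand")
```

The test is only as reliable as the substitution estimate it feeds on: if surveys understate how readily retailers would shift to other channels, the resulting market definition comes out too narrow.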

Dominance

The AGCM found that Amazon was “hyper-dominant” in the online marketplace intermediation market. Dominance was established by looking at revenue from marketplace sales, where Amazon’s share had risen from about 65% in 2016 to 75% in 2019. Taken in isolation, this figure might suggest that Amazon’s competitors cannot thrive in the market. A broader look at the data, however, paints a picture of more generalized growth, with some segments greatly benefiting newcomers and small, innovative marketplaces. 

For instance, virtually all companies active in the online marketplace intermediation market have experienced significant growth in terms of monthly visitors. It is true that Amazon’s visitors grew substantially, up 150%, but established competitors like Aliexpress and eBay also saw growth rates of 90% and 25%, respectively. Meanwhile, Wish grew a massive 10,000% from 2016 to 2019, while ManoMano and Zalando grew 450% and 100%, respectively.

In terms of active users (i.e., visits that result in a purchase), relative numbers seem to have stayed roughly the same, although the AGCM claims that eBay saw a 20-30% drop. The number of third-party products Amazon offered through Marketplace grew from between 100 and 500 million to between 500 million and 1 billion, while other marketplaces appear to have remained fairly constant, with some expanding and others contracting.

In sum, while Amazon has undeniably improved its position on practically all of the parameters considered by the AGCM, indicators show that the market as a whole has experienced, and continues to experience, growth. The improvement in Amazon’s position relative to some competitors—notably eBay, which the AGCM asserts is Amazon’s biggest competitor—should therefore not obscure the fact that there is entry and expansion both at the fringes (ManoMano, Wish) and in the center of the market for online marketplace intermediation (Aliexpress).

Amazon’s Allegedly Abusive Conduct

According to the AGCM, Amazon has taken advantage of vertical integration to engage in self-preferencing. Specifically, the charge is that the company offers exclusive and purportedly crucial advantages on the Amazon.it marketplace to sellers who use Amazon’s own e-commerce logistics service, FBA. The purported advantages of this arrangement include, to name a few, the coveted Prime badge, the elimination of negative user feedback on sale or delivery, preferential algorithmic treatment, and exclusive participation in Amazon’s sales promotions (e.g., Black Friday, Cyber Monday). As a result, according to the AGCM, products sold through FBA enjoy more visibility and a better chance to win the “Buy Box.”

The AGCM claims this puts competing logistics operators like FedEx, Poste Italiane, and DHL at a disadvantage, because non-FBA products have less chance of being sold than FBA products, regardless of any efficiency or quality criteria. In the AGCM’s words, “Amazon has stolen demand from other e-commerce logistics operators.”

Indirectly, Amazon’s “self-preferencing” purportedly also harms competing marketplaces like eBay by creating incentives for sellers to single-home—i.e., to sell only through Amazon Marketplace. The argument here is that retailers will avoid multi-homing so as not to incur duplicative costs associated with FBA, e.g., storing goods in several warehouses.

Although it is not necessary to demonstrate anticompetitive effects under Article 102 TFEU, the AGCM claims that Amazon’s behavior has caused a drastic worsening of other marketplaces’ competitive position by constraining their ability to reach the minimum scale needed to enjoy direct and indirect network effects. The Italian authorities summarily assert that this results in consumer harm, although the gargantuan 250-page decision spends scarcely one paragraph on this point.

Intuitively, however, Amazon’s behavior should, in principle, benefit consumers by offering something that most find tremendously valuable: a guarantee of quick delivery for a wide range of goods. Indeed, this is precisely why it is so misguided to condemn self-preferencing by online platforms.

As some have already argued, we cannot assume that something is bad for competition just because it is bad for certain competitors. For instance, a lot of unambiguously procompetitive behavior, like cutting prices, puts competitors at a disadvantage. The same might be true for a digital platform that preferences its own service because it is generally better than the alternatives provided by third-party sellers. In the case at hand, for example, Amazon’s granting marketplace privileges to FBA products may help users to select the products that Amazon can guarantee will best satisfy their needs. This is perfectly plausible, as customers have repeatedly shown that they often prefer less open, less neutral options.

The key question, therefore, should be whether the behavior in question excludes equally efficient rivals in such a way as to harm consumer welfare. Otherwise, we would essentially be asking companies to refrain from offering services that benefit their users in order to make competing products comparatively more attractive. This is antithetical to the nature of competition, which is based on the principle that what is good for consumers is frequently bad for competitors.

AGCM’s Theory of Harm Rests on Four Weak Pillars

Building on the logic that Amazon enjoys “hyper-dominance” in marketplace intermediation; that most online sales are marketplace sales; and that most marketplace sales are, in turn, Amazon.it sales, the AGCM decision finds that succeeding on Amazon.it is indispensable for any online retailer in Italy. This argument hinges largely on whether online and offline retailers are thought of as distinct relevant markets—i.e., whether, from the perspective of the retailer, online and offline sales channels are substitutable (see also the relevant market definition section above). 

Ultimately, the AGCM finds that they are not, as online sales enjoy such advantages as lower fixed costs, increased sale flexibility, and better geographical reach. To an outsider, the distinction between the two markets may seem artificial—and it largely is—but such theoretical market segmentation is the bread-and-butter of antitrust analysis. Still, even by EU competition law standards, the relevant market definitions on which the AGCM relies to conclude that selling on Amazon is indispensable appear excessively narrow. 

This market distinction also serves to set up the AGCM’s second, more controversial argument: that the benefits extended to products sold through the FBA channel are also indispensable for retailers’ success on the Amazon.it marketplace. Here, the AGCM seeks a middle ground between competitive advantage and indispensability, finally settling on the notion that a sufficiently large competitive advantage itself translates into indispensability.

But how big is too big? The facts that 40-45% of Amazon’s third-party retailers do not use FBA (p. 57 of the decision) and that roughly 40 of the top 100 products sold on Amazon.it are not fulfilled through Amazon’s logistics service (p. 58) would appear to suggest that FBA is more of a convenience than an obligation. At the least, it does not appear that the advantage conferred is so big as to amount to indispensability. This may be because sellers that choose not to use Amazon’s logistics service (including, of course, those that sell offline) can and do cut prices to compete with FBA-sold products. If anything, this should be counted as a good thing from the perspective of consumer welfare.

Instead, and signaling the decision’s overarching preoccupation with protecting some businesses at the expense of others (and, ultimately, at the expense of consumers), the AGCM has expanded the already bloated notion of a self-preferencing offense to conclude that expecting sellers to compete on pricing parameters would unfairly slash profit margins for non-FBA sellers.

The third pillar of the AGCM’s theory of harm is the claim that the benefits conferred on products sold through FBA are not awarded based on any objective quality criteria, but purely on whether the seller has chosen FBA or third-party logistics. Thus, even if a logistics operator were, in principle, capable of offering a service as efficient as FBA’s, it would not qualify for the same benefits. 

But this is a disingenuous line of reasoning. One legitimate reason why Amazon could choose to confer exclusive advantages on products fulfilled by its own logistics operation is that no other service is, in fact, routinely as reliable. This does not necessarily mean that FBA is always superior to the alternatives, but rather that it makes sense for Amazon to adopt this presumption as a general rule based on past experience, without spending the resources to constantly re-evaluate it. In other words, granting exclusive benefits is based on quality criteria; it simply relies on a prior measurement of quality rather than an ongoing assessment. This is presumably what a customer-obsessed business that does not want to take chances with consumer satisfaction would do.

Fourth, the AGCM posits that Prime and FBA constitute two separate products that have been artificially tied by Amazon, thereby unfairly excluding third-party logistics operators. Co-opting Amazon’s own terminology, the AGCM claims that the company has created a flywheel of artificial interdependence, wherein Prime benefits increase the number of Prime users, which drives demand for Prime products, which creates demand for Prime-eligible FBA products, and so on. 

To support its case, the AGCM repeatedly adduces a 2015 letter in which Jeff Bezos told shareholders that Amazon Marketplace and Prime are “happily and deeply intertwined,” and that FBA is the “glue” that links them together. Instead of taking this for what it likely is—i.e., a case of legitimate, efficiency-enhancing vertical integration—the AGCM has preferred to read into it a case of illicit tying, an established offense under Article 102 TFEU whereby a dominant firm makes the purchase of one product conditional on the purchase of a second, distinct one.

The problem with this narrative is that it is perfectly plausible that Prime and FBA are, in fact, meant to be one product that is more than the sum of its parts. For one, the inventory of sellers who use FBA is stowed in fulfillment centers, meaning that Amazon takes care of all logistics, customer service, and product returns. As Bezos put it in the same 2015 letter, this is a huge efficiency gain. It thus makes sense to nudge consumers towards products that use FBA.

In sum, the AGCM’s case rests on a series of questionable assumptions that build on each other: a narrow relevant market definition; a finding of “hyper-dominance” that downplays competitors’ growth and expansion, as well as competition from outside the narrowly defined market; a contrived notion of indispensability at two levels (Marketplace and FBA); and a refusal to contemplate the possibility that Amazon integrates its marketplace and logistics services in order to enhance efficiency, rather than to exclude competitors.

Remedies

The AGCM sees “only one way to restore a level-playing field in e-commerce logistics”: Amazon must redesign its existing Self-Fulfilled Prime (SFP) program in such a way as to grant all logistics operators—FBA or non-FBA—equal treatment on Amazon.it, based on a set of objective, transparent, standard, uniform, and non-discriminatory criteria. Any logistics operator that demonstrates the ability to fulfill such criteria must be awarded SFP status and the accompanying Prime badge, along with all the perks associated with it. Further, SFP- and FBA-sold products must be subject to the same monitoring mechanism with regard to the observance of Prime standards, as well as to the same evaluation standards. 

In sum, Amazon Italy now has a duty to treat Marketplace sales fulfilled by third-party operators the same as those fulfilled by its own logistics service. This is a significant step toward “common carrier antitrust,” in which vertically integrated firms are expected to comply with perfect neutrality obligations with respect to customers, suppliers, and competitors.

Beyond the philosophical question of whether successful private companies should be obliged by law to treat competitors analogously to their own affiliates (they shouldn’t), the pitfalls of this approach are plain to see. First, nearly all consumer-facing services use choice architectures to highlight products that rank favorably in terms of price and quality, and to ensure consumers enjoy a seamless user experience: supermarkets offer house brands that signal a product has certain desirable features; operating-system developers pre-install certain applications to streamline users’ “out-of-the-box” experience; app stores curate the apps that users will view; search engines use specialized boxes that anticipate the motives underlying users’ search queries; and so on. Suppressing these practices through heavy-handed neutrality mandates is liable to harm consumers.

Second, monitoring third-party logistics operators’ compliance with the requisite standards will come at a cost for Amazon (and, presumably, its customers)—a cost likely much higher than that of monitoring its own operations—while awarding the Prime badge liberally may degrade the consumer experience on Amazon Marketplace.

Thus, one way for Amazon to comply with AGCM’s remedies while also minimizing monitoring costs is simply to dilute or even remove the criteria for Prime, thereby allowing sellers using any logistics provider to be eligible for Prime. While this would presumably insulate Amazon from any future claims against exclusionary self-preferencing, it would almost certainly also harm consumer welfare. 

A final point worth noting is that vertical integration may well be subsidizing Amazon’s own first-party products. In other words, even if FBA is not strictly better than other logistics operators, the revenue Amazon derives from FBA enables it to offer low prices, as well as a range of other Prime benefits, such as free video streaming. Take that source of revenue away, and those subsidized prices go up and the benefits disappear. This is another reason why it may be legitimate to consider FBA and Prime a single product.

Of course, this argument is moot if all one cares about is how Amazon’s vertical integration affects competitors, not consumers. But consumers care about the whole package. The rationale at play in the AGCM decision ultimately ends up imposing a narrow, boring business model on all sellers, precluding them from offering interesting consumer benefits to bolster their overall product.

Conclusion

Some have openly applauded the AGCM’s use of EU competition law to protect traditional logistics operators like FedEx, Poste Italiane, DHL, and UPS. Others lament the competition authority’s apparent abandonment of the consumer welfare standard in favor of a renewed interest in punishing efficiency to favor laggard competitors under the guise of safeguarding “competition.” Both sides ultimately agree on one thing, however: Amazon Italy is about favoring Amazon’s competitors. If competition authorities insist on continuing down this populist rabbit hole, the best they can hope for is a series of Pyrrhic victories against the businesses that are most bent on customer satisfaction, i.e., the successful ones.

Some may intuitively think that this is fair; that Amazon is just too big and that it strangles small competitors. But Amazon’s “small” competitors are hardly the “worthy men” of Brandeisian mythology. They are FedEx, DHL, UPS, and the state-backed goliath Poste Italiane; they are undeniably successful companies like eBay and Alibaba, or Walmart in the United States. It is, conversely, the smallest retailers and consumers who benefit the most from Amazon’s integrated logistics and marketplace services, as the company’s meteoric rise in popularity in Italy since 2016 attests. But it seems that, in the brave new world of antitrust, such stakeholders are now too small to matter.

Even as delivery services work to ship all of those last-minute Christmas presents that consumers bought this season from digital platforms and other e-commerce sites, the U.S. House and Senate are contemplating Grinch-like legislation that looks to stop or limit how Big Tech companies can “self-preference” or “discriminate” on their platforms.

A platform “self-preferences” when it blends various services into the delivery of a given product in ways that third parties couldn’t do themselves. For example, Google self-preferences when it puts a Google Shopping box at the top of a Search page for Adidas sneakers. Amazon self-preferences when it offers its own AmazonBasics USB cables alongside those offered by Apple or Anker. Costco’s placement of its own Kirkland brand of paper towels on store shelves can also be a form of self-preferencing.

Such purportedly “discriminatory” behavior constitutes much of what platforms are designed to do. Virtually every platform that offers a suite of products and services will combine them in ways that users find helpful, even if competitors find it infuriating. It surely doesn’t help Yelp if Google Search users can see a Maps results box next to a search for showtimes at a local cinema. It doesn’t help other manufacturers of charging cables if Amazon sells a cheaper version under a brand that consumers trust. But do consumers really care about Yelp or Apple’s revenues, when all they want are relevant search results and less expensive products?

Until now, competition authorities have judged this type of conduct under the consumer welfare standard: does it hurt consumers in the long run, or does it help them? This test does seek to evaluate whether the conduct deprives consumers of choice by foreclosing rivals, which could ultimately allow the platform to exploit its customers. But it doesn’t treat harm to competitors—in the form of reduced traffic and profits for Yelp, for example—as a problem in and of itself.

“Non-discrimination” bills introduced this year in both the House and Senate aim to change that, but they would do so in ways that differ in important respects.

The House bill would impose a blanket ban on virtually all “discrimination” by platforms. This means that even such benign behavior as Facebook linking to Facebook Marketplace on its homepage would become presumptively unlawful. The measure would, as I’ve written before, break a lot of the Internet as we know it, but it has the virtue of being explicit and clear about its effects.

The Senate bill is, in this sense, a lot more circumspect. Instead of a blanket ban, it would prohibit what the bill refers to as “unfair” discrimination that “materially harm[s] competition on the covered platform,” with a carve-out for discrimination that is “necessary” to maintain or enhance the “core functionality” of the platform. In theory, this would avoid a lot of the really crazy effects of the House bill. Apple likely still could, for example, pre-install a Camera app on the iPhone.

But this greater degree of reasonableness comes at the price of ambiguity. The bill does not define “unfair discrimination,” nor what it would mean for something to be “necessary” to improve the core functionality of a platform. Faced with this ambiguity, companies would be wise to err on the side of extreme caution, given the steep penalties they would face for conduct found to be “unfair”: 15% of total U.S. revenues earned during the period when the conduct was ongoing. That’s a lot of money to risk over a single feature!

Also unlike the House legislation, the Senate bill would not create a private right of action, thereby limiting litigation to enforce the bill’s terms to actions brought by the Federal Trade Commission (FTC), U.S. Justice Department (DOJ), or state attorneys general.

Put together, these features create the perfect recipe for extensive discretionary power held by a handful of agencies. With such vague criteria and such massive penalties for lawbreaking, the mere threat of a lawsuit could force a company to change its behavior. The rules are so murky that companies might even be threatened with a lawsuit over conduct in one area in order to make them change their behavior in another.

It’s hardly unprecedented for powers like this to be misused. During the Obama administration, the Internal Revenue Service (IRS) was alleged to have targeted conservative groups for investigation, for which the agency eventually had to apologize (and settle a lawsuit brought by some of the targeted groups). More than a decade ago, the Bank Secrecy Act was used to uncover then-New York Gov. Eliot Spitzer’s involvement in an international prostitution ring. Back in 2008, the British government used anti-terrorism powers to seize the assets of some Icelandic banks that had become insolvent and couldn’t repay their British depositors. To this day, municipal governments in Britain use anti-terrorism powers to investigate things like illegal waste dumping and people who wrongly park in spots reserved for the disabled.

The FTC itself has a history of abusing its authority. As Commissioners Noah Phillips and Christine Wilson remind us, the commission was nearly shut down in the 1970s after trying to use its powers to “protect” children from seeing ads for sugary foods, interpreting its consumer-protection mandate so broadly that it considered tooth decay as falling within its scope.

As I’ve written before, both Chair Lina Khan and Commissioner Rebecca Kelly Slaughter appear to believe that the FTC ought to take a broad vision of its goals. Slaughter has argued that antitrust ought to be “antiracist.” Khan believes that “the dispersion of political and economic control” is the proper goal of antitrust, not consumer welfare or some other economic goal.

Khan in particular does not appear especially bound by the usual norms that might constrain this sort of regulatory overreach. In recent weeks, she has pushed through contentious decisions by relying on more than 20 “zombie votes” cast by former Commissioner Rohit Chopra on the final day before he left the agency. While it has been FTC policy since 1984 to count votes cast by departed commissioners unless they are superseded by their successors, Khan’s FTC has invoked this relatively obscure rule to swing more decisions than all of her predecessors combined.

Thus, while the Senate bill may avoid immediately breaking large portions of the Internet in ways the House bill would, it would instead place massive discretionary powers into the hands of authorities who have expansive views about the goals those powers ought to be used to pursue.

This ought to be concerning to anyone who disapproves of public policy being made by unelected bureaucrats, rather than the people’s chosen representatives. If Republicans find an empowered Khan-led FTC worrying today, surely Democrats ought to feel the same about an FTC run by Trump-style appointees in a few years. Both sides may come to regret creating an agency with so much unchecked power.

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate lawsuit against Apple by the app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations to require virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability—some product working with another made by another company.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable like this is because interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There may thus be particular costs that make interoperability not worth the trade-off. For example:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If the benefits of making your product interoperable with others outweigh the costs of doing so, that should give you an advantage over competitors and allow you to win their customers away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop there: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide that interoperability.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable and/or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nod to the costs of this requirement is a pair of provisions: one requiring platforms to set “reasonably necessary” security standards, and another allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto with the objection that mandatory interoperability might limit differentiation among competitors – like, for example, how imposing the old micro-USB standard on Apple might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t really finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which is also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

The European Commission and its supporters were quick to claim victory following last week’s long-awaited General Court of the European Union ruling in the Google Shopping case. It’s hard to fault them. The judgment is ostensibly an unmitigated win for the Commission, with the court upholding nearly every aspect of its decision. 

However, the broader picture is much less rosy for both the Commission and the plaintiffs. The General Court’s ruling notably provides strong support for maintaining the current remedy package, in which rivals can bid for shopping box placement. This makes the Commission’s earlier rejection of essentially the same remedy in 2014 look increasingly frivolous. It also pours cold water on rivals’ hopes that it might be replaced with something more far-reaching.

More fundamentally, the online world continues to move further from the idealistic conception of an “open internet” that regulators remain determined to foist on consumers. Indeed, users consistently choose convenience over openness, thus rejecting the vision of online markets upon which both the Commission’s decision and the General Court’s ruling are premised. 

The Google Shopping case will ultimately prove to be both a Pyrrhic victory and a monument to the pitfalls of myopic intervention in digital markets.

Google’s big remedy win

The main point of law addressed in the Google Shopping ruling concerns the distinction between self-preferencing and refusals to deal. Contrary to Google’s defense, the court ruled that self-preferencing can constitute a standalone abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU). The Commission was thus free to dispense with the stringent conditions laid out in the 1998 Bronner ruling.

This undoubtedly represents an important victory for the Commission, as it will enable the agency to launch new proceedings against both Google and other online platforms. However, the ruling will also constrain the Commission’s available remedies, and rightly so.

The origins of the Google Shopping decision are enlightening. Several rivals sought improved access to the top of the Google Search page. The Commission was receptive to those calls, but faced important legal constraints. The natural solution would have been to frame its case as a refusal to deal, which would call for a remedy in which a dominant firm grants rivals access to its infrastructure (be it physical or virtual). But going down this path would notably have required the Commission to show that effective access was “indispensable” for rivals to compete (one of the so-called Bronner conditions)—something that was most likely not the case here. 

Sensing these difficulties, the Commission framed its case in terms of self-preferencing, surmising that this would entail a much softer legal test. The General Court’s ruling vindicates this assessment (at least barring a successful appeal by Google):

240    It must therefore be concluded that the Commission was not required to establish that the conditions set out in the judgment of 26 November 1998, Bronner (C‑7/97, EU:C:1998:569), were satisfied […]. [T]he practices at issue are an independent form of leveraging abuse which involve […] ‘active’ behaviour in the form of positive acts of discrimination in the treatment of the results of Google’s comparison shopping service, which are promoted within its general results pages, and the results of competing comparison shopping services, which are prone to being demoted.

This more expedient approach, however, entails significant limits that will undercut both the Commission and rivals’ future attempts to extract more far-reaching remedies from Google.

Because the underlying harm is no longer the denial of access, but rivals being treated less favorably, the available remedies are much narrower. Google must merely ensure that it does not treat itself more favorably than rivals, regardless of whether those rivals ultimately access its infrastructure and manage to compete. The General Court says as much when it explains the theory of harm in the case at hand:

287. Conversely, even if the results from competing comparison shopping services would be particularly relevant for the internet user, they can never receive the same treatment as results from Google’s comparison shopping service, whether in terms of their positioning, since, owing to their inherent characteristics, they are prone to being demoted by the adjustment algorithms and the boxes are reserved for results from Google’s comparison shopping service, or in terms of their display, since rich characters and images are also reserved to Google’s comparison shopping service. […] they can never be shown in as visible and as eye-catching a way as the results displayed in Product Universals.

Regulation 1/2003 (Art. 7.1) ensures the European Commission can only impose remedies that are “proportionate to the infringement committed and necessary to bring the infringement effectively to an end.” This has obvious ramifications for the Google Shopping remedy.

Under the remedy accepted by the Commission, Google agreed to auction off access to the Google Shopping box. Google and rivals would thus compete on equal footing to display comparison shopping results.

[Illustrations taken from Graf & Mostyn, 2020]

Rivals and their consultants decried this outcome, and Margrethe Vestager intimated the Commission might review the remedy package. Both camps essentially argued the remedy did not meaningfully boost traffic to rival comparison shopping services (CSSs), because those services were not winning the best auction slots:

All comparison shopping services other than Google’s are hidden in plain sight, on a tab behind Google’s default comparison shopping page. Traffic cannot get to them, but instead goes to Google and on to merchants. As a result, traffic to comparison shopping services has fallen since the remedy—worsening the original abuse.

Or, as Margrethe Vestager put it:

We may see a show of rivals in the shopping box. We may see a pickup when it comes to clicks for merchants. But we still do not see much traffic for viable competitors when it comes to shopping comparison

But these arguments are entirely beside the point. If the infringement had been framed as a refusal to supply, it might be relevant that rivals cannot access the shopping box at what is, for them, a cost-effective price. Because the infringement was instead framed in terms of self-preferencing, all that matters is whether Google treats itself equally.

I am not aware of a credible claim that this is not the case. At best, critics have suggested the auction mechanism favors Google because it essentially pays itself:

The auction mechanism operated by Google to determine the price paid for PLA clicks also disproportionately benefits Google. CSSs are discriminated against per clickthrough, as they are forced to cede most of their profit margin in order to successfully bid […] Google, contrary to rival CSSs, does not, in reality, have to incur the auction costs and bid away a great part of its profit margins.

But this reasoning completely omits Google’s opportunity costs. Imagine a hypothetical (and oversimplified) setting where retailers are willing to pay Google or rival CSSs 13 euros per click-through. Imagine further that rival CSSs can serve these clicks at a cost of 2 euros, compared to 3 euros for Google (excluding the auction fee). Google is less efficient in this hypothetical. In this setting, rivals should be willing to bid up to 11 euros per click (the difference between what they expect to earn and their serving costs). Critics claim Google will be willing to bid higher because the money it pays itself during the auction is not really a cost (it ultimately flows back into Google’s pockets). That is clearly false.

To understand this, readers need only consider Google’s point of view. On the one hand, it could pay itself 11 euros (plus some tiny increment) to win the auction; its net gain per click-through would then be 10 euros (the 13 euros retailers pay, minus its serving cost of 3 euros, with the auction fee a wash because Google pays it to itself). On the other hand, it could underbid rivals by a tiny increment, ensuring that they win the slot and pay Google roughly 11 euros per click-through in auction fees. When its critics argue that Google has an advantage because it pays itself, they are ultimately claiming that 10 is larger than 11.
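The arithmetic can be made explicit with a short sketch. The snippet below merely restates the hypothetical figures from the preceding paragraphs in code; none of these numbers come from the case itself:

```python
# Sketch of the opportunity-cost argument, using the invented figures above.

value_per_click = 13.0  # euros retailers pay a CSS per click-through
google_cost = 3.0       # Google's cost to serve a click (excl. auction fee)
rival_cost = 2.0        # the more efficient rival's cost to serve a click

# The rival can profitably bid up to its margin over serving costs.
rival_max_bid = value_per_click - rival_cost  # 11 euros

# Option 1: Google outbids the rival. The auction fee it "pays itself" is a
# wash, so its net gain is simply its own margin.
payoff_if_google_wins = value_per_click - google_cost  # 10 euros per click

# Option 2: Google underbids slightly. The rival wins the slot and pays
# Google roughly its maximum bid as an auction fee.
payoff_if_rival_wins = rival_max_bid  # 11 euros per click

# Letting the more efficient rival win pays Google more (11 > 10), so the
# "Google pays itself" objection collapses.
assert payoff_if_rival_wins > payoff_if_google_wins
print(payoff_if_google_wins, payoff_if_rival_wins)
```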

Google’s remedy could hardly be more neutral. If it wins more auction slots than rival CSSs, the appropriate inference should be that it is simply more efficient. Nothing in the Commission’s decision or the General Court’s ruling precludes that outcome. In short, while Google has (for the time being, at least) lost its battle to appeal the Commission’s decision, the remedy package, the same one it put forward back in 2014, has never looked stronger.

Good news for whom?

The above is mostly good news for both Google and consumers, who will be relieved that the General Court’s ruling preserves Google’s ability to show specialized boxes (of which the shopping unit is but one example). But that should not mask the tremendous downsides of both the Commission’s case and the court’s ruling. 

The Commission’s and rivals’ misapprehensions surrounding the Google Shopping remedy, as well as the General Court’s strong stance against self-preferencing, reveal a broader misunderstanding about online markets that also permeates other digital-regulation initiatives, like the Digital Markets Act and the American Innovation and Choice Online Act.

Policymakers wrongly imply that platform neutrality is a good in and of itself. They assume incumbent platforms generally have an incentive to favor their own services, and that preventing them from doing so is beneficial to both rivals and consumers. Yet neither of these statements is correct.

Economic research suggests self-preferencing is only harmful in exceptional circumstances. That is true of the traditional literature on platform threats (here and here), where harm is premised on the notion that rivals will use the downstream market, ultimately, to compete with an upstream incumbent. It is also true of more recent scholarship that compares dual-mode platforms to pure marketplaces and resellers, where harm hinges on a platform being able to immediately imitate rivals’ offerings. Even this ignores the significant efficiencies that might simultaneously arise from self-preferencing and, more broadly, from closed platforms. In short, rules that categorically prohibit self-preferencing by dominant platforms overshoot the mark, and the General Court’s Google Shopping ruling is a troubling development in that regard.

It is also naïve to think that prohibiting self-preferencing will automatically benefit rivals and consumers (as opposed to harming the latter and leaving the former no better off). If self-preferencing is not anticompetitive, then propping up inefficient firms will at best be a futile exercise in preserving failing businesses. At worst, it would impose significant burdens on consumers by destroying valuable synergies between the platform and its own downstream service.

Finally, if the past few years have taught us anything about online markets, it is that consumers place a much heavier premium on frictionless user interfaces than on open platforms. TikTok is arguably a much more “closed” experience than other sources of online entertainment, such as YouTube or Reddit (i.e., users have less direct control over their experience). Yet many observers have pinned its success, among other things, on its highly intuitive and simple interface. The emergence of Vinted, a European pre-owned goods platform, is another example of competition through a frictionless user experience.

There is a significant risk that, by seeking to boost “choice,” intervention by competition regulators against self-preferencing will ultimately remove one of the benefits users value most. By increasing the amount of information users must process, non-discrimination remedies may merely add pain points to the underlying purchasing process. In short, while Google Shopping is nominally a victory for the Commission and rivals, it is also a testament to the futility and harmfulness of myopic competition intervention in digital markets. Consumer preferences cannot be changed by government fiat, nor can the fact that certain firms are more efficient than others (at least, not without creating significant harm in the process). It is time this simple conclusion made its way into European competition thinking.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate-General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by plaintiffs (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands was the EU’s foundational case for excessive pricing, and the European Commission reiterated that these allegedly exploitative abuses were possible when it published its guidance paper on abuse-of-dominance cases in 2009, the commission had for some time shown little interest in bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices below a market’s natural equilibrium dampens their incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the EU courts’ precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.
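To make the contrast concrete, the sketch below encodes the two screens as described above. It is a deliberately simplified illustration in Python—the boolean inputs stand in for what are, in practice, fact-intensive legal inquiries—not a litigation checklist:

    # Illustrative sketch of the two predatory-pricing screens described above.
    # Inputs are simplified stand-ins for fact-intensive inquiries.

    def us_predation(price, incremental_cost, recoupment_likely):
        """Brooke Group: below-cost pricing AND a realistic prospect of recoupment."""
        return price < incremental_cost and recoupment_likely

    def eu_predation(price, avg_variable_cost, avg_total_cost, exclusionary_plan):
        """EU approach: below-AVC prices are presumed predatory; prices between
        AVC and ATC are predatory if part of a plan to eliminate a competitor.
        Recoupment need not be shown (France Telecom, 2009)."""
        if price < avg_variable_cost:
            return True
        return price < avg_total_cost and exclusionary_plan

    # Same facts, different outcomes: below-cost pricing with no realistic
    # prospect of recoupment is not condemned in the U.S., but is presumed
    # abusive in the EU.
    print(us_predation(price=5.0, incremental_cost=6.0, recoupment_likely=False))  # False
    print(eu_predation(price=5.0, avg_variable_cost=6.0, avg_total_cost=8.0,
                       exclusionary_plan=False))                                   # True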

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is likely not the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is entirely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the emergence of a new product.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of coequal—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this efficiency argument because distribution costs are near zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass for its breezy analysis. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division), and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP).

[Kolasky & Giordano: The authors thank Katherine Taylor, an associate at Hughes Hubbard & Reed, for her help in researching this article.]

On January 10, the Department of Justice (DOJ) withdrew the 1984 DOJ Non-Horizontal Merger Guidelines, and, together with the Federal Trade Commission (FTC), released new draft 2020 Vertical Merger Guidelines (“DOJ/FTC draft guidelines”) on which it seeks public comment by February 26.[1] In announcing these new draft guidelines, Makan Delrahim, the Assistant Attorney General for the Antitrust Division, acknowledged that while many vertical mergers are competitively beneficial or neutral, “some vertical transactions can raise serious concern.” He went on to explain that, “The revised draft guidelines are based on new economic understandings and the agencies’ experience over the past several decades and better reflect the agencies’ actual practice in evaluating proposed vertical mergers.” He added that he hoped these new guidelines, once finalized, “will provide more clarity and transparency on how we review vertical transactions.”[2]

While we agree with the DOJ and FTC that the 1984 Non-Horizontal Merger Guidelines are now badly outdated and that a new set of vertical merger guidelines is needed, we question whether the draft guidelines released on January 10 will provide the desired “clarity and transparency.” In our view, the proposed guidelines give insufficient recognition to the wide range of efficiencies that flow from most, if not all, vertical mergers. In addition, the guidelines fail to provide sufficiently clear standards for challenging vertical mergers, thereby leaving too much discretion in the hands of the agencies as to when they will challenge a vertical merger and too much uncertainty for businesses contemplating a vertical merger.

What is most troubling is that this did not need to be so. In 2008, the European Commission, as part of its merger process reform initiative, issued an excellent set of non-horizontal merger guidelines that adopt basically the same analytical framework as the new draft guidelines for evaluating vertical mergers.[3] The EU guidelines, however, lay out in much more detail the factors the Commission will consider and the standards it will apply in evaluating vertical transactions. That being so, it is difficult to understand why the DOJ and FTC did not propose a set of vertical merger guidelines that more closely mirror those of the European Commission, rather than try to reinvent the wheel with a much less complete set of guidelines.

Rather than making the same mistake ourselves, we will try to summarize the EU vertical merger guidelines and to explain why we believe they are markedly better than the draft guidelines the DOJ and FTC have proposed. We would urge the DOJ and FTC to consider revising their draft guidelines to make them more consistent with the EU vertical merger guidelines. Doing so would, among other things, promote greater convergence between the two jurisdictions, which is very much in the interest of both businesses and consumers in an increasingly global economy.

The principal differences between the draft joint guidelines and the EU vertical merger guidelines

1. Acknowledgement of the key differences between horizontal and vertical mergers

The EU guidelines begin with an acknowledgement that, “Non-horizontal mergers are generally less likely to significantly impede effective competition than horizontal mergers.” As they explain, this is because of two key differences between vertical and horizontal mergers.

  • First, unlike horizontal mergers, vertical mergers “do not entail the loss of direct competition between the merging firms in the same relevant market.”[4] As a result, “the main source of anti-competitive effect in horizontal mergers is absent from vertical and conglomerate mergers.”[5]
  • Second, vertical mergers are more likely than horizontal mergers to provide substantial, merger-specific efficiencies, without any direct reduction in competition. The EU guidelines explain that these efficiencies stem from two main sources, both of which are intrinsic to vertical mergers. The first is that, “Vertical integration may thus provide an increased incentive to seek to decrease prices and increase output because the integrated firm can capture a larger fraction of the benefits.”[6] The second is that, “Integration may also decrease transaction costs and allow for a better co-ordination in terms of product design, the organization of the production process, and the way in which the products are sold.”[7]

The DOJ/FTC draft guidelines do not acknowledge these fundamental differences between horizontal and vertical mergers. The 1984 DOJ non-horizontal guidelines, by contrast, contained an acknowledgement of these differences very similar to that found in the EU guidelines. First, the 1984 guidelines acknowledge that, “By definition, non-horizontal mergers involve firms that do not operate in the same market. It necessarily follows that such mergers produce no immediate change in the level of concentration in any relevant market as defined in Section 2 of these Guidelines.”[8] Second, the 1984 guidelines acknowledge that, “An extensive pattern of vertical integration may constitute evidence that substantial economies are afforded by vertical integration. Therefore, the Department will give relatively more weight to expected efficiencies in determining whether to challenge a vertical merger than in determining whether to challenge a horizontal merger.”[9] Neither of these acknowledgements can be found in the new draft guidelines.

These key differences have also been acknowledged by the courts of appeals for both the Second and D.C. circuits in the agencies’ two most recent litigated vertical mergers challenges: Fruehauf Corp. v. FTC in 1979[10] and United States v. AT&T in 2019.[11] In both cases, the courts held, as the D.C. Circuit explained in AT&T, that because of these differences, the government “cannot use a short cut to establish a presumption of anticompetitive effect through statistics about the change in market concentration” – as it can in a horizontal merger case – “because vertical mergers produce no immediate change in the relevant market share.”[12] Instead, in challenging a vertical merger, “the government must make a ‘fact-specific’ showing that the proposed merger is ‘likely to be anticompetitive’” before the burden shifts to the defendants “to present evidence that the prima facie case ‘inaccurately predicts the relevant transaction’s probable effect on future competition,’ or to ‘sufficiently discredit’ the evidence underlying the prima facie case.”[13]

While the DOJ/FTC draft guidelines acknowledge that a vertical merger may generate efficiencies, they propose that the parties to the merger bear the burden of identifying and substantiating those efficiencies under the same standards applied by the 2010 Horizontal Merger Guidelines. Meeting those standards in the case of a horizontal merger can be very difficult. For that reason, it is important that the DOJ/FTC draft guidelines be revised to make it clear that before the parties to a vertical merger are required to establish efficiencies meeting the horizontal merger guidelines’ evidentiary standard, the agencies must first show that the merger is likely to substantially lessen competition, based on the type of fact-specific evidence the courts required in both Fruehauf and AT&T.

2. Safe harbors

Although they do not refer to it as a “safe harbor,” the DOJ/FTC draft guidelines state that, 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.[14] 

If we understand this statement correctly, it means that the agencies may challenge a vertical merger in any case where one party has a 20% share in a relevant market and the other party has a 20% or higher share of any “related product,” i.e., any “product or service” that is supplied by the other party to firms in that relevant market. 

By contrast, the EU guidelines state that,

The Commission is unlikely to find concern in non-horizontal mergers . . . where the market share post-merger of the new entity in each of the markets concerned is below 30% . . . and the post-merger HHI is below 2,000.[15] 

Both the EU guidelines and the DOJ/FTC draft guidelines are careful to explain that these statements do not create any “legal presumption” that vertical mergers below these thresholds will not be challenged or that vertical mergers above those thresholds are likely to be challenged.
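To make the arithmetic of these thresholds concrete, the sketch below encodes the two screens in Python. The HHI is the standard sum of squared percentage market shares; the threshold logic follows the passages quoted above, though neither agency applies these screens mechanically, and the function names and figures are ours:

    # Hedged sketch of the two safe-harbor screens quoted above.

    def hhi(shares_pct):
        """Herfindahl-Hirschman Index: sum of squared percentage market shares."""
        return sum(s ** 2 for s in shares_pct)

    def outside_us_draft_screen(party_share_pct, related_product_share_pct):
        """DOJ/FTC draft: unlikely to challenge only if both figures are below 20%."""
        return party_share_pct < 20 and related_product_share_pct < 20

    def outside_eu_screen(merged_share_pct, post_merger_shares_pct):
        """EU guidelines: concern unlikely if the merged entity's post-merger
        share is below 30% in each market concerned and post-merger HHI is
        below 2,000."""
        return merged_share_pct < 30 and hhi(post_merger_shares_pct) < 2000

    # A market with shares of 25/20/20/15/10/10 percent has an HHI of 1,850.
    print(outside_us_draft_screen(25, 15))                   # False: one share is 20% or more
    print(outside_eu_screen(25, [25, 20, 20, 15, 10, 10]))   # True: 25% < 30% and 1,850 < 2,000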

The EU guidelines are more consistent than the DOJ/FTC draft guidelines both with U.S. case law and with the actual practice of both the DOJ and FTC. It is important to remember that the raising rivals’ costs theory of vertical foreclosure was first developed nearly four decades ago by two young economists, David Scheffman and Steve Salop, as a theory of exclusionary conduct that could be used against dominant firms in place of the more simplistic theories of vertical foreclosure that the courts had previously relied on and which by 1979 had been totally discredited by the Chicago School for the reasons stated by the Second Circuit in Fruehauf.[16] 

As the Second Circuit explained in Fruehauf, it was “unwilling to assume that any vertical foreclosure lessens competition” because 

[a]bsent very high market concentration or some other factor threatening a tangible anticompetitive effect, a vertical merger may simply realign sales patterns, for insofar as the merger forecloses some of the market from the merging firms’ competitors, it may simply free up that much of the market, in which the merging firm’s competitors and the merged firm formerly transacted, for new transactions between the merged firm’s competitors and the merging firm’s competitors.[17] 

Or, as Robert Bork put it more colorfully in The Antitrust Paradox, in criticizing the FTC’s decision in A.G. Spalding & Bros., Inc.[18]:

We are left to imagine eager suppliers and hungry customers, unable to find each other, forever foreclosed and left languishing. It would appear the commission could have cured this aspect of the situation by throwing an industry social mixer.[19]

Since David Scheffman and Steve Salop first began developing their raising rivals’ cost theory of exclusionary conduct in the early 1980s, gallons of ink have been spilled in legal and economic journals discussing and evaluating that theory.[20] The general consensus of those articles is that while raising rivals’ cost is a plausible theory of exclusionary conduct, proving that a defendant has engaged in such conduct is very difficult in practice. It is even more difficult to predict whether, in evaluating a proposed merger, the merged firm is likely to engage in such conduct at some time in the future. 

Consistent with the Second Circuit’s decision in Fruehauf and with this academic literature, the courts, in deciding cases challenging exclusive dealing arrangements under either a vertical foreclosure theory or a raising rivals’ cost theory, have generally been willing to entertain a claim that the alleged exclusive dealing arrangements violated section 1 of the Sherman Act only in cases where the defendant had a dominant or near-dominant share of a highly concentrated market — usually meaning a share of 40 percent or more.[21] Likewise, all but one of the vertical mergers challenged by either the FTC or DOJ since 1996 have involved parties that had dominant or near-dominant shares of a highly concentrated market.[22] A majority of these involved mergers that were not purely vertical, but in which there was also a direct horizontal overlap between the two parties.

One of the few exceptions is AT&T/Time Warner, a challenge the DOJ lost in both the district court and the D.C. Circuit.[23] The outcome of that case illustrates the difficulty the agencies face in trying to prove a raising rivals’ cost theory of vertical foreclosure where the merging firms do not have a dominant or near-dominant share in either of the affected markets.

Given these court decisions and the agencies’ historical practice of challenging vertical mergers only between companies with dominant or near-dominant shares in highly concentrated markets, we would urge the DOJ and FTC to consider raising the market share threshold below which it is unlikely to challenge a vertical merger to at least 30 percent, in keeping with the EU guidelines, or to 40 percent in order to make the vertical merger guidelines more consistent with the U.S. case law on exclusive dealing.[24] We would also urge the agencies to consider adding a market concentration HHI threshold of 2,000 or higher, again in keeping with the EU guidelines.

3. Standards for applying a raising rivals’ cost theory of vertical foreclosure

Another way in which the EU guidelines are markedly better than the DOJ/FTC draft guidelines is in explaining the factors taken into consideration in evaluating whether a vertical merger will give the parties both the ability and incentive to raise their rivals’ costs in a way that will enable the merged entity to increase prices to consumers. Most importantly, the EU guidelines distinguish clearly between input foreclosure and customer foreclosure, and devote an entire section to each. For brevity, we will focus only on input foreclosure to show why we believe the more detailed approach the EU guidelines take is preferable to the more cursory discussion in the DOJ/FTC draft guidelines.

In discussing input foreclosure, the EU guidelines correctly distinguish between whether a vertical merger will give the merged firm the ability to raise rivals’ costs in a way that may substantially lessen competition and, if so, whether it will give the merged firm an incentive to do so. These are two quite distinct questions, which the DOJ/FTC draft guidelines unfortunately seem to lump together.

The ability to raise rivals’ costs

The EU guidelines identify four important conditions that must exist for a vertical merger to give the merged firm the ability to raise its rivals’ costs. First, the alleged foreclosure must concern an important input for the downstream product, such as one that represents a significant cost factor relative to the price of the downstream product. Second, the merged entity must have a significant degree of market power in the upstream market. Third, the merged entity must be able, by reducing access to its own upstream products or services, to affect negatively the overall availability of inputs for rivals in the downstream market in terms of price or quality. Fourth, the agency must examine the degree to which the merger may free up capacity of other potential input suppliers. If that capacity becomes available to downstream competitors, the merger may simply realign purchase patterns among competing firms, as the Second Circuit recognized in Fruehauf.

The incentive to foreclose access to inputs

The EU guidelines recognize that the incentive to foreclose depends on the degree to which foreclosure would be profitable. In making this determination, the vertically integrated firm will take into account how its supplies of inputs to competitors downstream will affect not only the profits of its upstream division, but also those of its downstream division. Essentially, the merged entity faces a trade-off between the profit lost in the upstream market due to a reduction of input sales to (actual or potential) rivals and the profit gained from expanding sales downstream or, as the case may be, raising prices to consumers. This trade-off is likely to depend on the margins the merged entity obtains on upstream and downstream sales. Other things constant, the lower the margins upstream, the lower the loss from restricting input sales. Similarly, the higher the downstream margins, the higher the profit gain from increasing market share downstream at the expense of foreclosed rivals.

The EU guidelines recognize that the incentive for the integrated firm to raise rivals’ costs further depends on the extent to which downstream demand is likely to be diverted away from foreclosed rivals and the share of that diverted demand the downstream division of the integrated firm can capture. This share will normally be higher the less capacity constrained the merged entity will be relative to non-foreclosed downstream rivals and the more the products of the merged entity and foreclosed competitors are close substitutes. The effect on downstream demand will also be higher if the affected input represents a significant proportion of downstream rivals’ costs or if it otherwise represents a critical component of the downstream product.

The EU guidelines recognize that the incentive to foreclose actual or potential rivals may also depend on the extent to which the downstream division of the integrated firm can be expected to benefit from higher price levels downstream as a result of a strategy to raise rivals’ costs. The greater the market shares of the merged entity downstream, the greater the base of sales on which to enjoy increased margins. However, an upstream monopolist that is already able to fully extract all available profits in vertically related markets may not have any incentive to foreclose rivals following a vertical merger. Therefore, the ability to extract available profits from consumers does not follow immediately from a very high market share; to come to that conclusion requires a more thorough analysis of the actual and future constraints under which the monopolist operates.

Finally, the EU guidelines require the Commission to examine not only the incentives to adopt such conduct, but also the factors liable to reduce, or even eliminate, those incentives, including the possibility that the conduct is unlawful. In this regard, the Commission will consider, on the basis of a summary analysis: (i) the likelihood that this conduct would clearly be unlawful under Community law; (ii) the likelihood that this illegal conduct could be detected; and (iii) the penalties that could be imposed.
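The margin trade-off described above lends itself to a simple back-of-the-envelope calculation. The sketch below is purely illustrative—the margins, volumes, and capture rate are hypothetical inputs of our own choosing, and the EU guidelines contemplate a far richer analysis:

    # Stylized version of the foreclosure trade-off described above: profit
    # lost upstream from cutting off rivals vs. profit gained downstream
    # from diverted demand. All inputs are hypothetical.

    def foreclosure_is_profitable(upstream_margin, rival_input_units,
                                  downstream_margin, diverted_units, capture_rate):
        upstream_loss = upstream_margin * rival_input_units
        downstream_gain = downstream_margin * diverted_units * capture_rate
        return downstream_gain > upstream_loss

    # Low upstream margins and high downstream margins favor foreclosure...
    print(foreclosure_is_profitable(1.0, 100, 8.0, 40, 0.5))   # True: 160 > 100
    # ...while the reverse cuts against it.
    print(foreclosure_is_profitable(5.0, 100, 3.0, 40, 0.5))   # False: 60 < 500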

Overall likely impact on effective competition

Finally, the EU guidelines recognize that a vertical merger will raise foreclosure concerns only when it would lead to increased prices in the downstream market. This normally requires that the foreclosed suppliers play a sufficiently important role in the competitive process in the downstream market. In general, the higher the proportion of rivals that would be foreclosed in the downstream market, the more likely the merger can be expected to result in a significant price increase in the downstream market and, therefore, to significantly impede effective competition. 

In making these determinations, the Commission must under the EU guidelines also assess the extent to which a vertical merger may raise barriers to entry, a criterion that is also found in the 1984 DOJ non-horizontal merger guidelines but is strangely missing from the DOJ/FTC draft guidelines. As the 1984 guidelines recognize, a vertical merger can raise entry barriers if the anticipated input foreclosure would create a need to enter at both the downstream and the upstream level in order to compete effectively in either market.

* * * * *

Rather than issue a set of incomplete vertical merger guidelines, we would urge the DOJ and FTC to follow the lead of the European Commission and develop a set of guidelines setting out in more detail the factors the agencies will consider and the standards they will use in evaluating vertical mergers. The EU non-horizontal merger guidelines provide an excellent model for doing so.


[1] U.S. Department of Justice & Federal Trade Commission, Draft Vertical Merger Guidelines, available at https://www.justice.gov/opa/press-release/file/1233741/download (hereinafter cited as “DOJ/FTC draft guidelines”).

[2] U.S. Department of Justice, Office of Public Affairs, “DOJ and FTC Announce Draft Vertical Merger Guidelines for Public Comment,” Jan. 10, 2020, available at https://www.justice.gov/opa/pr/doj-and-ftc-announce-draft-vertical-merger-guidelines-public-comment.

[3] See European Commission, Guidelines on the assessment of non-horizontal mergers under the Council Regulation on the control of concentrations between undertakings (2008) (hereinafter cited as “EU guidelines”), available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52008XC1018(03)&from=EN.

[4] Id. at § 12.

[5] Id.

[6] Id. at § 13.

[7] Id. at § 14. The insight that transactions costs are an explanation for both horizontal and vertical integration in firms first occurred to Ronald Coase in 1932, while he was a student at the London School of Economics. See Ronald H. Coase, Essays on Economics and Economists 7 (1994). Coase took five years to flesh out his initial insight, which he then published in 1937 in a now-famous article, The Nature of the Firm. See Ronald H. Coase, The Nature of the Firm, Economica 4 (1937). The implications of transactions costs for antitrust analysis were explained in more detail four decades later by Oliver Williamson in a book he published in 1975. See Oliver E. Williamson, Markets and Hierarchies: Analysis and Antitrust Implications (1975) (explaining how vertical integration, either by ownership or contract, can, for example, protect a firm from free riding and other opportunistic behavior by its suppliers and customers). Both Coase and Williamson later received Nobel Prizes in Economics for their work recognizing the importance of transactions costs, not only in explaining the structure of firms, but in other areas of the economy as well. See, e.g., Ronald H. Coase, The Problem of Social Cost, J. Law & Econ. 3 (1960) (using transactions costs to explain the need for governmental action to force entities to internalize the costs their conduct imposes on others).

[8] U.S. Department of Justice, Antitrust Division, 1984 Merger Guidelines, § 4, available at https://www.justice.gov/archives/atr/1984-merger-guidelines.

[9] 1984 Merger Guidelines, supra note 8, at § 4.24.

[10] Fruehauf Corp. v. FTC, 603 F.2d 345 (2d Cir. 1979).

[11] United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[12] Id. at 1032; accord, Fruehauf, 603 F.2d, at 351 (“A vertical merger, unlike a horizontal one, does not eliminate a competing buyer or seller from the market . . . . It does not, therefore, automatically have an anticompetitive effect.”) (emphasis in original) (internal citations omitted).

[13] AT&T, 916 F.3d, at 1032 (internal citations omitted).

[14] DOJ/FTC draft guidelines, at 3.

[15] EU guidelines, at § 25.

[16] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73 AM. ECON. REV. 267 (1983).

[17] Fruehauf, supra note 10, 603 F.2d at 353 n.9 (emphasis added).

[18] 56 F.T.C. 1125 (1960).

[19] Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself 232 (1978).

[20] See, e.g., Alan J. Meese, Exclusive Dealing, the Theory of the Firm, and Raising Rivals’ Costs: Toward a New Synthesis, 50 Antitrust Bull. 371 (2005); David T. Scheffman & Richard S. Higgins, Twenty Years of Raising Rivals’ Costs: History, Assessment, and Future, 12 Geo. Mason L. Rev. 371 (2003); David Reiffen & Michael Vita, Comment: Is There New Thinking on Vertical Mergers?, 63 Antitrust L.J. 917 (1995); Thomas G. Krattenmaker & Steven Salop, Anticompetitive Exclusion: Raising Rivals’ Costs to Achieve Power Over Price, 96 Yale L.J. 209, 219-25 (1986).

[21] See, e.g., United States v. Microsoft, 87 F. Supp. 2d 30, 50-53 (D.D.C. 1999) (summarizing law on exclusive dealing under section 1 of the Sherman Act); id. at 52 (concluding that modern case law requires finding that exclusive dealing contracts foreclose rivals from 40% of the marketplace); Omega Envtl, Inc. v. Gilbarco, Inc., 127 F.3d 1157, 1162-63 (9th Cir. 1997) (finding 38% foreclosure insufficient to make out prima facie case that exclusive dealing agreement violated the Sherman and Clayton Acts, at least where there appeared to be alternate channels of distribution).

[22] See, e.g., United States, et al. v. Comcast, 1:11-cv-00106 (D.D.C. Jan. 18, 2011) (Comcast had over 50% of MVPD market), available at https://www.justice.gov/atr/case-document/competitive-impact-statement-72; United States v. Premdor, Civil No.: 1-01696 (GK) (D.D.C. Aug. 3, 2002) (Masonite manufactured more than 50% of all doorskins sold in the U.S.; Premdor sold 40% of all molded doors made in the U.S.), available at https://www.justice.gov/atr/case-document/final-judgment-151.

[23] See United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[24] See Brown Shoe Co. v. United States, 370 U.S. 294 (1962) (relying on earlier Supreme Court decisions involving exclusive dealing and tying claims under section 3 of the Clayton Act for guidance as to what share of a market must be foreclosed before a vertical merger can be found unlawful under section 7).

Big Tech and Antitrust

John Lopatka —  19 July 2019

[This post is the third in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]

[This post is authored by John E. Lopatka, Robert Noll Distinguished Professor of Law, School of Law, The Pennsylvania State University]

Big Tech firms stand accused of many evils, and the clamor to break them up is loud.  Should we fetch our pitchforks? The antitrust laws are designed to address a range of wrongs and authorize a set of remedies, which include but do not emphasize divestiture.  When the harm caused by a Big Tech company is of a kind the antitrust laws are intended to prevent, an appropriate antitrust remedy can be devised. In such a case, it makes sense to use antitrust: If antitrust and its remedies are adequate to do the job fully, no legislative changes are required.  When the harm falls outside the ambit of antitrust and any other pertinent statute, a choice must be made. Antitrust can be expanded; other statutes can be amended or enacted; or any harms that are not perfectly addressed by existing statutory and common law can be left alone, for legal institutions are never perfect, and a disease can be less harmful than a cure.

A comprehensive list of the myriad and changing attacks on Big Tech firms would be difficult to compile.  Indeed, the identity of the offenders is not self-evident, though Google (Alphabet), Facebook, Amazon, and Apple have lately attracted the most attention.  The principal charges against Big Tech firms seem to be these: 1) compromising consumer privacy; 2) manipulating the news; 3) accumulating undue social and political influence; 4) stifling innovation by acquiring creative upstarts; 5) using market power in one market to injure competitors in adjacent markets; 6) exploiting input suppliers; 7) exploiting their own employees; and 8) damaging communities by location choices.

These charges are not uniform across the Big Tech targets.  Some charges have been directed more forcefully against some firms than others.  For instance, infringement of consumer privacy has been a focus of attacks on Facebook.  Both Facebook and Google have been accused of manipulating the news. And claims about the exploitation of input suppliers and employees and the destruction of communities have largely been directed at Amazon.

What is “Big Tech”?

Despite the variance among firms, the attacks against all of them proceed from the same syllogism: Some tech firms are big; big tech firms do social harm; therefore, big tech firms should be broken up.   From an antitrust perspective, something is missing. Start with the definition of a “tech” firm. In the modern economy, every firm relies on sophisticated technology – from an auto repair shop to an airplane manufacturer to a social media website operator.  Every firm is a tech firm. But critics have a more limited concept in mind. They are concerned about platforms, or intermediaries, in multi-sided markets. These markets exhibit indirect network effects. In a two-sided market, for instance, each side of the market benefits as the size of the other side grows.  Platforms provide value by coordinating the demand and supply of different groups of economic actors where the actors could not efficiently interact by themselves. In short, platforms reduce transaction costs. They have been around for centuries, but their importance has been magnified in recent years by rapid advances in technology.  Rational observers can sensibly ask whether platforms are peculiarly capable of causing harm. But critics tend to ignore or at least to discount the value that platforms provide, and doing so presents a distorted image that breeds bad policy.

Assuming we know what a tech firm is, what is “big”?  One could measure size by many standards. Most critics do not bother to define “big,” though at least Senator Elizabeth Warren has proposed defining one category of bigness as firms with annual global revenue of $25 billion or more and a second category as those with annual global revenue of between $90 million and $25 billion.  The proper standard for determining whether tech firms are objectionably large is not self-evident. Indeed, a size threshold embodied in any legal policy will almost always be somewhat arbitrary. That by itself is not a failing of a policy prescription. But why use a size screen at all? A few answers are possible. Large firms may do more harm than small firms when harm is proportionate to size.  Size may matter because government intervention is costly and less sensitive to firm size than is harm, implying that only harm caused by large firms is large enough to outweigh the costs of enforcement. And most important, the size of a firm may be related to the kind of harm the firm is accused of doing. Perhaps only a firm of a certain size can inflict a particular kind of injury. A clear standard of size and its justification ought to precede any policy prescription.

What’s the (antitrust) beef?

The social harms that Big Tech firms are accused of doing are a hodgepodge.  Some are familiar to antitrust scholars as either current or past objects of antitrust concern; others are not.  Antitrust protects against a certain kind of economic harm: The loss of economic welfare caused by a restriction on competition.  Though the terms are sometimes used in different ways, the core concept is reasonably clear and well accepted. In most cases, economic welfare is synonymous with consumer welfare.  Economic welfare, though, is a broader concept. For example, economic welfare is reduced when buyers exercise market power to the detriment of sellers and by productive inefficiencies.  But despite the claim of some Big Tech critics, when consumer welfare is at stake, it is not measured exclusively by the price consumers pay. Economists often explicitly refer to quality-adjusted prices and implicitly have the qualification in mind in any analysis of price.  Holding quality constant makes quantitative models easier to construct, but a loss of quality is a matter of conventional antitrust concern. The federal antitrust agencies’ horizontal merger guidelines recognize that “reduced product quality, reduced product variety, reduced service, [and] diminished innovation” are all cognizable adverse effects.  The scope of antitrust is not as constricted as some critics assert. Still, it has limits.

Leveraging market power is standard antitrust fare, though it is not nearly as prevalent as once thought.  Horizontal mergers that reduce economic welfare are an antitrust staple. The acquisition and use of monopsony power to the detriment of input suppliers is familiar antitrust ground.  If Big Tech firms have committed antitrust violations of this ilk, the offenses can be remedied under the antitrust laws.

Other complaints against the Big Tech firms do not fit comfortably, or at all, within the ambit of antitrust. Antitrust does not concern itself with political or social influence. Influence is a function of absolute size, not of size relative to any antitrust market. Firms with more resources than other firms may have more influence, but how those resources are deployed across the economy is irrelevant to antitrust; the use of antitrust to attack conglomerate mergers was an inglorious period in antitrust history. Injuries to communities or to employees are not a proper antitrust concern when they result from increased efficiency. Acquisitions might stifle innovation, which is a proper antitrust concern, but they might instead spur innovation, by inducing firms to create value and thereby become attractive acquisition targets or by facilitating integration.

Whether the consumer interest in informational privacy has much to do with competition is difficult to say. Privacy in this context means the collection and use of data. In a multi-sided market, one group of participants may value not only the size but also the composition of, and information about, another group. Competition among platforms might or might not occur on the dimension of privacy. For any platform, however, a reduction in the amount of valuable data it can collect from one side and provide to another side will reduce the price it can charge the second side, which can flow back and injure the first side. In all, antitrust falters when it is asked to do what it cannot do well, and whether other laws should be brought to bear depends on a cost/benefit calculus.

Does Big Tech’s conduct merit antitrust action?

When antitrust is used, it unquestionably requires a causal connection between conduct and harm. Conduct must restrain competition, and the restraint must cause cognizable harm. Most of the attacks on Big Tech firms, if pursued under the antitrust laws, would proceed as monopolization claims: a firm must have monopoly power in a relevant market; the firm must engage in anticompetitive conduct, typically conduct that excludes rivals without increasing efficiency; and the firm must have gained or retained its monopoly power because of the anticompetitive conduct.

Put aside the flaccid assumption that all the targeted Big Tech platforms have monopoly power in relevant markets.  Maybe they do, maybe they don’t, but an assumption is unwarranted. Focus instead on the conduct element of monopolization.  Most of the complaints about Big Tech firms concern their use of whatever power they have. Use isn’t enough. Each of the firms named above has achieved its prominence by extraordinary innovation, shrewd planning, and effective execution in an unforgiving business climate, one in which more platforms have failed than have succeeded.  This does not look like promising ground for antitrust.

Of course, even firms that generally compete lawfully can stray.  But to repeat, monopolists do not monopolize unless their unlawful conduct is causally connected to their market power.  The complaints against the Big Tech firms are notably weak on allegations of anticompetitive conduct that resulted in the acquisition or maintenance of their market positions.  Some critics have assailed Facebook’s acquisitions of WhatsApp and Instagram. Even assuming these firms competed with Facebook in well-defined antitrust markets, the claim that Facebook’s dominance in its core business was created or maintained by these acquisitions is a stretch.  

The difficulty fashioning remedies

The causal connection between conduct and monopoly power becomes particularly important when remedies are fashioned for monopolization.  Microsoft, the first major monopolization case against a high tech platform, is instructive.  DOJ in its complaint sought only conduct remedies for Microsoft’s alleged unlawful maintenance of a monopoly in personal computer operating systems.  The trial court found that Microsoft had illegally maintained its monopoly by squelching Netscape’s Navigator and Sun’s Java technologies, and by the end of trial DOJ sought and the court ordered structural relief in the form of “vertical” divestiture, separating Microsoft’s operating system business from its applications business.  Some commentators at the time argued for various kinds of “horizontal” divestiture, which would have created competing operating system platforms. The appellate court set aside the order, emphasizing that an antitrust remedy must bear a close causal connection to proven anticompetitive conduct. Structural remedies are drastic, and a plaintiff must meet a heightened standard of proof of causation to justify any kind of divestiture in a monopolization case.  On remand, DOJ abandoned its request for divestiture. The evidence that Microsoft maintained its market position by inhibiting the growth of middleware was sufficient to support liability, but not structural relief.

The court’s trepidation was well-founded. Divestiture makes sense when monopoly power results from acquisitions, because the mergers expose joints at which the firm might be separated without rending fully integrated operations. But imposing divestiture on a monopolist for engaging in single-firm exclusionary conduct threatens to destroy the integration that is the essence of any firm, and it is almost always disproportionate to the offense. Even though conduct remedies can be more costly to enforce than structural relief, that additional enforcement cost is usually less than the cost to the economy of the efficiencies forgone under divestiture.

The proposals to break up the Big Tech firms are ill-defined.  Based on what has been reported, no structural relief could be justified as antitrust relief.  Whatever conduct might have been unlawful was overwhelmingly unilateral. The few acquisitions that have occurred didn’t appreciably create or preserve monopoly power, and divestiture wouldn’t do much to correct the misbehavior critics see anyway.  Big Tech firms could be restructured through new legislation, but that would be a mistake. High tech platform markets typically yield dominant firms, though heterogeneous demand often creates space for competitors. Markets are better at achieving efficient structures than are government planners.  Legislative efforts at restructuring are likely to invite circumvention or lock in inefficiency.

Regulate “Big Tech” instead?

In truth, many critics are willing to put up with dominant tech platforms but want them regulated. If we learned any lesson from the era of pervasive economic regulation of public utilities, it is that regulation is costly and often yields minimal benefits. George Stigler and Claire Friedland demonstrated 57 years ago that electric utility regulation had little impact. The era of regulation was followed by an era of deregulation. Yet the desire to regulate remains strong, and as Stigler and Friedland observed, “if wishes were horses, one would buy stock in a harness factory.”

And just how would Big Tech platform regulators regulate? Senator Warren offers a glimpse of the kind of regulation that critics might impose: “Platform utilities would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users.” This kind of standard has some meaning in the context of a standard-setting organization dealing with patent holders. What it would mean in the context of a social media platform, for example, is anyone’s guess. Would it prevent biasing of information for political purposes, and what government official should be entrusted with that determination? What is certain is that it would invite government intervention into markets that are working well, if not perfectly. It would invite public officials to trade off economic welfare against a host of values embedded in the concept of fairness. Federal agencies charged with promoting the “public interest” have a difficult enough time reaching conclusions where competition is one of several specific values to be considered. Regulation designed to address all the evils high tech platforms are thought to perpetrate would make traditional economic or public-interest regulation look like child’s play.

Concluding remarks

Big Tech firms have generated immense value.  They may do real harm. From all that can now be gleaned, any harm has had little to do with antitrust, and it certainly doesn’t justify breaking them up.  Nor should they be broken up as an exercise in central economic planning. If abuses can be identified, such as undesirable invasions of privacy, focused legislation may be in order, but even then only if the government action is predictably less costly than the abuses.

[This post is the second in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]

[This post is authored by Philip Marsden, Bank of England & College of Europe, IG/Twitter:  @competition_flaneur]

Since the release of our Furman Report, I have been blessed with an uptick in #antitrusttourism. Everywhere I go, people are talking about what to do about Big Tech. Europe, the Middle East, LatAm, Asia, Down Under — and everyone has slightly different views. But the direction of travel is similar: something is going to be done, some action will be taken. The discussions I’ve been privileged to have with agency officials, advisors, tech in-house counsel and complainants have been balanced and fair. Disagreements tend to focus on the “how, now” rather than on re-hashing arguments about whether anything need be done at all. However, there is one jurisdiction which is the exception — and that is the US.   There, pragmatism seems to have been defenestrated — it is all or nothing: we break tech up, or we praise tech from the rooftops. The thing is, neither is an appropriate response, and the longer the debate paralyses the US antitrust community, the more the rest of the world will say “maybe we should see other people” and break with the hard-earned precedent of evidence-based inquiries for which the US agencies are famous.

In the Land of the Free, there is so much broad-brush polarisation. Of course, there is the political main stage, and we have our share of that in the UK too. But in the theatre of American antitrust we have Chicken Littles running around shrieking that all tech platforms are run by creeps, there is an evil design behind every algo tweak or acqui-hire, and the only solution is to ditch antitrust, and move fast and break things, especially break up the G-MAFIA and the upcoming BAT from Asia, ASAP. The Chicken Littles run rings around another group, the ostriches with their heads in the sand saying “nothing to look at here”, the platforms are only forces for good, markets tip tip and tip again, sit back and enjoy the “free” goodies, and leave any mopping up of the tears of whining complainants to fresh “studies” by antitrust enforcers.  

There is also an endemic American debate which is pitched as a deep existential crisis, but seems more of a distraction: this says let’s change the consumer welfare standard and import broader social concerns — which is matched by a shocked response that price-based consumer welfare analysis is surely tried and true, and any alteration would send the heavens crashing down again. I view this as a distraction because, from my experience as an enforcer and advisor, enlightened use of the consumer welfare standard already considers harms to innovation, non-price effects and, lately, privacy. So it may be interesting academic conference-fodder, but it largely misses the point: modern antitrust analysis is far broader, and far more aware of non-price harms, than it is portrayed to be.

The US, though, is the jurisdiction that lately generates the most heat in these debates, and the least light. It is also where demands for tech break-ups are loudest but where any suggestion of regulatory intervention is knee-jerk rejected with abject horror. So there is a lot of noise but not much signal. The US seems disconnected from the international consensus on the need for actual action — and is a lone singleton debating its split-brain into the ground. And when they travel to the rest of the world, many American enforcers say — commendably, with honesty — “Hey, it’s not me, it’s you.” “You’re the crazy ones with your Google fines, your Amazon own-sales bans, and your Facebook privacy abuse cases; we’ll just press ahead with our usual measured prosecutorial approach — oh, and do a big study.”

The thing is: no one believes the US will be anti-NIKE and “just do nothing”. If that were true, there wouldn’t have been a massive drop in tech stock value on the announcement of DOJ, FTC and particularly Senate inquiries. So some action will come stateside too… but what should that look like?

What I’d like to see is more engagement in the US with the international proposals. In our Furman Report, we supported a consumer welfare standard, but not laissez-faire. We supported a regulatory model developed through participative antitrust, but not common carrier regulation. And we did not favour breakups or presumptions against acquisitions by tech firms.  We tried to do some good, while preventing greater evils. Now, I still think that the most anti-competitive activity I’ve ever seen comes from government not from the abuses of market power of firms, so we do need to tread very carefully in designing our solutions and remedies. But we must remain vigilant against competitive problems in the tech sector and try to get ahead of them, particularly where they are created through structural aspects of these multi-sided markets, consumer inertia, entrenchment and enveloping, even in a world of “free” “goods” and “services”  (all in quotes since not everything online is free, or good, or even a service). So in Furman, we engaged with the debate but we avoided non-informative polarisation; not out of cowardice but to produce something hopefully relevant, informative, and which can actually be acted upon. It is an honour that our current Prime Minister and Chancellor have supported our work, and there are active moves to implement almost all of our proposals.   

We grounded our work in maintaining a focus on a dynamic consumer welfare standard, but we still firmly agreed that more intervention was needed. We concluded this after laying out our findings of myriad structural reasons for regulatory intervention (with no antitrust cause of complaint), and improving antitrust enforcement to address bad conduct as well. We sought to #dialupantitrust — through speeding up enforcement, and modernising merger control analysis — as well as #unlockingdigitalcompetition by developing a pro-competitive code of conduct, and data mobility (not just portability) and open API and similar remedies. There’s been lots of talk about that, and similarly-directed reports from the EU Trio and the Stigler Centre. I think discussing this sort of approach is the most pragmatic, evidence-based way forward: namely a model of participative antitrust, where the tech companies, their customers, consumer groups and government work out how to ensure platforms with strategic market status take on firm conduct obligations to get ahead of problems ex ante, and clear out many of the most toxic exclusionary or exploitative practices.  

Our approach would leave antitrust authorities to focus on the more nuanced behaviour, where #evidencematters and economic analysis and judgment really need to be brought to bear. This will primarily be in merger control — which we argue needs to be more forward-looking, more focussed on dynamic non-price impacts, and more able to address both the likelihood and magnitude of harms in a balanced way. This may also mean that authorities are less accepting of even heavily-sweated entry stories from merging parties. In ex post antitrust enforcement, the main problem is speed, and we need to adjust the overall investigatory and appeal mechanism to ensure it is not captured, not so much by the defendants and their armies of lawyers and economists as by our own side’s mistaken focus on victory.

I’ve seen senior agency lawyers refuse to release a decision until it has been sweated by 10 litigators and 3 QCs and is “appeal-proof” — which no decision ever is — adding months or even years to the process. And woe betide a case team, inquiry chair or agency head who tries to cut through that — for the response is always “oh, so you’re (much sucking of teeth and shaking of heads) content with Legal Risk???”. This is lazy. I’d much rather work with lawyers whose default is “What are we trying to achieve?”, not “I’ll just say No and then head off home” — a flaw that pervades some in-house counsel too. Legal risk is inherent in antitrust enforcement, not something to be feared. Frankly, many agencies now have so many levels of internal scrutiny that, married to a system of full-merits appeals, it is incredible that any enforcement ever happens at all. And don’t get me started on the gaming inherent in negotiating commitments that may not even be effective, but that don’t even get a chance to operate before going through years of review processes dominated by third-party “market tests”. These flaws in enforcement systems contribute to the perception (and reality) of antitrust law’s weakness, slowness and inapplicability to reality — and hence fuel calls for much stronger, more intrusive and more chilling regulation that could truly stifle a lot of genuine innovation.

So our Furman report tries to cut through this, by speeding up antitrust enforcement, making merger control more forward looking — without achieving mathematical certainty but still allowing judgement of what is harmful on balance — and proposes a pro-competitive code of conduct for tech firms to help develop and “walk the talk”.   Developing that code will be a key challenge as we need to further refine what level of economic dependency on a platform customers and suppliers need to have, before that tech co is deemed to have strategic market status and must take on additional responsibilities to act fairly with respect to its customers, users, and suppliers. Fortunately, the British Government’s approval of our plans for a Digital Markets Unit means we can get started — so watch this space.

I’ve never said that this will be easy to do. We have a model in the Groceries Code Adjudicator — which was set up as a competition remedy — after a long market investigation of the offline retail platform market identified a range of harms that could occur, harms that might even be price-lowering for consumers but could damage innovation, choice and legitimate competition on the merits. A list of platforms was drawn up, a code was applied, and a range of toxic exploitative and exclusionary conduct was driven out of the market. While not everything is perfect in retailing, far fewer complaints are landing on the CEO’s desk at the Competition & Markets Authority — so it can focus on other priorities. Our view is similar, while recognising that tech is a lot more complicated.

Part of our model is thus also drawn from other CMA work with which I was honoured to be involved: a two-year investigation of the retail banking platforms, which revealed a degree of supply-side and demand-side inertia that I had never seen before, except maybe in energy. Here the solution was not — as politicians wanted — to break up the big banks. That would have done nothing good, and a lot of bad. Instead we found that the dynamic between supply and demand was so broken that remedies on both sides of the equation were needed. Here it was truly an example not of “it’s not you, it’s me” but of “it’s both of us”: suppliers and consumers were contributing to the problem. We decided not to break up the platforms, though — but open them up — making data they were just sitting on (and which was a form of barrier to entry) available to fintech intermediaries, who would compete to access the data, train their new algos and thereby offer new choice tools to consumers.

Breakups would have added limping suppliers to the market, but much less competitive constraint. Opening up their data banks spurred the incumbents on to innovate faster than they might have, and customers to engage more with their banks. Our measure of success wasn’t switching — there is firm evidence that Britons switch their spouses more often than they switch their banks. So the remedy wasn’t breakup, and the KPI isn’t divorce, but is… engagement, on both sides of the relationship. And if it resulted in “maybe we should see other people” and multi-banking, then that is all to the overall good: for customer satisfaction, better engagement, and a more innovative retail banking ecosystem.

And that is where I think we should seek new remedies in the tech sphere. Breakups wouldn’t help us stimulate a more innovative creative ecosystem. But only opening up platforms after litigating on an essential facilities doctrine for 8 years wouldn’t get us there either. We need informed analysis, with tech experts and competition and consumer officials, to identify the drivers of business developments, to balance the myriad issues that we all have as citizens, and voters, and shoppers, and then to act surgically when we see that a competition law problem of abuse of market power, or structural economic dependency, is causing real harm.  

I believe that the Furman report, other international proposals from Australia, Canada and the EU, the UK’s Digital Markets Strategy, and enforcement action in the EU, Spain, Germany, Italy and elsewhere will provide us with natural experiments and targeted solutions to specific problems. And in the process, they will help fend off calls for short-term ‘fixes’ like breakups and other retrograde regulation that would chill innovation rather than go with its flow or, better, stimulate it.

Finally, we must not lose sight of one of my current bugbears, the incredible dependency we have allowed our governments and private sector to have on a handful of cloud computing companies. This may well have developed through superior skill, foresight and industry, and may be subject to rigorous procurement procedures and testing, but frankly, this is a ‘market’ that is too important to ignore. Social media and advertising may be pervasive but cloud is huge — with defence departments and banks and key infrastructure dependent on what are essentially private sector resiliency programmes. Even more than Facebook’s proposed currency Libra becoming “instantly systemic”, I fear we are already there with cloud: huge benefits, amazing efficiencies, but with it some zombie-apocalypse-level systemic risks not of one bank falling over, but many. Here it may well be that the bigger they are the more resilient they are, and the more able they are to police and rectify problems… but we have heard that before in other sectors and I just hope we can apply our developing proposals for digital platforms, to new challenges as well. The way tech is developing, we can’t live without it — but to live with it, we need to accept more responsibilities as enforcers, consumers and providers of these crucial services. So let’s stay together and work harder to #makeantitrustgreatagain and #unlockdigitalcompetition.   

On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm’s licensing agreements, indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable and non-discriminatory”) commitments in the context of standards-setting bodies, and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.

At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.

The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018, about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC’s major claims against Qualcomm were as follows:

  • Qualcomm has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices paid by end consumers for cellphones and tablets.
  • Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license to its chipset-maker rivals; and its exclusive deals with Apple.
  • The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
  • Given that Qualcomm has made a commitment to standard setting bodies to license these patents on FRAND terms, such behaviour qualifies as a breach of FRAND.

The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:

[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.

Qualcomm filed a motion to dismiss on April 3, 2017. This was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm’s conduct violates §§ 1 and 2 of the Sherman Act and that Qualcomm had entered into exclusive dealing arrangements with Apple. Thus, the court held, the FTC had adequately stated a claim under § 5 of the FTC Act.

It is important to note that the core of the FTC’s argument regarding Qualcomm’s abuse of its dominant position rests on Qualcomm’s adoption of the “no license, no chips” policy, which allegedly breaches its FRAND obligations. The complaint falls short, however, of showing how the royalties Qualcomm charges OEMs exceed FRAND rates, such that they actually amount to a breach and qualify as what the FTC calls a “tax” under its price-squeeze theory.

(The court did not address whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, it would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though the section’s scope remains quite amorphous.)

On August 30, the FTC filed a motion for partial summary judgment on claims relating to the applicability of California contract law. This would leave the antitrust issues to be decided at the subsequent hearing, which is set for January of next year.

In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S. based standards development organisations (SDOs):

  1. The Telecommunications Industry Association (TIA) and
  2. The Alliance for Telecommunications Industry Solutions (ATIS).

These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.

The FTC asserts that this obligation should be interpreted to extend to Qualcomm’s rival modem chip manufacturers and sellers. It therefore requests that the court grant summary judgment, since there are no disputed facts as to the obligation. It submits that this would “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments on the requirement to license to competitors, to ETSI, a third SDO.”

A review of the FTC’s heavily redacted filing and Qualcomm’s subsequent response indicates that questions of fact and law remain regarding Qualcomm’s licensing commitments and their scope. Thus, contrary to the FTC’s assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.

Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.

The IP licensing policies of the two SDOs provide for licensing of relevant patents, on FRAND terms, to all applicants who implement the standards. The key issues, however, are whether components such as modem chips can be said to implement standards, and whether component-level licensing falls within this ambit. The resolution of these issues remains unclear.

Qualcomm explains that its commitments to ATIS and TIA do not require licenses to be made available for modem chips, because modem chips do not implement or practice cellular standards; the standards do not define the operation of modem chips.

In contrast, the FTC’s complaint raises the question of whether FRAND commitments extend to licensing at all levels. Different components needed for a device come together to facilitate the adoption and implementation of a standard. It does not logically follow, however, that each individual component of the device separately practices or implements that standard, even though each contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.

These distinctions are significant for interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of the standard. Understanding the meaning of a “user” becomes critical here, and Qualcomm’s submission draws attention to this.

An important factor in determining who is a “user” of a particular standard is the extent to which the standard is practiced or implemented in the product at issue. Some SDOs have addressed this in their policies by clarifying that FRAND obligations extend to implementations that are “wholly compliant” or “fully conforming” with the specific standards. Clause 6.1 of the ETSI IPR Policy clarifies that a patent holder’s obligation to make licenses available is limited to “methods” and “equipments.” It defines an “equipment” as “a system or device fully conforming to a standard,” and “methods” as “any method or operation fully conforming to a standard.”

It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel has said in a decision that there is no agreement on the definition of the phrase “wholly compliant implementation.”

Device-level licensing is the prevailing industry-wide practice of companies like Ericsson, InterDigital, Nokia and others. In November 2017, the European Commission issued guidelines on the licensing of SEPs and took a balanced approach on this issue by declining to prescribe component-level licensing.

The former director-general of ETSI, Karl Rosenbrock, takes a contrary view, explaining that ETSI’s policy “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”

Dr. Bertram Huber, a legal expert who personally participated in the drafting of ETSI’s IPR policy, wrote a response to Rosenbrock in which he explains that ETSI’s IPR policies limit licensing obligations to systems “fully conforming” to the standard:

[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

Huber highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice, under which manufacturers of complete end-devices conclude licenses to the standard essential patents practiced in those end-devices.

Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs, in a collaboration called the 3rd Generation Partnership Project (3GPP), which works on the development of cellular technologies. TIA and ATIS are both accredited by ANSI. These SDOs’ policies are therefore likely to influence one another. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.

The non-discrimination obligation, which according to the FTC requires Qualcomm to license its competitors that manufacture and sell chips, is limited by the scope of the IPR policies and contractual agreements that bind Qualcomm, and it depends upon the specific SDO’s policy. As discussed, the policies of ATIS and TIA are unclear on this point.

In conclusion, the FTC’s filing does not obviate the need to hear extrinsic evidence on what Qualcomm’s commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA as to whether they include component-level licensing, and whether modem chips can be said to practice the standards in their entirety, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.

On Tuesday, August 28, 2018, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Is Amazon’s Appetite Bottomless? The Whole Foods Merger After One Year — that looked at the concerns surrounding the closing of the Amazon-Whole Foods merger, and how those concerns had played out over the last year.

The difficulty presented by the merger was, in some ways, its lack of difficulty: Even critics, while hearkening back to the Brandeisian fear of large firms, had little by way of legal objection to offer against the merger. Despite the acknowledged lack of an obvious legal basis for challenging the merger, most critics nevertheless expressed a somewhat inchoate and generalized concern that the merger would hasten the death of brick-and-mortar retail and imperil competition in the grocery industry. Critics further pointed to particular, related issues largely outside the scope of modern antitrust law — issues relating to the presumed effects of the merger on “localism” (i.e., small, local competitors), retail workers, startups with ancillary businesses (e.g., delivery services), data collection and use, and the like.

Steven Horwitz opened the symposium with an insightful and highly recommended post detailing the development of the grocery industry from its inception. Tracing through that history, Horwitz was optimistic that

Viewed from the long history of the evolution of the grocery store, the Amazon-Whole Foods merger made sense as the start of the next stage of that historical process. The combination of increased wealth that is driving the demand for upscale grocery stores, and the corresponding increase in the value of people’s time that is driving the demand for one-stop shopping and various forms of pick-up and delivery, makes clear the potential benefits of this merger.

Others in the symposium similarly acknowledged the potential transformation of the industry brought on by the merger, but challenged the critics’ despairing characterization of that transformation (Auer, Manne & Stout, Rinehart, Fruits, Atkinson).

At the most basic level, it was noted that, in the immediate aftermath of the merger, Whole Foods dropped prices across a number of categories as it sought to shore up its competitive position (Auer). Further, under relevant antitrust metrics — e.g., market share, ease of competitive entry, potential for exclusionary conduct — the merger was completely unobjectionable under existing doctrine (Fruits).

To critics’ claims that Amazon in general, and the merger in particular, was decimating the retail industry, several posts discussed the updated evidence suggesting that retail is not actually on the decline (although some individual retailers are certainly struggling to compete) (Auer, Manne & Stout). Moreover, and following from Horwitz’s account of the evolution of the grocery industry, it appears that the actual trajectory of the industry is not an either/or between online and offline, but instead a movement toward integrating both models into a single retail experience (Manne & Stout). Further, the post-merger flurry of business model innovation, venture capital investment, and new startup activity demonstrates that, in the face of entrepreneurial competitors like Walmart, Kroger, Aldi, and Instacart, Amazon’s impressive position online has not translated into automatic domination of the traditional grocery industry (Manne & Stout).

Symposium participants more circumspect about the merger suggested that Amazon’s behavior may be laying the groundwork for an eventual monopsony case (Sagers). Further, it was suggested, a future Section 2 case, difficult under prevailing antitrust orthodoxy, could be brought with a creative approach to market definition in light of Amazon’s conduct with its marketplace participants, its aggressive ebook contracting practices, and its development and roll-out of its own private label brands (Sagers).

Skeptics also picked up on early critics’ concerns about the aggregation of large amounts of consumer data, and worried that the merger could be part of a pattern representing a real, long-term threat to consumers that antitrust does not take seriously enough (Bona & Levitsky). Sounding a further alarm, Hal Singer noted that Amazon’s interest in pushing into new markets with data generated by, for example, devices like its Echo line could bolster its ability to exclude competitors.

More fundamentally, these contributors echoed the merger critics’ concerns that antitrust does not adequately take account of other values such as “promoting local, community-based, organic food production or ‘small firms’ in general.” (Bona & Levitsky; Singer).

Rob Atkinson, however, pointed out that these values are idiosyncratic and not likely shared by the vast majority of the population — and that antitrust law shouldn’t have anything to do with them:

In short, most of the opposition to Amazon/Whole Foods merger had little or nothing to do with economics and consumer welfare. It had everything to do with a competing vision for the kind of society we want to live in. The neo-Brandesian opponents, who Lind and I term “progressive localists”, seek an alternative economy predominantly made up of small firms, supported by big government and protected from global competition.

And Dirk Auer noted that early critics’ prophecies of foreclosure of competition through “data leveraging” and below-cost pricing hadn’t remotely come to pass, thus far.

Meanwhile, other contributors noted the paucity of evidence supporting many of these assertions, and pointed out the manifest value the merger seemed to be creating by pressuring competitors to adapt and better respond to consumers’ preferences (Horwitz, Rinehart, Auer, Fruits, Manne & Stout) — in the process shoring up, rather than killing, even smaller retailers that are willing and able to evolve with changing technology and shifting consumer preferences. “For all the talk of retail dying, the stores that are actually dying are the ones that fail to cater to their customers, not the ones that happen to be offline” (Manne & Stout).

At the same time, not all merger skeptics were moved by the Neo-Brandeisian assertions. Chris Sagers, for example, found much of the populist antitrust objection to be more public relations than substance. He suggested perhaps not taking these ideas and their promoters so seriously, and instead focusing on antitrust advocates with “real ideas” (like Sagers himself, of course).

Coming from a different angle, Will Rinehart also suggested not taking the criticisms too seriously, pointing to the evolving and complicated effects of the merger as Exhibit A for the need for regulatory humility:

Finally, this deal reiterates the need for regulatory humility. Almost immediately after the Amazon-Whole Foods merger was closed, prices at the store dropped and competitors struck a flurry of deals. Investments continue and many in the grocery retail space are bracing for a wave of enhancement to take hold. Even some of the most fierce critics of deal will have to admit there is a lot of uncertainty. It is unclear what business model will make the most sense in the long run, how these technologies will ultimately become embedded into production processes, and how consumers will benefit. Combined, these features underscore the difficulty, but the necessity, in implementing dynamic insights into antitrust institutions.

Offering generous praise for this symposium (thanks, Will!) and echoing the points made by other participants regarding the dynamic and unknowable course of competition (Auer, Horwitz, Manne & Stout, Fruits), Rinehart concludes:

Retrospectives like this symposium offer a chance to understand what the discussion missed at the time and what is needed to better understand innovation and competition in markets. While it might be too soon to close the book on this case, the impact can already be felt in the positions others are taking in response. In the end, the deal probably won’t be remembered for extending Amazon’s dominance into another market because that is a phantom concern. Rather, it will probably be best remembered as the spark that drove traditional retail outlets to modernize their logistics and fulfillment efforts.  

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners, and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!


Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically serve to benefit startups by providing them with the data that large firms possess. The necessary implication here is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups, without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it is very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound by onerous essential-facilities requirements (or is classified as a public utility) merely because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their businesses to the processes of another firm, which generates an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to make, but it is a risk. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).  

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms will have behaved anticompetitively, but merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by regulators’ gut feeling that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market; yet the Supreme Court, relying on the SCP framework’s aversion to concentration in itself, blocked the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door to regulation by political whim. The value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers and the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.


As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from HULU, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.
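The tradeoff described above can be reduced to a simple profitability condition (a stylized sketch, not an empirical claim about Comcast). Degrading rival content pays only if

    $\Delta\Pi_{\text{content}} > \Delta\Pi_{\text{distribution}}$,

where $\Delta\Pi_{\text{content}}$ is the gain from softened competition in content and $\Delta\Pi_{\text{distribution}}$ is the loss from subscribers who defect to rival ISPs. Robust competition among ISPs raises $\Delta\Pi_{\text{distribution}}$ and thus shrinks the set of cases in which non-neutral network management is profitable, which is why the extent of ISP competition matters to the diagnosis.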

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the  midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct), are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.
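
To make the excerpt’s error-cost comparison concrete, here is a minimal expected-cost sketch. The notation is mine and purely illustrative; the book does not formalize the comparison this way:

$$\text{Expected cost of a legal regime} = D + p_{FP}H_{FP} + p_{FN}H_{FN}$$

Here $D$ is the decision cost of administering the regime, $p_{FP}$ and $H_{FP}$ are the probability and harm of condemning a benign practice (a false positive), and $p_{FN}$ and $H_{FN}$ are the probability and harm of permitting a harmful one (a false negative). A per se ban on non-neutral network management minimizes $D$ and drives $p_{FN}$ toward zero, but because most non-neutral practices benefit consumers, it inflates $p_{FP}$. A consumer-welfare-based rule of reason raises $D$ somewhat while shrinking both error terms. The claim in the excerpt is that the second effect dominates the first, so antitrust carries the lower total expected cost.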

How to Regulate’s full discussion of net neutrality and Title II is here: Net Neutrality Discussion in How to Regulate.