Archives For DGComp

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by plaintiffs (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

United Brands was the EU’s foundational case on excessive pricing, and the European Commission reiterated in its 2009 guidance paper on abuse-of-dominance cases that such exploitative abuses remain actionable. For some time, however, the commission showed little appetite for bringing such cases. In recent years, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)
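
For readers who prefer to see the economics in stylized form, margin-squeeze claims are commonly framed around an “as-efficient-competitor” imputation test: could a rival as efficient as the dominant firm cover its own downstream costs, given the wholesale price it must pay and the retail price it must match? The sketch below is purely illustrative; the function name and figures are hypothetical rather than drawn from the case law.

```python
# Stylized "as-efficient-competitor" imputation test often used to frame
# margin-squeeze claims. All names and figures here are hypothetical.

def margin_is_squeezed(retail_price: float, wholesale_price: float,
                       downstream_cost: float) -> bool:
    """Return True if an equally efficient rival cannot cover its own
    downstream costs out of the spread between retail and wholesale prices."""
    spread = retail_price - wholesale_price
    return spread < downstream_cost

# A rival pays 30 for the wholesale input, incurs 8 in downstream costs,
# and must match a retail price of 35: the 5-unit spread is too thin.
print(margin_is_squeezed(retail_price=35.0, wholesale_price=30.0,
                         downstream_cost=8.0))  # True
```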

As Scalia observed in Trinko, forcing firms to charge prices below a market’s natural equilibrium dampens their incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.
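
To make the recoupment logic concrete, here is a minimal, purely illustrative sketch of the arithmetic: predation pays only if the discounted profits earned after rivals exit exceed the discounted losses incurred while pricing below cost. The figures and discount rate are hypothetical.

```python
# Stylized recoupment arithmetic (all figures hypothetical). Predation is a
# rational strategy only if discounted post-predation profits exceed the
# discounted losses incurred while pricing below cost.

def predation_is_rational(loss_per_period: float, predation_periods: int,
                          extra_profit_per_period: float, recoupment_periods: int,
                          discount_rate: float = 0.1) -> bool:
    losses = sum(loss_per_period / (1 + discount_rate) ** t
                 for t in range(predation_periods))
    recoupment = sum(extra_profit_per_period / (1 + discount_rate) ** t
                     for t in range(predation_periods,
                                    predation_periods + recoupment_periods))
    return recoupment > losses

# With low entry barriers, rivals return quickly, the recoupment window is
# short, and the scheme never pays off, which is Brooke Group's point.
print(predation_is_rational(100, 3, 60, 2))   # False
print(predation_is_rational(100, 3, 60, 10))  # True
```

The Brooke Group recoupment requirement asks, in effect, whether the second leg of this calculation could plausibly ever outweigh the first.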

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
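
For illustration only, the cost-based screens described above (commonly associated with the AKZO line of cases) can be sketched as a simple decision rule; the labels are simplified and the figures hypothetical. Note that recoupment does not appear anywhere among the inputs.

```python
# Rough sketch of the EU cost-based presumptions described above (simplified;
# not a statement of the law). Note the absence of any recoupment input.

def eu_predation_screen(price: float, avg_variable_cost: float,
                        avg_total_cost: float,
                        evidence_of_exclusionary_plan: bool) -> str:
    if price < avg_variable_cost:
        # Below average variable cost: predation is presumed.
        return "presumed predatory"
    if price < avg_total_cost:
        # Between average variable and average total cost: predatory only if
        # the pricing is shown to be part of a plan to eliminate a competitor.
        return "predatory" if evidence_of_exclusionary_plan else "not established"
    return "not predatory on this screen"

print(eu_predation_screen(price=8, avg_variable_cost=10, avg_total_cost=15,
                          evidence_of_exclusionary_plan=False))  # presumed predatory
```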

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that provided by U.S. law’s reliance on the consumer welfare standard. The EU process is instead driven by a number of parallel—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to these findings. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

Ursula von der Leyen has just announced the composition of the next European Commission. For tech firms, the headline is that Margrethe Vestager will not only retain her job as the head of DG Competition, she will also oversee the EU’s entire digital markets policy in her new role as Vice-President in charge of digital policy. Her promotion within the Commission as well as her track record at DG Competition both suggest that the digital economy will continue to be the fulcrum of European competition and regulatory intervention for the next five years.

The regulation (or not) of digital markets is an extremely important topic. Not only do we spend vast portions of both our professional and personal lives online, but firms operating in digital markets will likely employ an ever-increasing share of the labor force in the near future.

Likely recognizing the growing importance of the digital economy, the previous EU Commission intervened heavily in the digital sphere over the past five years. This resulted in a series of high-profile regulations (including the GDPR, the platform-to-business regulation, and the reform of EU copyright) and competition law decisions (most notably the Google cases). 

Lauded by supporters of the administrative state, these interventions have drawn flak from numerous corners. This includes foreign politicians (especially Americans) who see in these measures an attempt to protect the EU’s tech industry from its foreign rivals, as well as free market enthusiasts who argue that the old continent has moved further in the direction of digital paternalism.

Vestager’s increased role within the new Commission, the EU’s heavy regulation of digital markets over the past five years, and early pronouncements from Ursula von der Leyen all suggest that the EU is in for five more years of significant government intervention in the digital sphere.

Vestager the slayer of Big Tech

During her five years as Commissioner for competition, Margrethe Vestager has repeatedly been called the most powerful woman in Brussels (see here and here), and it is easy to see why. Wielding the heavy hammer of European competition and state aid enforcement, she has relentlessly attacked the world’s largest firms, especially America’s so-called “Tech Giants”.

The record-breaking fines imposed on Google were probably her most high-profile victory. When Vestager entered office, in 2014, the EU’s case against Google had all but stalled. The Commission and Google had spent the best part of four years haggling over a potential remedy that was ultimately thrown out. Grabbing the bull by the horns, Margrethe Vestager made the case her own. 

Five years, three infringement decisions, and 8.25 billion euros later, Google probably wishes it had managed to keep the 2014 settlement alive. While Vestager’s supporters claim that justice was served, Barack Obama and Donald Trump, among others, branded her a protectionist (although, as Geoffrey Manne and I have noted, the evidence for this is decidedly mixed). Critics also argued that her decisions would harm innovation and penalize consumers (see here and here). Regardless, the case propelled Vestager into the public eye. It turned her into one of the most important political forces in Brussels. Cynics might even suggest that this was her plan all along.

But Google is not the only tech firm to have squared off with Vestager. Under her watch, Qualcomm was slapped with a total of €1.239 billion in fines. The Commission also opened an investigation into Amazon’s operation of its online marketplace. If previous cases are anything to go by, the probe will most probably end with a headline-grabbing fine. The Commission even launched a probe into Facebook’s planned Libra cryptocurrency, though the project has yet to be launched, and recent talk suggests it may never be. Finally, in the area of state aid enforcement, the Commission ordered Ireland to recover €13 billion in allegedly undue tax benefits from Apple.

Margrethe Vestager also initiated a large-scale consultation on competition in the digital economy. The ensuing report concluded that the answer was more competition enforcement. Its findings will likely be cited by the Commission as further justification to ramp up its already significant competition investigations in the digital sphere.

Outside of the tech sector, Vestager has shown that she is not afraid to adopt controversial decisions. Blocking the proposed merger between Siemens and Alstom notably drew the ire of Angela Merkel and Emmanuel Macron, as the deal would have created a European champion in the rail industry (a key political demand in Germany and France). 

These numerous interventions all but guarantee that Vestager will not be pushing for light touch regulation in her new role as Vice-President in charge of digital policy. Vestager is also unlikely to put a halt to some of the “Big Tech” investigations that she herself launched during her previous spell at DG Competition. Finally, given her evident political capital in Brussels, it’s a safe bet that she will be given significant leeway to push forward landmark initiatives of her choosing. 

Vestager the prophet

Beneath these attempts to rein in “Big Tech” lies a deeper agenda that is symptomatic of the EU’s current zeitgeist. Over the past couple of years, the EU has been steadily blazing a trail in digital market regulation (although much less so in digital market entrepreneurship and innovation). Underlying this push is a worldview that sees consumers and small startups as the uninformed victims of gigantic tech firms. True to form, the EU’s solution to this problem is more regulation and government intervention. This is unlikely to change given the Commission’s new (old) leadership.

If digital paternalism is the dogma, then Margrethe Vestager is its prophet. As Thibault Schrepel has shown, her speeches routinely call for digital firms to act “fairly”, and for policymakers to curb their “power”. According to her, it is our democracy that is at stake. In her own words, “you can’t sensibly talk about democracy today, without appreciating the enormous power of digital technology”. And yet, if history tells us one thing, it is that heavy-handed government intervention is anathema to liberal democracy. 

The Commission’s Google decisions neatly illustrate this worldview. For instance, in Google Shopping, the Commission concluded that Google was coercing consumers into using its own services, to the detriment of competition. But the Google Shopping decision focused entirely on competitors, and offered no evidence showing actual harm to consumers (see here). Could it be that users choose Google’s products because they actually prefer them? Rightly or wrongly, the Commission went to great lengths to dismiss evidence that arguably pointed in this direction (see here, §506-538).

Other European forays into the digital space are similarly paternalistic. The General Data Protection Regulation (GDPR) assumes that consumers are ill-equipped to decide what personal information they share with online platforms. Cue a deluge of time-consuming consent forms and cookie-related pop-ups. The jury is still out on whether the GDPR has improved users’ privacy. But it has been extremely costly for businesses — American S&P 500 companies and UK FTSE 350 companies alone spent an estimated total of $9 billion to comply with the GDPR — and has at least temporarily slowed venture capital investment in Europe.

Likewise, the recently adopted Regulation on platform-to-business relations operates under the assumption that small firms routinely fall prey to powerful digital platforms: 

Given that increasing dependence, the providers of those services [i.e. digital platforms] often have superior bargaining power, which enables them to, in effect, behave unilaterally in a way that can be unfair and that can be harmful to the legitimate interests of their business users and, indirectly, also of consumers in the Union. For instance, they might unilaterally impose on business users practices which grossly deviate from good commercial conduct, or are contrary to good faith and fair dealing.

But the platform-to-business Regulation conveniently overlooks the fact that economic opportunism is a two-way street. Small startups are equally capable of behaving in ways that greatly harm the reputation and profitability of much larger platforms. The Cambridge Analytica leak springs to mind. And what’s “unfair” to one small business may offer massive benefits to other businesses and consumers.

Make what you will of the underlying merits of these individual policies; we should at least recognize that they are part of a greater whole, in which Brussels is regulating ever greater aspects of our online lives — and not clearly for the benefit of consumers.

With Margrethe Vestager now overseeing even more of these regulatory initiatives, readers should expect more of the same. The Mission Letter she received from Ursula von der Leyen is particularly enlightening in that respect: 

I want you to coordinate the work on upgrading our liability and safety rules for digital platforms, services and products as part of a new Digital Services Act…. 

I want you to focus on strengthening competition enforcement in all sectors. 

A hard rain’s a-gonna fall… on Big Tech

Today’s announcements all but confirm that the EU will stay its current course in digital markets. This is unfortunate.

Digital firms currently provide consumers with tremendous benefits at no direct charge. A recent study shows that median users would need to be paid €15,875 to give up search engines for a year. They would also require €536 in order to forgo WhatsApp for a month, €97 for Facebook, and €59 to drop digital maps for the same duration. 

By continuing to heap ever more regulations on successful firms, the EU risks killing the goose that laid the golden egg. This is not just a theoretical possibility. The EU’s policies have already put technology firms under huge stress, and it is not clear that this has always been outweighed by benefits to consumers. The GDPR has notably caused numerous foreign firms to stop offering their services in Europe. And the EU’s Google decisions have forced Google to start charging device manufacturers for some of its apps. Are these really victories for European consumers?

It is also worth asking why there are so few European leaders in the digital economy. Not so long ago, European firms such as Nokia and Ericsson were at the forefront of the digital revolution. Today, with the possible exception of Spotify, the EU has fallen further down the global pecking order in the digital economy. 

The EU knows this, and plans to invest €100 billion to boost European tech startups. But these sums will be all but wasted if excessive regulation threatens the long-term competitiveness of European startups.

So if more of the same government intervention isn’t the answer, then what is? Recognizing that consumers have agency and are responsible for their own decisions might be a start. If you don’t like Facebook, close your account. Want a search engine that protects your privacy? Try DuckDuckGo. If YouTube and Spotify’s suggestions don’t appeal to you, create your own playlists and turn off the autoplay functions. The digital world has given us more choice than we could ever have dreamt of; but this comes with responsibility. Both Margrethe Vestager and the European institutions have often seemed oblivious to this reality. 

If the EU wants to turn itself into a digital economy powerhouse, it will have to switch towards light-touch regulation that allows firms to experiment with disruptive services, flexible employment options, and novel monetization strategies. But getting there requires a fundamental rethink — one that the EU’s previous leadership refused to contemplate. Margrethe Vestager’s dual role within the next Commission suggests that change isn’t coming any time soon.

Source: Benedict Evans

[N]ew combinations are, as a rule, embodied, as it were, in new firms which generally do not arise out of the old ones but start producing beside them; … in general it is not the owner of stagecoaches who builds railways. – Joseph Schumpeter, January 1934

Elizabeth Warren wants to break up the tech giants — Facebook, Google, Amazon, and Apple — claiming they have too much power and represent a danger to our democracy. As part of our response to her proposal, we shared a couple of headlines from 2007 claiming that MySpace had an unassailable monopoly in the social media market.

Tommaso Valletti, the chief economist of the Directorate-General for Competition (DG COMP) of the European Commission, said, in what we assume was a reference to our posts, “they go on and on with that single example to claim that [Facebook] and [Google] are not a problem 15 years later … That’s not what I would call an empirical regularity.”

We appreciate the invitation to show that prematurely dubbing companies “unassailable monopolies” is indeed an empirical regularity.

It’s Tough to Make Predictions, Especially About the Future of Competition in Tech

No one is immune to this phenomenon. Antitrust regulators often take a static view of competition, failing to anticipate dynamic technological forces that will upend market structure and competition.

Scientists and academics make a different kind of error. They are driven by the need to satisfy their curiosity rather than to satisfy shareholders. Upon inventing a new technology or discovering a new scientific truth, academics often fail to see the commercial implications of their findings.

Maybe the titans of industry don’t make these kinds of mistakes because they have skin in the game? The profit and loss statement is certainly a merciless master. But it does not give CEOs the power of premonition. Corporate executives hailed as visionaries in one era often become blinded by their success, failing to see impending threats to their company’s core value propositions.

Furthermore, it’s often hard as outside observers to tell after the fact whether business leaders just didn’t see a tidal wave of disruption coming or, worse, they did see it coming and were unable to steer their bureaucratic, slow-moving ships to safety. Either way, the outcome is the same.

Here’s the pattern we observe over and over: extreme success in one context makes it difficult to predict how and when the next paradigm shift will occur in the market. Incumbents become less innovative as they get lulled into stagnation by high profit margins in established lines of business. (This is essentially the thesis of Clay Christensen’s The Innovator’s Dilemma).

Even if the anti-tech populists are powerless to make predictions, history does offer us some guidance about the future. We have seen time and again that apparently unassailable monopolists are quite effectively assailed by technological forces beyond their control.

PCs

Source: Horace Dediu

Jan 1977: Commodore PET released

Jun 1977: Apple II released

Aug 1977: TRS-80 released

Feb 1978: “I.B.M. Says F.T.C. Has Ended Its Typewriter Monopoly Study” (NYT)

Mobile

Source: Comscore

Mar 2000: Palm IPOs at a $53 billion valuation

Sep 2006: “Everyone’s always asking me when Apple will come out with a cellphone. My answer is, ‘Probably never.’” – David Pogue (NYT)

Apr 2007: “There’s no chance that the iPhone is going to get any significant market share.” – Steve Ballmer (USA TODAY)

Jun 2007: iPhone released

Nov 2007: “Nokia: One Billion Customers—Can Anyone Catch the Cell Phone King?” (Forbes)

Sep 2013: “Microsoft CEO Ballmer Bids Emotional Farewell to Wall Street” (Reuters)

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

Search

Source: Distilled

Mar 1998: “How Yahoo! Won the Search Wars” (Fortune)

Once upon a time, Yahoo! was an Internet search site with mediocre technology. Now it has a market cap of $2.8 billion. Some people say it’s the next America Online.

Sep 1998: Google founded

Instant Messaging

Sep 2000: “AOL Quietly Linking AIM, ICQ” (ZDNet)

AOL’s dominance of instant messaging technology, the kind of real-time e-mail that also lets users know when others are online, has emerged as a major concern of regulators scrutinizing the company’s planned merger with Time Warner Inc. (twx). Competitors to Instant Messenger, such as Microsoft Corp. (msft) and Yahoo! Inc. (yhoo), have been pressing the Federal Communications Commission to force AOL to make its services compatible with competitors’.

Dec 2000: “AOL’s Instant Messaging Monopoly?” (Wired)

Dec 2015: Report for the European Parliament

There have been isolated examples, as in the case of obligations of the merged AOL / Time Warner to make AOL Instant Messenger interoperable with competing messaging services. These obligations on AOL are widely viewed as having been a dismal failure.

Oct 2017: AOL shuts down AIM

Jan 2019: “Zuckerberg Plans to Integrate WhatsApp, Instagram and Facebook Messenger” (NYT)

Retail

Source: Seeking Alpha

May 1997: Amazon IPO

Mar 1998: American Booksellers Association files antitrust suit against Borders, B&N

Feb 2005: Amazon Prime launches

Jul 2006: “Breaking the Chain: The Antitrust Case Against Wal-Mart” (Harper’s)

Feb 2011: “Borders Files for Bankruptcy” (NYT)

Social

Feb 2004: Facebook founded

Jan 2007: “MySpace Is a Natural Monopoly” (TechNewsWorld)

Seventy percent of Yahoo 360 users, for example, also use other social networking sites — MySpace in particular. Ditto for Facebook, Windows Live Spaces and Friendster … This presents an obvious, long-term business challenge to the competitors. If they cannot build up a large base of unique users, they will always be on MySpace’s periphery.

Feb 2007: “Will Myspace Ever Lose Its Monopoly?” (Guardian)

Jun 2011: “Myspace Sold for $35m in Spectacular Fall from $12bn Heyday” (Guardian)

Music

Source: RIAA

Dec 2003: “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.” – Steve Jobs (Rolling Stone)

Apr 2006: Spotify founded

Jul 2009: “Apple’s iPhone and iPod Monopolies Must Go” (PC World)

Jun 2015: Apple Music announced

Video

Source: OnlineMBAPrograms

Apr 2003: Netflix reaches one million subscribers for its DVD-by-mail service

Mar 2005: FTC blocks Blockbuster/Hollywood Video merger

Sep 2006: Amazon launches Prime Video

Jan 2007: Netflix streaming launches

Oct 2007: Hulu launches

May 2010: Hollywood Video’s parent company files for bankruptcy

Sep 2010: Blockbuster files for bankruptcy

The Only Winning Move Is Not to Play

Predicting the future of competition in the tech industry is such a fraught endeavor that even articles about how hard it is to make predictions include incorrect predictions. The authors just cannot help themselves. A March 2012 BBC article “The Future of Technology… Who Knows?” derided the naysayers who predicted doom for Apple’s retail store strategy. Its kicker?

And that is why when you read that the Blackberry is doomed, or that Microsoft will never make an impression on mobile phones, or that Apple will soon dominate the connected TV market, you need to take it all with a pinch of salt.

But Blackberry was doomed and Microsoft never made an impression on mobile phones. (Half credit for Apple TV, which currently has a 15% market share).

Nobel Prize-winning economist Paul Krugman wrote a piece for Red Herring magazine (seriously) in June 1998 with the title “Why most economists’ predictions are wrong.” Headline-be-damned, near the end of the article he made the following prediction:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

Robert Metcalfe himself predicted in a 1995 column that the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” After pledging to “eat his words” if the prediction did not come true, “in front of an audience, he put that particular column into a blender, poured in some water, and proceeded to eat the resulting frappe with a spoon.”

A Change Is Gonna Come

Benedict Evans, a venture capitalist at Andreessen Horowitz, has the best summary of why competition in tech is especially difficult to predict:

IBM, Microsoft and Nokia were not beaten by companies doing what they did, but better. They were beaten by companies that moved the playing field and made their core competitive assets irrelevant. The same will apply to Facebook (and Google, Amazon and Apple).

Elsewhere, Evans tried to reassure his audience that we will not be stuck with the current crop of tech giants forever:

With each cycle in tech, companies find ways to build a moat and make a monopoly. Then people look at the moat and think it’s invulnerable. They’re generally right. IBM still dominates mainframes and Microsoft still dominates PC operating systems and productivity software. But… It’s not that someone works out how to cross the moat. It’s that the castle becomes irrelevant. IBM didn’t lose mainframes and Microsoft didn’t lose PC operating systems. Instead, those stopped being ways to dominate tech. PCs made IBM just another big tech company. Mobile and the web made Microsoft just another big tech company. This will happen to Google or Amazon as well. Unless you think tech progress is over and there’ll be no more cycles … It is deeply counter-intuitive to say ‘something we cannot predict is certain to happen’. But this is nonetheless what’s happened to overturn pretty much every tech monopoly so far.

If this time is different — or if there are more false negatives than false positives in the monopoly prediction game — then the advocates for breaking up Big Tech should try to make that argument instead of falling back on “big is bad” rhetoric. As for us, we’ll bet that we have not yet reached the end of history — tech progress is far from over.

 

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that posits a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would allow the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of the introduction of this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change, as compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole; and (ii) without reference to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is far more intrusive, however, because R&D incentives are considered in the abstract, without further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

With this in mind, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rely on sound theoretical and empirical infrastructure. Yet, despite efforts by the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the “Big Six” agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and screen initial intuitions of adverse effects of mergers on innovation through the lenses of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives such as R&D figures and patent statistics as a preliminary screening tool for the assessment of the effects of the merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised or are equal in terms of commercial value.
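
A toy calculation (with hypothetical figures) illustrates the point about R&D intensity: because intensity is conventionally measured as R&D spending divided by sales, it can rise simply because revenue falls, with no change whatsoever in the underlying research effort.

```python
# R&D intensity is typically computed as R&D spending divided by revenue.
# Hypothetical figures: research outlays stay flat while a commodity-price
# slump cuts revenue, so measured "intensity" rises without any change in
# actual research effort.

def rd_intensity(rd_spending: float, revenue: float) -> float:
    return rd_spending / revenue

before = rd_intensity(rd_spending=1.2, revenue=15.0)  # e.g., billions of euros
after = rd_intensity(rd_spending=1.2, revenue=11.0)   # revenue falls by ~27%

print(f"{before:.1%} -> {after:.1%}")  # 8.0% -> 10.9%
```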

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in those markets that are being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not account for the maturity of patents and, in particular, do not say much about whether a patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may be only one among many factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture the fact that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and on market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Last, antitrust agencies are well placed to understand that, beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies that are pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title says it all: “The Pretense of Knowledge.”