
Ursula von der Leyen has just announced the composition of the next European Commission. For tech firms, the headline is that Margrethe Vestager will not only retain her job as the head of DG Competition, but will also oversee the EU’s entire digital markets policy in her new role as Vice-President in charge of digital policy. Her promotion within the Commission, as well as her track record at DG Competition, suggests that the digital economy will continue to be the fulcrum of European competition and regulatory intervention for the next five years.

The regulation (or not) of digital markets is an extremely important topic. Not only do we spend vast swathes of both our professional and personal lives online, but firms operating in digital markets will likely employ an ever-increasing share of the labor force in the near future.

Likely recognizing the growing importance of the digital economy, the previous EU Commission intervened heavily in the digital sphere over the past five years. This resulted in a series of high-profile regulations (including the GDPR, the platform-to-business regulation, and the reform of EU copyright) and competition law decisions (most notably the Google cases). 

Lauded by supporters of the administrative state, these interventions have drawn flak from numerous corners. Critics include foreign politicians (especially Americans) who see in these measures an attempt to protect the EU’s tech industry from its foreign rivals, as well as free market enthusiasts who argue that the old continent has moved further in the direction of digital paternalism.

Vestager’s increased role within the new Commission, the EU’s heavy regulation of digital markets over the past five years, and early pronouncements from Ursula von der Leyen all suggest that the EU is in for five more years of significant government intervention in the digital sphere.

Vestager the slayer of Big Tech

During her five years as Commissioner for competition, Margrethe Vestager has repeatedly been called the most powerful woman in Brussels (see here and here), and it is easy to see why. Wielding the heavy hammer of European competition and state aid enforcement, she has relentlessly attacked the world’s largest firms, especially America’s so-called “Tech Giants”.

The record-breaking fines imposed on Google were probably her most high-profile victory. When Vestager entered office, in 2014, the EU’s case against Google had all but stalled. The Commission and Google had spent the best part of four years haggling over a potential remedy that was ultimately thrown out. Grabbing the bull by the horns, Margrethe Vestager made the case her own. 

Five years, three infringement decisions, and 8.25 billion euros later, Google probably wishes it had managed to keep the 2014 settlement alive. While Vestager’s supporters claim that justice was served, Barack Obama and Donald Trump, among others, branded her a protectionist (although, as Geoffrey Manne and I have noted, the evidence for this is decidedly mixed). Critics also argued that her decisions would harm innovation and penalize consumers (see here and here). Regardless, the case propelled Vestager into the public eye. It turned her into one of the most important political forces in Brussels. Cynics might even suggest that this was her plan all along.

But Google is not the only tech firm to have squared off with Vestager. Under her watch, Qualcomm was slapped with a total of €1.239 billion in fines. The Commission also opened an investigation into Amazon’s operation of its online marketplace. If previous cases are anything to go by, the probe will most probably end with a headline-grabbing fine. The Commission even launched a probe into Facebook’s planned Libra cryptocurrency, a project that has yet to launch and, according to recent talk, may never do so. Finally, in the area of state aid enforcement, the Commission ordered Ireland to recover €13 billion in allegedly undue tax benefits from Apple.

Margrethe Vestager also initiated a large-scale consultation on competition in the digital economy. The ensuing report concluded that more competition enforcement was the appropriate response. Its findings will likely be cited by the Commission as further justification to ramp up its already significant competition investigations in the digital sphere.

Outside of the tech sector, Vestager has shown that she is not afraid to adopt controversial decisions. Blocking the proposed merger between Siemens and Alstom notably drew the ire of Angela Merkel and Emmanuel Macron, as the deal would have created a European champion in the rail industry (a key political demand in Germany and France). 

These numerous interventions all but guarantee that Vestager will not be pushing for light-touch regulation in her new role as Vice-President in charge of digital policy. Vestager is also unlikely to put a halt to some of the “Big Tech” investigations that she herself launched during her previous spell at DG Competition. Finally, given her evident political capital in Brussels, it’s a safe bet that she will be given significant leeway to push forward landmark initiatives of her choosing.

Vestager the prophet

Beneath these attempts to rein in “Big Tech” lies a deeper agenda that is symptomatic of the EU’s current zeitgeist. Over the past couple of years, the EU has been steadily blazing a trail in digital market regulation (although much less so in digital market entrepreneurship and innovation). Underlying this push is a worldview that sees consumers and small startups as the uninformed victims of gigantic tech firms. True to form, the EU’s solution to this problem is more regulation and government intervention. This is unlikely to change given the Commission’s new (old) leadership.

If digital paternalism is the dogma, then Margrethe Vestager is its prophet. As Thibault Schrepel has shown, her speeches routinely call for digital firms to act “fairly”, and for policymakers to curb their “power”. According to her, it is our democracy that is at stake. In her own words, “you can’t sensibly talk about democracy today, without appreciating the enormous power of digital technology”. And yet, if history tells us one thing, it is that heavy-handed government intervention is anathema to liberal democracy. 

The Commission’s Google decisions neatly illustrate this worldview. For instance, in Google Shopping, the Commission concluded that Google was coercing consumers into using its own services, to the detriment of competition. But the Google Shopping decision focused entirely on competitors, and offered no evidence showing actual harm to consumers (see here). Could it be that users choose Google’s products because they actually prefer them? Rightly or wrongly, the Commission went to great lengths to dismiss evidence that arguably pointed in this direction (see here, §506-538).

Other European forays into the digital space are similarly paternalistic. The General Data Protection Regulation (GDPR) assumes that consumers are ill-equipped to decide what personal information they share with online platforms. Cue a deluge of time-consuming consent forms and cookie-related pop-ups. The jury is still out on whether the GDPR has improved users’ privacy. But it has been extremely costly for businesses — American S&P 500 companies and UK FTSE 350 companies alone spent an estimated total of $9 billion to comply with the GDPR — and has at least temporarily slowed venture capital investment in Europe.

Likewise, the recently adopted Regulation on platform-to-business relations operates under the assumption that small firms routinely fall prey to powerful digital platforms: 

Given that increasing dependence, the providers of those services [i.e. digital platforms] often have superior bargaining power, which enables them to, in effect, behave unilaterally in a way that can be unfair and that can be harmful to the legitimate interests of their business users and, indirectly, also of consumers in the Union. For instance, they might unilaterally impose on business users practices which grossly deviate from good commercial conduct, or are contrary to good faith and fair dealing.

But the platform-to-business Regulation conveniently overlooks the fact that economic opportunism is a two-way street. Small startups are equally capable of behaving in ways that greatly harm the reputation and profitability of much larger platforms. The Cambridge Analytica leak springs to mind. And what’s “unfair” to one small business may offer massive benefits to other businesses and consumers.

Make what you will of the underlying merits of these individual policies; we should at least recognize that they are part of a greater whole, in which Brussels is regulating ever greater aspects of our online lives — and not clearly for the benefit of consumers.

With Margrethe Vestager now overseeing even more of these regulatory initiatives, readers should expect more of the same. The Mission Letter she received from Ursula von der Leyen is particularly enlightening in that respect: 

I want you to coordinate the work on upgrading our liability and safety rules for digital platforms, services and products as part of a new Digital Services Act…. 

I want you to focus on strengthening competition enforcement in all sectors. 

A hard rain’s a-gonna fall… on Big Tech

Today’s announcements all but confirm that the EU will stay its current course in digital markets. This is unfortunate.

Digital firms currently provide consumers with tremendous benefits at no direct charge. A recent study shows that median users would need to be paid €15,875 to give up search engines for a year. They would also require €536 in order to forgo WhatsApp for a month, €97 for Facebook, and €59 to drop digital maps for the same duration. 

By continuing to heap ever more regulations on successful firms, the EU risks killing the goose that lays the golden eggs. This is not just a theoretical possibility. The EU’s policies have already put technology firms under huge stress, and it is not clear that this has always been outweighed by benefits to consumers. The GDPR has notably caused numerous foreign firms to stop offering their services in Europe. And the EU’s Google decisions have forced the company to start charging manufacturers for some of its apps. Are these really victories for European consumers?

It is also worth asking why there are so few European leaders in the digital economy. Not so long ago, European firms such as Nokia and Ericsson were at the forefront of the digital revolution. Today, with the possible exception of Spotify, the EU has fallen further down the global pecking order in the digital economy. 

The EU knows this, and plans to invest €100 billion in order to boost European tech startups. But these sums will be all but wasted if excessive regulation threatens the long-term competitiveness of European startups.

So if more of the same government intervention isn’t the answer, then what is? Recognizing that consumers have agency and are responsible for their own decisions might be a start. If you don’t like Facebook, close your account. Want a search engine that protects your privacy? Try DuckDuckGo. If YouTube and Spotify’s suggestions don’t appeal to you, create your own playlists and turn off the autoplay functions. The digital world has given us more choice than we could ever have dreamt of; but this comes with responsibility. Both Margrethe Vestager and the European institutions have often seemed oblivious to this reality. 

If the EU wants to turn itself into a digital economy powerhouse, it will have to switch towards light-touch regulation that allows firms to experiment with disruptive services, flexible employment options, and novel monetization strategies. But getting there requires a fundamental rethink — one that the EU’s previous leadership refused to contemplate. Margrethe Vestager’s dual role within the next Commission suggests that change isn’t coming any time soon.

[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

[Image: Samsung SGH-F480V controller board, featuring a Qualcomm MSM6280 chip]

In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two decisions in the EU; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the document, which runs to more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said this much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component-level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was contrary to Federal Circuit law. Instead, it held that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are significant advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.
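To make this more concrete, consider a minimal numerical sketch (the chip price, device prices, and royalty rate below are invented assumptions, not actual licensing terms). A single rate applied to the end-device price automatically scales the royalty with each segment’s market value, whereas replicating those payments from a uniform chip-price base would require a different rate for every segment:

```python
# A toy sketch of the royalty-base argument. All prices and rates are
# hypothetical assumptions chosen for arithmetic clarity.

CHIP_PRICE = 20.0  # the same modem chip sells at one price in every segment

# Heterogeneous end devices incorporating that chip
DEVICE_PRICES = {
    "budget phone": 150.0,
    "flagship phone": 900.0,
    "connected vehicle": 30_000.0,
}

DEVICE_RATE = 0.03  # one ad valorem rate applied to the end-device price

for device, price in DEVICE_PRICES.items():
    royalty = DEVICE_RATE * price
    # The chip-based rate needed to replicate this royalty varies wildly
    # across segments, inviting "discrimination" objections under FRAND.
    implied_chip_rate = royalty / CHIP_PRICE
    print(f"{device}: royalty {royalty:.2f}, "
          f"implied chip-based rate {implied_chip_rate:.0%}")
```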

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting-edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
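A stylized example illustrates the point (the bundle value of 100 is an arbitrary assumption). With perfect complements, removing any single component destroys the value of the whole bundle, so each component’s incremental contribution equals the full 100:

```python
# Perfect complements: each component's *incremental* contribution
# equals the full value of the bundle. Numbers are purely illustrative.

def bundle_value(has_modem: bool, has_camera: bool) -> float:
    """A stylized handset that is worth 100 only when complete."""
    return 100.0 if (has_modem and has_camera) else 0.0

# Incremental contribution of the modem, given the camera is present:
print(bundle_value(True, True) - bundle_value(False, True))  # 100.0
# Incremental contribution of the camera, given the modem is present:
print(bundle_value(True, True) - bundle_value(True, False))  # 100.0
# Each component "contributes 100%", so apportioning value by component
# cost is misleading.
```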

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price of a technology is almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

Further complicating matters, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the network effects story remains dominant in many quarters. In that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.
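The logic behind this purported failure is easy to sketch. In the toy model below (all payoff numbers are illustrative assumptions), a user values a network for its standalone quality plus a term that grows with the number of fellow users. An individual deciding alone rationally sticks with the incumbent standard, even though a coordinated switch would leave everyone better off:

```python
# A minimal sketch of "excess inertia" as a coordination problem.
# Quality levels and network weights are illustrative assumptions.

OLD_QUALITY = 1.0     # standalone value of the incumbent standard
NEW_QUALITY = 1.5     # the entrant is intrinsically superior
NETWORK_WEIGHT = 0.1  # extra value per fellow user on the same network

def payoff(quality: float, fellow_users: int) -> float:
    """Utility of a network: standalone quality plus network effects."""
    return quality + NETWORK_WEIGHT * fellow_users

installed_base = 20  # everyone currently uses the old standard

# Deciding alone, a user compares the full old network with an empty
# new one, and rationally stays put...
print(payoff(OLD_QUALITY, installed_base))  # 3.0
print(payoff(NEW_QUALITY, 0))               # 1.5

# ...even though a coordinated switch would make everyone better off.
print(payoff(NEW_QUALITY, installed_base))  # 3.5
```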

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.
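A back-of-the-envelope sketch shows how splitting a referral fee can flip an individual switching decision (all figures are invented for illustration):

```python
# Sketch: splitting a referral fee can internalize the adoption
# externality. All figures are invented for illustration.

SWITCHING_COST = 2.0   # cost borne by the user who leaves the old network
STANDALONE_GAIN = 1.0  # the new platform's intrinsic advantage to that user
REFERRAL_FEE = 3.0     # paid by the platform to the referring early adopter
FEE_SHARE = 0.5        # fraction of the fee passed on to the referred user

# Without a transfer, switching does not pay for the referred user:
print(STANDALONE_GAIN - SWITCHING_COST)  # -1.0

# With the split, both parties come out ahead:
transfer = FEE_SHARE * REFERRAL_FEE
print(STANDALONE_GAIN - SWITCHING_COST + transfer)  # 0.5 for the switcher
print((1 - FEE_SHARE) * REFERRAL_FEE)               # 1.5 for the referrer
```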

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information, etc. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, workplace messaging platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin with, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted a similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book, and avoid blackboard economics in favor of a more granular approach.

Last month, the European Commission slapped another fine upon Google for infringing European competition rules (€1.49 billion this time). This brings Google’s contribution to the EU budget to a dizzying total of €8.25 billion (to put this into perspective, the total EU budget for 2019 is €165.8 billion). Given this massive number, and the geographic location of Google’s headquarters, it is perhaps not surprising that some high-profile commentators, including former President Obama and President Trump, have raised concerns about potential protectionism on the Commission’s part.

In a new ICLE Issue Brief, we question whether there is any merit to these claims of protectionism. We show that, since the entry into force of Regulation 1/2003 (the main piece of legislation that implements the competition provisions of the EU treaties), US firms have borne the lion’s share of monetary penalties imposed by the Commission for breaches of competition law.

For instance, US companies have been fined a total of €10.91 billion by the European Commission, compared to €1.17 billion for their European counterparts.

Although this discrepancy seems to point towards protectionism, we believe that the case is not so clear-cut. The large fines paid by US firms are notably driven by a small subset of decisions in the tech sector, where the plaintiffs were also American companies. Tech markets also exhibit various features which tend to inflate the amount of fines.

Despite the plausibility of these potential alternative explanations, there may still be some legitimacy to the allegations of protectionism. The European Commission is, by design, a political body. One may thus question the extent to which Europe’s paucity of tech sector giants is driving the Commission’s ideological preference for tech-sector intervention and the protection of the industry’s small competitors.

Click here to read the full article.

The dust has barely settled on the European Commission’s record-breaking €4.3 billion Google Android fine, but already the European Commission is gearing up for its next high-profile case. Last month, Margrethe Vestager dropped a competition bombshell: the European watchdog is looking into the behavior of Amazon. Should the Commission decide to move forward with the investigation, Amazon will likely join other US tech firms such as Microsoft, Intel, Qualcomm and, of course, Google, all of which have been on the receiving end of European competition enforcement.

The Commission’s move – though informal at this stage – is not surprising. Over the last couple of years, Amazon has become one of the world’s largest and most controversial companies. The animosity against it is exemplified in a paper by Lina Khan, which uses the example of Amazon to highlight the numerous ills that allegedly plague modern antitrust law. The paper is widely regarded as the starting point of the so-called “hipster antitrust” movement.

But is there anything particularly noxious about Amazon’s behavior, or is it just the latest victim of a European crusade against American tech companies?

Where things stand so far

As is often the case in such matters, publicly available information regarding the Commission’s “probe” (the European watchdog has yet to open a formal investigation) is particularly thin. What we know so far comes from a number of declarations made by Margrethe Vestager (here and here) and a leaked questionnaire that was sent to Amazon’s rivals. Going on this limited information, it appears that the Commission is concerned about the manner in which Amazon uses the data that it gathers from its online merchants. In Vestager’s own words:

The question here is about the data, because if you as Amazon get the data from the smaller merchants that you host […] do you then also use this data to do your own calculations? What is the new big thing, what is it that people want, what kind of offers do they like to receive, what makes them buy things.

These concerns relate to the fact that Amazon acts as both a retailer in its own right and a platform for other retailers, which allegedly constitutes a “conflict of interest”. As a retailer, Amazon sells a wide range of goods directly to consumers. Meanwhile, its marketplace platform enables third-party merchants to offer their goods in exchange for referral fees when items are sold (these fees typically range from 8% to 15%, depending on the type of good). Merchants can either fulfil these orders themselves or opt for fulfilment by Amazon, in which case Amazon handles storage and shipping. As of 2017, more than 50% of units sold on the Amazon marketplace were sold by third-party sellers, although Amazon derived three times more revenue from its own sales than from those of third parties (note that Amazon Web Services is still by far its largest source of profits).

Mirroring concerns raised by Khan, the Commission worries that Amazon uses the data it gathers from third party retailers on its platform to outcompete them. More specifically, the concern is that Amazon might use this data to identify and enter the most profitable segments of its online platform, excluding other retailers in the process (or deterring them from joining the platform in the first place). Although there is some empirical evidence to support such claims, it is far from clear that this is in any way harmful to competition or consumers. Indeed, the authors of the paper that found evidence in support of the claims note:

Amazon is less likely to enter product spaces that require greater seller efforts to grow, suggesting that complementors’ platform‐specific investments influence platform owners’ entry decisions. While Amazon’s entry discourages affected third‐party sellers from subsequently pursuing growth on the platform, it increases product demand and reduces shipping costs for consumers.

Thou shalt not punish efficient behavior

The question is whether Amazon’s use of data on rivals’ sales to outcompete them should raise competition concerns. After all, this is a standard practice in the brick-and-mortar industry, where most large retailers use house brands to go after successful, high-margin third-party brands. Some, such as Costco, even eliminate some third-party products from their shelves once they have a successful own-brand product. Granted, as Khan observes, Amazon may be doing this more effectively because it has access to vastly superior data. But does that somehow make Amazon’s practice harmful to social welfare? Absent further evidence, I believe not.

The basic problem is the following. Assume that Amazon does indeed have a monopoly in the market for online retail platforms (or, in other words, that the Amazon marketplace is a bottleneck for online retailers). Why would it move into direct retail competition against its third party sellers if it is less efficient than them? Amazon would either have to sell at a loss or hope that consumers saw something in its products that warrants a higher price. A more profitable alternative would be to stay put and increase its fees. It could thereby capture all the profits of its independent retailers. Not that Amazon would necessarily want to do so, as this could potentially deter other retailers from joining its platform. The upshot is that Amazon has little incentive to exclude more efficient retailers.

Astute readers will have observed that this is simply a restatement of the Chicago school’s single monopoly profit theorem, which broadly holds that, absent efficiencies, a monopolist in one line of commerce cannot increase its profits by entering the competitive market for a complementary good. Although the theory has drawn some criticism, it remains a crucial starting point with which enforcers must contend before they conclude that a monopolist’s behavior is anticompetitive.
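The arithmetic behind the theorem is straightforward. In the hypothetical below (all values are invented assumptions), a platform facing a more efficient independent retailer earns more by raising its referral fee than by displacing that retailer with a costlier in-house operation:

```python
# A toy illustration of the single-monopoly-profit logic.
# All values are invented assumptions chosen for clarity.

consumer_value = 100.0       # what a buyer will pay for the final good
retailer_cost = 60.0         # the efficient independent seller's cost
platform_retail_cost = 70.0  # the platform's (higher) cost if it sells itself

# Option 1: stay a pure platform and raise the per-unit fee to capture
# (almost) all of the efficient retailer's margin.
max_fee = consumer_value - retailer_cost
print(max_fee)  # 40.0 per unit accrues to the platform

# Option 2: exclude the retailer and sell directly at a higher cost.
own_margin = consumer_value - platform_retail_cost
print(own_margin)  # 30.0 per unit, strictly worse

# Displacing a more efficient rival shrinks the surplus the platform can
# capture; entry pays only if the platform's own costs are lower.
```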

So why does Amazon move into retail segments that are already occupied by its rivals? The most likely explanation is simply that it can source and sell these goods more efficiently than them, and that these efficiencies cannot be achieved through contracts with the said rivals. Once we accept the possibility that Amazon is simply more efficient, the picture changes dramatically. The sooner it overthrows less efficient rivals the better. Doing so creates valuable surplus that can flow to either itself or its consumers. This is true regardless of whether Amazon has a marketplace monopoly or not. Even if it does have a monopoly (which is doubtful given competition from the likes of Zalando, AliExpress, Google Search and eBay), at least some of these efficiencies will likely be passed on to consumers. Such a scenario is also perfectly compatible with increased profits for Amazon. The real test is whether output increases when Amazon enters segments that were previously occupied by rivals.

Of course, the usual critiques voiced against the “Single Monopoly Profit” theory apply here. It is plausible that, by excluding its retail rivals, Amazon is simply seeking to protect its alleged platform monopoly. However, the anecdotal evidence that has been raised thus far does not support this conclusion.

But what about innovation?

Possibly sensing the weakness of the “inefficiency” line of argument against Amazon, critics will likely put forward a second theory of harm. The claim is that by capturing the rents of potentially innovative retailers, Amazon may hamper their incentives to innovate and will therefore harm consumer choice. Margrethe Vestager intimated as much in a Bloomberg interview. Though this framing might seem tempting at first, it falters under close inspection.

The effects of Amazon’s behavior could first be framed in terms of appropriability — that is, the extent to which an innovator captures the social benefits of its innovation. The higher its share of those benefits, the larger its incentives to innovate. It is plausible that, by forcing out its retail rivals, Amazon is reducing the returns that they earn on their potential innovations.

Another potential framing is that of holdup theory. Applied to this case, one could argue that rival retailers made sunk investments (potentially innovation-related) to join the Amazon platform, and that Amazon is behaving opportunistically by capturing their surplus. With hindsight, merchants might thus have opted to stay out of the Amazon marketplace.

Unfortunately for Amazon’s critics, there are numerous objections to these two framings. For a start, the business implication of both the appropriability and holdup theories is that firms can and should take sensible steps to protect their investments. The recent empirical paper mentioned above stresses that these actions are critical for the sake of Amazon’s retailers.

Potential solutions abound. Retailers could in principle enter into long-term exclusivity agreements with their suppliers (which would keep Amazon out of the market if there are no alternative suppliers). Alternatively, they could sign non-compete clauses with Amazon, exchange assets, or even outright merge. In fact, there is at least some evidence of this last possibility occurring, as Amazon has acquired some of its online retailers. The fact that some retailers have not opted for these safety measures (or other methods of appropriability) suggests that they either don’t perceive a threat or are unwilling to make the necessary investments. It might also be due to bad business judgement on their part.

Which brings us to the big question. Should competition law step into the breach in those cases where firms have refused to take even basic steps to protect their investments? The answer is probably no.

For a start, condoning this poor judgement encourages firms to rely on competition enforcement rather than private solutions to solve appropriability and holdup issues. This is best understood with reference to moral hazard. By insuring firms against the capture of their profits, competition authorities disincentivize all forms of risk-mitigation on the part of those firms. This will ultimately raise enforcement costs (as firms become increasingly reliant on the antitrust system for protection).

It is also informationally much more burdensome, as authorities will systematically have to rule on the appropriate share of profits between parties to a case.

Finally, overprotecting these investments would go against the philosophy of the European Court of Justice’s Huawei ruling. Albeit in the specific context of injunctions relating to SEPs, the Court conditioned competition liability on firms showing that they have taken a series of reasonable steps to sort out their disputes privately.

Concluding remarks

This is not to say that competition intervention should categorically be proscribed, but rather that the capture of a retailer’s investments by Amazon is an insufficient condition for enforcement action. Instead, the Commission should question whether Amazon’s actions are truly detrimental to consumer welfare and output. Absent strong evidence that an excluded retailer offered superior products, or that Amazon’s move was merely a strategic play to prevent entry, competition authorities should let the chips fall where they may.

As things stand, there is simply no evidence to indicate that anything out of the ordinary is occurring on the Amazon marketplace. By shining the spotlight on Amazon, the Commission is putting itself under tremendous political pressure to move forward with a formal investigation (all the more so, given the looming European Parliament elections). This is regrettable, as there are surely more pressing matters for the European regulator to deal with. The Commission would thus do well to recall the words of Shakespeare in the Merchant of Venice: “All that glisters is not gold”. Applied in competition circles this translates to “all that is big is not inefficient”.

The gist of these arguments is simple. The Amazon / Whole Foods merger would lead to the exclusion of competitors, with Amazon leveraging its swaths of data and pricing below cost. All of this raises a simple question: have these prophecies come to pass?

The problem with antitrust populism is not just that it leads to unfounded predictions regarding the negative effects of a given business practice. It also ignores the significant gains which consumers may reap from these practices. The Amazon / Whole Foods merger offers a case in point.


Our story begins on the morning of January 9, 2007. Few people knew it at the time, but the world of wireless communications was about to change forever. Steve Jobs walked on stage wearing his usual turtleneck, and proceeded to reveal the iPhone. The rest, as they say, is history. The iPhone moved the wireless communications industry towards a new paradigm. No more physical keyboards, clamshell bodies, and protruding antennae. All of these were replaced by a beautiful black design, a huge touchscreen (3.5” was big for that time), a rear-facing camera, and (a little bit later) a revolutionary new way to consume applications: the App Store. Sales soared and Apple’s stock started an upward trajectory that would see it become one of the world’s most valuable companies.

The story could very well have ended there. If it had, we might all be using iPhones today. However, years before, Google had commenced its own march into the wireless communications space by purchasing a small startup called Android. A first phone had initially been slated for release in late 2007. But Apple’s iPhone announcement sent Google back to the drawing board. It took Google and its partners until 2010 to come up with a competitive answer – the Google Nexus One produced by HTC.

Understanding the strategy that Google put in place during this three-year timespan is essential to understanding the European Commission’s Google Android decision.

How to beat one of the great innovations?

In order to overthrow — or even merely compete with — the iPhone, Google faced the same dilemma that most second-movers have to contend with: imitate or differentiate. Its solution was a mix of both. It took the touchscreen, camera, and applications, but departed on one key aspect. Whereas Apple controls the iPhone from end-to-end, Google opted for a licensed, open-source operating system that substitutes a more-decentralized approach for Apple’s so-called “walled garden.”

Google and a number of partners founded the Open Handset Alliance (“OHA”) in November 2007. This loose association of network operators, software companies and handset manufacturers became the driving force behind the Android OS. Through the OHA, Google and its partners have worked to develop minimal specifications for OHA-compliant Android devices in order to ensure that all levels of the device ecosystem — from device makers to app developers — function well together. As its initial press release boasts, through the OHA:

Handset manufacturers and wireless operators will be free to customize Android in order to bring to market innovative new products faster and at a much lower cost. Developers will have complete access to handset capabilities and tools that will enable them to build more compelling and user-friendly services, bringing the Internet developer model to the mobile space. And consumers worldwide will have access to less expensive mobile devices that feature more compelling services, rich Internet applications and easier-to-use interfaces — ultimately creating a superior mobile experience.

The open source route has a number of advantages — notably the improved division of labor — but it is not without challenges. One key difficulty lies in coordinating and incentivizing the dozens of firms that make up the alliance. Google must not only keep the diverse Android ecosystem directed toward a common, compatible goal, it also has to monetize a product that, by its very nature, is given away free of charge. It is Google’s answers to these two problems that set off the Commission’s investigation.

The first problem is a direct consequence of Android’s decentralization. Whereas there are only a handful of iPhones (the couple of models that Apple markets at any given time) running the same operating system, Android comes in a jaw-dropping array of flavors. Some devices are produced by Google itself; others are the fruit of high-end manufacturers such as Samsung and LG; there are also so-called “flagship killers” like OnePlus, and budget phones from the likes of Motorola and Honor (one of Huawei’s brands). The differences don’t stop there. Manufacturers like Samsung, Xiaomi, and LG (to name but a few) have tinkered with the basic Android setup. Samsung phones heavily incorporate its Bixby virtual assistant, while Xiaomi packs in a novel user interface. The upshot is that the Android marketplace is tremendously diverse.

Managing this variety is challenging, to say the least (preventing a project from unravelling into a myriad of forks is a perennial issue for open source software). Google and the OHA have come up with an elegant solution. The alliance penalizes so-called “incompatible” devices — that is, handsets whose software or hardware stray too far from a predetermined set of specifications. When this is the case, Google may refuse to license its proprietary applications (most notably the Play Store). This minimum level of uniformity ensures that apps will run smoothly on all devices. It also provides users with a consistent experience (thereby protecting the Android brand) and reduces the cost of developing applications for Android. Unsurprisingly, Android developers have lauded these “anti-fragmentation” measures, branding the Commission’s case a disaster.
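
To make the mechanism concrete, here is a minimal sketch, in Python, of how such a compatibility gate could be modeled. The field names and thresholds are illustrative assumptions (Google’s actual criteria are set out in its compatibility definition documents and test suites); the point is only the gating logic: no compliance, no proprietary apps.

```python
# Minimal sketch of an "anti-fragmentation" gate: a device must meet
# a baseline spec before proprietary apps are licensed to it.
# The field names and threshold below are illustrative assumptions.

MIN_API_LEVEL = 19  # assumed minimum platform API version

def eligible_for_proprietary_apps(device: dict) -> bool:
    """A device is licensed the proprietary apps only if it meets the
    baseline spec and passes the compatibility tests."""
    return (
        device.get("api_level", 0) >= MIN_API_LEVEL
        and device.get("passes_compat_tests", False)
    )

# A compliant device gets the Play Store; a heavily modified fork does not.
compliant_phone = {"api_level": 21, "passes_compat_tests": True}
heavy_fork = {"api_level": 21, "passes_compat_tests": False}

print(eligible_for_proprietary_apps(compliant_phone))  # True
print(eligible_for_proprietary_apps(heavy_fork))       # False
```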

A second important problem stems from the fact that the Android OS is an open source project. Device manufacturers can thus license the software free of charge. This is no small advantage. It shaves precious dollars from the price of Android smartphones, thus opening up the budget end of the market. Although there are numerous factors at play, it is worth noting that a top-of-the-range Samsung Galaxy S9+ is roughly 30% cheaper ($819) than its Apple counterpart, the iPhone X ($1,165).
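
For what it is worth, the rough arithmetic behind that 30% figure:

\[
\frac{1165 - 819}{1165} \approx 0.297 \approx 30\%
\]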

Offering a competitive operating system free of charge might provide a fantastic deal for consumers, but it poses obvious business challenges. How can Google and the other members of the OHA earn a return on the significant amounts of money poured into developing, improving, and marketing the Android OS and Android devices? As is often the case with open source projects, the answer lies in complementarities. Google produces the Android OS in the hope that it will boost users’ consumption of its profitable, ad-supported services (Google Search in particular). This is sometimes referred to as a loss leader or complementary goods strategy.
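
A toy calculation helps fix ideas. Every number below is an invented assumption, not real data; the point is simply that a free OS reaching a much larger user base can out-earn a licensed one once the complementary ad revenue is counted.

```python
# Toy model of the complementary-goods (loss leader) logic.
# Every number here is an invented assumption, not real data.

def profit(license_fee: float, users: float,
           ad_revenue_per_user: float, dev_cost: float) -> float:
    """Profit = licensing revenue + complementary ad revenue - OS cost."""
    return license_fee * users + ad_revenue_per_user * users - dev_cost

DEV_COST = 1_000_000_000   # assumed fixed cost of developing the OS
AD_REV = 15.0              # assumed yearly ad revenue per active user

# Assumed demand: charging for the OS shrinks the user base.
paid_os = profit(license_fee=10.0, users=100_000_000,
                 ad_revenue_per_user=AD_REV, dev_cost=DEV_COST)
free_os = profit(license_fee=0.0, users=500_000_000,
                 ad_revenue_per_user=AD_REV, dev_cost=DEV_COST)

print(f"paid OS profit: ${paid_os:,.0f}")  # $1,500,000,000
print(f"free OS profit: ${free_os:,.0f}")  # $6,500,000,000
```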

Google uses two important sets of contractual provisions to cement this loss leader strategy. First, it seemingly bundles a number of proprietary applications together: manufacturers must pre-load the Google Search and Chrome apps in order to obtain the Play Store app (the linchpin of the Android ecosystem). Second, Google has concluded a number of “revenue sharing” deals with manufacturers and network operators. These companies receive monetary compensation when Google Search is displayed prominently on a user’s home screen. In effect, they receive a cut of the marginal revenue that the search bar generates for Google. Both of these measures ultimately nudge users — but do not force them, as neither prevents users from installing competing apps — into using Google’s most profitable services.
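
A stylized sketch of the second provision, again with invented parameters (real revenue-sharing terms are confidential): the manufacturer’s payment is simply a share of the search revenue attributable to the prominent placement.

```python
# Stylized revenue-sharing deal; all parameters are invented assumptions.

REVENUE_PER_QUERY = 0.03  # assumed average ad revenue per search ($)
OEM_SHARE = 0.25          # assumed share paid to the manufacturer

def oem_payment(queries_from_placement: int) -> float:
    """The manufacturer's cut of the marginal revenue generated by
    the prominently placed search bar."""
    return OEM_SHARE * REVENUE_PER_QUERY * queries_from_placement

# e.g., 2 billion extra queries attributable to the home-screen placement
print(f"${oem_payment(2_000_000_000):,.0f}")  # $15,000,000
```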

Readers would be forgiven for thinking that this is a win-win situation. Users get a competitive product free of charge, while Google and other members of the OHA earn enough money to compete against Apple.

The Commission is of another mind, however.

Commission’s hubris

The European Commission believes that Google is hurting competition. Though the text of the decision is not yet available, the thrust of its argument is that Google’s anti-fragmentation measures prevent software developers from launching competing OSs, while the bundling and revenue sharing both thwart rival search engines.

This analysis runs counter to some rather obvious facts:

  • For a start, the Android ecosystem is vibrant. Numerous firms have launched forked versions of Android, both with and without Google’s apps. Amazon’s Fire line of devices is a notable example.
  • Second, although Google’s behavior does have an effect on the search engine market, there is nothing anticompetitive about it. Yahoo could very well have avoided its high-profile failure if, way back in 2005, it had understood the importance of the mobile internet. At the time, it still had a 30% market share, compared to Google’s 36%. Firms that fail to seize upon business opportunities will fall out of the market. This is not a bug; it is possibly the most important feature of market economies: the process reveals which products consumers prefer and stops resources from being allocated to less valuable propositions.
  • Last but not least, Google’s behavior does not prevent other search engines from placing their own search bars or virtual assistants on smartphones. This is essentially what Samsung has done by ditching Google’s assistant in favor of its Bixby service. In other words, Google is merely competing with other firms to place key apps on or near the home screen of devices.

Even if the Commission’s reasoning where somehow correct, the competition watchdog is using a sledgehammer to crack a nut. The potential repercussions for Android, the software industry, and European competition law are great:

  • For a start, the Commission risks significantly weakening Android’s competitive position relative to Apple. Android is a complex ecosystem. The idea that it is possible to make incremental changes to its strategy without threatening the viability of the whole is a sign of the Commission’s hubris.
  • More broadly, the harsh treatment of Google could have significant incentive effects for other tech platforms. As others have already pointed out, the Commission’s decision rests on the idea that dominant firms should not be allowed to favor their own services over those of rivals. Taken at face value, this anti-discrimination policy will push firms to design closed platforms: if rivals are excluded from the very start, there is no one against whom to discriminate, and antitrust watchdogs are kept at bay. The irony is that the Commission is thus acting against Google’s marginal preference for its own services, while Apple’s far-more-substantial preferencing of its own services goes unchallenged. Moving to a world of only walled gardens might harm users and innovators alike.

Over the coming days and weeks, many will jump to the Commission’s defense. They will see its action as a necessary step against the abstract “power” of Silicon Valley’s tech giants. Rivals will feel vindicated. But when all is said and done, there seems to be little doubt that the decision is misguided. The Commission will have struck a blow to the heart of the most competitive offering in the smartphone space. And consumers will be the biggest losers.

This is not what the competition laws were intended to achieve.