In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.
Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .
Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.
Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But, the facts may not fit the theory.
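The economics behind that folklore can be made concrete with a little arithmetic. The sketch below uses a textbook linear-demand model with illustrative numbers of my own (nothing here comes from the post): an integrated firm sets one markup, while a separate upstream supplier and downstream restaurant each add their own markup, raising the final price and shrinking combined profit.

```python
# Double marginalization in a linear-demand model: Q = a - b*P,
# upstream marginal cost c. Parameters are purely illustrative.
a, b, c = 100.0, 1.0, 20.0

# Vertically integrated firm: choose P to maximize (P - c) * (a - b*P).
p_int = (a + b * c) / (2 * b)           # integrated retail price
q_int = a - b * p_int
profit_int = (p_int - c) * q_int        # single combined profit

# Separate firms: upstream sets wholesale price w; the downstream
# firm then marks up again, charging P(w) = (a + b*w) / (2*b).
w = (a + b * c) / (2 * b)               # upstream's optimal wholesale price
p_sep = (a + b * w) / (2 * b)           # retail price after the second markup
q_sep = a - b * p_sep
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep  # two margins, combined

print(f"integrated: P={p_int}, profit={profit_int}")
print(f"separate:   P={p_sep}, combined profit={profit_sep}")
```

With these numbers, integration lowers the consumer price (60 vs. 80) while raising the combined profit (1600 vs. 1200) — exactly the efficiency gain the folklore attributes to a soda maker buying its restaurant customers.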
PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).
In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.
In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.
Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy was a failure, it seems odd that the company would continue acquisitions into the early 1990s.
It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.
But, what if vertical efficiencies were not the primary reason for the acquisitions?
Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.
Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores already served Pepsi products. Even so, “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.
Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.
Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases.
The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged buyout era added financial pressure. Many restaurant groups were filing for bankruptcy and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.
Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.
These days, lacking a coherent legal theory presents no challenge to the would-be antitrust crusader. In a previous post, we noted how Shaoul Sussman’s predatory pricing claims against Amazon lacked a serious legal foundation. Sussman has returned with a new post, trying to build out his fledgling theory, but fares little better under even casual scrutiny.
According to Sussman, Amazon’s allegedly anticompetitive
conduct not only cemented its role as the primary destination for consumers that shop online but also helped it solidify its power over brands.
Further, the company
was willing to go to great lengths to ensure brand availability and inventory, including turning to the grey market, recruiting unauthorized sellers, and even selling diverted goods and counterfeits to its customers.
Sussman is trying to make out a fairly convoluted predatory pricing case, but once again without ever truly connecting the dots in a way that develops a cognizable antitrust claim. According to Sussman:
Amazon sold products as a first-party to consumers on its platform at below average variable cost, and Amazon recently began to recoup its losses by shifting the bulk of the transactions that occur on the website to its marketplace, where millions of third-party sellers pay hefty fees that enable Amazon to take a deep cut of every transaction.
Sussman now bases this claim on an allegation that Amazon relied on “grey market” sellers on its platform, the presence of which forces legitimate brands onto the Amazon Marketplace. Moreover, Sussman claims that — somehow — these brands coming on board on Amazon’s terms forces those brands to raise prices elsewhere, and the net effect of this process at scale is that prices across the economy have risen.
As we detail below, Sussman’s chimerical argument depends on conflating unrelated concepts and relies on non-public anecdotal accounts to piece together an argument that, even if you squint at it, doesn’t make out a viable theory of harm.
Conflating legal reselling and illegal counterfeit selling as the “grey market”
The biggest problem with Sussman’s new theory is that he conflates pro-consumer unauthorized reselling and anti-consumer illegal counterfeiting, erroneously labeling both the “grey market”:
Amazon had an ace up its sleeve. My sources indicate that the company deliberately turned to and empowered the “grey market” — where both genuine, authentic goods and knockoffs are purchased and resold outside of brands’ intended distribution pipes — to dominate certain brands.
By definition, grey market goods are — as the link provided by Sussman states — “goods sold outside the authorized distribution channels by entities which may have no relationship with the producer of the goods.” Yet Sussman suggests this also encompasses counterfeit goods. This conflation is no minor problem for his argument. In general, the grey market is legal and beneficial for consumers. Brands such as Nike may try to limit the distribution of their products to channels the company controls, but they cannot legally prevent third parties from purchasing Nike products and reselling them on Amazon (or anywhere else).
This legal activity can increase consumer choice and can lead to lower prices, even though Sussman’s framing omits these key possibilities:
In the course of my conversations with former Amazon employees, some reported that Amazon actively sought out and recruited unauthorized sellers as both third-party sellers and first-party suppliers. Being unauthorized, these sellers were not bound by the brands’ policies and therefore outside the scope of their supervision.
In other words, Amazon actively courted third-party sellers who could bring legitimate goods, priced competitively, onto its platform. Perhaps this gives Amazon “leverage” over brands that would otherwise like to control the activities of legal resellers, but it’s exceedingly strange to try to frame this as nefarious or anticompetitive behavior.
Of course, we shouldn’t ignore the fact that there are also potential consumer gains when Amazon tries to restrict grey market activity by partnering with brands. But it is up to Amazon and the brands to determine through a contracting process when it makes the most sense to partner and control the grey market, or when consumers are better served by allowing unauthorized resellers. The point is: there is simply no reason to assume that either of these approaches is inherently problematic.
Yet, even when Amazon tries to restrict its platform to authorized resellers, it exposes itself to a whole different set of complaints. In 2018, the company made a deal with Apple to bring the iPhone maker onto its marketplace platform. In exchange for Apple selling its products directly on Amazon, the latter agreed to remove unauthorized Apple resellers from the platform. Sussman portrays this as a welcome development in line with the policy changes he recommends.
But news reports last month indicate the FTC is reviewing this deal for potential antitrust violations. One is reminded of Ronald Coase’s famous lament that he “had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down they said it was predatory pricing, and when they stayed the same they said it was tacit collusion.” It seems the same is true for Amazon and its relationship with the grey market.
Amazon’s incentive to remove counterfeits
What is illegal — and explicitly against Amazon’s marketplace rules — is selling counterfeit goods. Counterfeit goods destroy consumer trust in the Amazon ecosystem, which is why the company actively polices its listings for abuses. And as Sussman himself notes, when there is an illegal counterfeit listing, “Brands can then file a trademark infringement lawsuit against the unauthorized seller in order to force Amazon to suspend it.”
Sussman’s attempt to hang counterfeiting problems around Amazon’s neck belies the actual truth about counterfeiting: probably the most cost-effective way to stop counterfeiting is simply to prohibit all third-party sellers. Yet, a serious cost-benefit analysis of Amazon’s platforms could hardly support such an action (and would harm the small sellers that antitrust activists seem most concerned about).
But, more to the point, if Amazon’s strategy is to encourage piracy, it’s doing a terrible job. It engages in litigation against known pirates, and earlier this year it rolled out a suite of tools (called Project Zero) meant to help brand owners report and remove known counterfeits. As part of this program, according to Amazon, “brands provide key data points about themselves (e.g., trademarks, logos, etc.) and we scan over 5 billion daily listing update attempts, looking for suspected counterfeits.” And when a brand identifies a counterfeit listing, they can remove it using a self-service tool (without needing approval from Amazon).
Any large platform that tries to make it easy for independent retailers to reach customers is going to run into a counterfeit problem eventually. In his rush to discover some theory of predatory pricing to stick on Amazon, Sussman ignores the tradeoffs implicit in running a large platform that essentially democratizes retail:
Indeed, the democratizing effect of online platforms (and of technology writ large) should not be underestimated. While many are quick to disparage Amazon’s effect on local communities, these arguments fail to recognize that by reducing the costs associated with physical distance between sellers and consumers, e-commerce enables even the smallest merchant on Main Street, and the entrepreneur in her garage, to compete in the global marketplace.
In short, Amazon Marketplace is designed to make it as easy as possible for anyone to sell their products to Amazon customers. As the WSJ reported:
Counterfeiters, though, have been able to exploit Amazon’s drive to increase the site’s selection and offer lower prices. The company has made the process to list products on its website simple—sellers can register with little more than a business name, email and address, phone number, credit card, ID and bank account—but that also has allowed impostors to create ersatz versions of hot-selling items, according to small brands and seller consultants.
The existence of counterfeits is a direct result of policies designed to lower prices and increase consumer choice. Thus, we would expect some number of counterfeits to exist as a result of running a relatively open platform. The question is not whether counterfeits exist, but — at least in terms of Sussman’s attempt to use antitrust law — whether there is any reason to think that Amazon’s conduct with respect to counterfeits is actually anticompetitive. But, even if we assume for the moment that there is some plausible way to draw a competition claim out of the existence of counterfeit goods on the platform, his theory still falls apart.
There is both theoretical and empirical evidence for why Amazon is likely not engaged in the conduct Sussman describes. As a platform owner involved in a repeated game with customers, sellers, and developers, Amazon has an incentive to increase trust within the ecosystem. Counterfeit goods directly destroy that trust and likely decrease sales in the long run. If individuals can’t depend on the quality of goods on Amazon, they can easily defect to Walmart, eBay, or any number of smaller independent sellers. That’s why Amazon enters into agreements with companies like Apple to ensure there are only legitimate products offered. That’s also why Amazon actively sues counterfeiters in partnership with its sellers and brands, and also why Project Zero is a priority for the company.
Sussman relies on private, anecdotal claims while engaging in speculation that is entirely unsupported by public data
Much of Sussman’s evidence is “[b]ased on conversations [he] held with former employees, sellers, and brands following the publication of [his] paper”, which — to put it mildly — makes it difficult for anyone to take seriously, let alone address head on. Here’s one example:
One third-party seller, who asked to remain anonymous, was willing to turn over his books for inspection in order to illustrate the magnitude of the increase in consumer prices. Together, we analyzed a single product, of which tens of thousands of units have been sold since 2015. The minimum advertised price for this single product, at any and all outlets, has increased more than 30 percent in the past four years. Despite this fact, this seller’s margins on this product are tighter than ever due to Amazon’s fee increases.
Needless to say, sales data showing the minimum advertised price for a single product “has increased more than 30 percent in the past four years” is not sufficient to prove, well, anything. At minimum, showing an increase in prices above costs would require data from a large and representative sample of sellers. All we have to go on from the article is a vague anecdote representing — maybe — one data point.
Not only is Sussman’s own data impossible to evaluate, but he bases his allegations on speculation that is demonstrably false. For instance, he asserts that Amazon used its leverage over brands in a way that caused retail prices to rise throughout the economy. But his starting point assumption is flatly contradicted by reality:
To remedy this, Amazon once again exploited brands’ MAP policies. As mentioned, MAP policies effectively dictate the minimum advertised price of a given product across the entire retail industry. Traditionally, this meant that the price of a typical product in a brick and mortar store would be lower than the price online, where consumers are charged an additional shipping fee at checkout.
Sussman presents no evidence for the claim that “the price of a typical product in a brick and mortar store would be lower than the price online.” The widespread phenomenon of showrooming — when a customer examines a product at a brick-and-mortar store but then buys it for a lower price online — belies the notion that prices are higher online. One recent study by Nielsen found that “nearly 75% of grocery shoppers have used a physical store to ‘showroom’ before purchasing online.”
In fact, the company’s downward pressure on prices is so large that researchers now speculate that Amazon and other internet retailers are partially responsible for the low and stagnant inflation in the US over the last decade (dubbing this the “Amazon effect”). It is also curious that Sussman cites shipping fees as the reason prices are higher online while ignoring all the overhead costs of running a brick-and-mortar store which online retailers don’t incur. The assumption that prices are lower in brick-and-mortar stores doesn’t pass the laugh test.
Sussman can keep trying to tell a predatory pricing story about Amazon, but the more convoluted his theories get — and the less based in empirical reality they are — the less convincing they become. There is a predatory pricing law on the books, but it’s hard to bring a case because, as it turns out, it’s actually really hard to profitably operate as a predatory pricer. Speculating over complicated new theories might be entertaining, but it would be dangerous and irresponsible if these sorts of poorly supported theories were incorporated into public policy.
Ursula von der Leyen has just announced the composition of the next European Commission. For tech firms, the headline is that Margrethe Vestager will not only retain her job as the head of DG Competition, she will also oversee the EU’s entire digital markets policy in her new role as Vice-President in charge of digital policy. Her promotion within the Commission as well as her track record at DG Competition both suggest that the digital economy will continue to be the fulcrum of European competition and regulatory intervention for the next five years.
The regulation (or not) of digital markets is an extremely important topic. Not only do we spend vast swaths of both our professional and personal lives online, but firms operating in digital markets will likely employ an ever-increasing share of the labor force in the near future.
Likely recognizing the growing importance of the digital economy, the previous EU Commission intervened heavily in the digital sphere over the past five years. This resulted in a series of high-profile regulations (including the GDPR, the platform-to-business regulation, and the reform of EU copyright) and competition law decisions (most notably the Google cases).
Lauded by supporters of the administrative state, these interventions have drawn flak from numerous corners. This includes foreign politicians (especially Americans) who see in these measures an attempt to protect the EU’s tech industry from its foreign rivals, as well as free market enthusiasts who argue that the old continent has moved further in the direction of digital paternalism.
Vestager’s increased role within the new Commission, the EU’s heavy regulation of digital markets over the past five years, and early pronouncements from Ursula von der Leyen all suggest that the EU is in for five more years of significant government intervention in the digital sphere.
Vestager the slayer of Big Tech
During her five years as Commissioner for competition, Margrethe Vestager has repeatedly been called the most powerful woman in Brussels (see here and here), and it is easy to see why. Wielding the heavy hammer of European competition and state aid enforcement, she has relentlessly attacked the world’s largest firms, especially America’s so-called “Tech Giants”.
The record-breaking fines imposed on Google were probably her most high-profile victory. When Vestager entered office, in 2014, the EU’s case against Google had all but stalled. The Commission and Google had spent the best part of four years haggling over a potential remedy that was ultimately thrown out. Grabbing the bull by the horns, Margrethe Vestager made the case her own.
Five years, three infringement decisions, and €8.25 billion in fines later, Google probably wishes it had managed to keep the 2014 settlement alive. While Vestager’s supporters claim that justice was served, Barack Obama and Donald Trump, among others, branded her a protectionist (although, as Geoffrey Manne and I have noted, the evidence for this is decidedly mixed). Critics also argued that her decisions would harm innovation and penalize consumers (see here and here). Regardless, the case propelled Vestager into the public eye. It turned her into one of the most important political forces in Brussels. Cynics might even suggest that this was her plan all along.
But Google is not the only tech firm to have squared off with Vestager. Under her watch, Qualcomm was slapped with a total of €1.239 billion in fines. The Commission also opened an investigation into Amazon’s operation of its online marketplace. If previous cases are anything to go by, the probe will most probably end with a headline-grabbing fine. The Commission even launched a probe into Facebook’s planned Libra cryptocurrency, even though it has yet to be launched, and recent talk suggests it may never be. Finally, in the area of state aid enforcement, the Commission ordered Ireland to recover €13 billion in allegedly undue tax benefits from Apple.
Margrethe Vestager also initiated a large-scale consultation on competition in the digital economy. The ensuing report concluded that the answer was more competition enforcement. Its findings will likely be cited by the Commission as further justification to ramp up its already significant competition investigations in the digital sphere.
Outside of the tech sector, Vestager has shown that she is not afraid to adopt controversial decisions. Blocking the proposed merger between Siemens and Alstom notably drew the ire of Angela Merkel and Emmanuel Macron, as the deal would have created a European champion in the rail industry (a key political demand in Germany and France).
These numerous interventions all but guarantee that Vestager will not be pushing for light touch regulation in her new role as Vice-President in charge of digital policy. Vestager is also unlikely to put a halt to some of the “Big Tech” investigations that she herself launched during her previous spell at DG Competition. Finally, given her evident political capital in Brussels, it’s a safe bet that she will be given significant leeway to push forward landmark initiatives of her choosing.
Vestager the prophet
Beneath these attempts to rein in “Big Tech” lies a deeper agenda that is symptomatic of the EU’s current zeitgeist. Over the past couple of years, the EU has been steadily blazing a trail in digital market regulation (although much less so in digital market entrepreneurship and innovation). Underlying this push is a worldview that sees consumers and small startups as the uninformed victims of gigantic tech firms. True to form, the EU’s solution to this problem is more regulation and government intervention. This is unlikely to change given the Commission’s new (old) leadership.
If digital paternalism is the dogma, then Margrethe Vestager is its prophet. As Thibault Schrepel has shown, her speeches routinely call for digital firms to act “fairly”, and for policymakers to curb their “power”. According to her, it is our democracy that is at stake. In her own words, “you can’t sensibly talk about democracy today, without appreciating the enormous power of digital technology”. And yet, if history tells us one thing, it is that heavy-handed government intervention is anathema to liberal democracy.
The Commission’s Google decisions neatly illustrate this worldview. For instance, in Google Shopping, the Commission concluded that Google was coercing consumers into using its own services, to the detriment of competition. But the Google Shopping decision focused entirely on competitors, and offered no evidence showing actual harm to consumers (see here). Could it be that users choose Google’s products because they actually prefer them? Rightly or wrongly, the Commission went to great lengths to dismiss evidence that arguably pointed in this direction (see here, §506-538).
Other European forays into the digital space are similarly paternalistic. The General Data Protection Regulation (GDPR) assumes that consumers are ill-equipped to decide what personal information they share with online platforms. Cue a deluge of time-consuming consent forms and cookie-related pop-ups. The jury is still out on whether the GDPR has improved users’ privacy. But it has been extremely costly for businesses — American S&P 500 companies and UK FTSE 350 companies alone spent an estimated total of $9 billion to comply with the GDPR — and has at least temporarily slowed venture capital investment in Europe.
Likewise, the recently adopted Regulation on platform-to-business relations operates under the assumption that small firms routinely fall prey to powerful digital platforms:
Given that increasing dependence, the providers of those services [i.e. digital platforms] often have superior bargaining power, which enables them to, in effect, behave unilaterally in a way that can be unfair and that can be harmful to the legitimate interests of their business users and, indirectly, also of consumers in the Union. For instance, they might unilaterally impose on business users practices which grossly deviate from good commercial conduct, or are contrary to good faith and fair dealing.
Make what you will of the underlying merits of these individual policies, we should at least recognize that they are part of a greater whole, where Brussels is regulating ever greater aspects of our online lives — and not clearly for the benefit of consumers.
With Margrethe Vestager now overseeing even more of these regulatory initiatives, readers should expect more of the same. The Mission Letter she received from Ursula von der Leyen is particularly enlightening in that respect:
I want you to coordinate the work on upgrading our liability and safety rules for digital platforms, services and products as part of a new Digital Services Act….
I want you to focus on strengthening competition enforcement in all sectors.
A hard rain’s a-gonna fall… on Big Tech
Today’s announcements all but confirm that the EU will stay its current course in digital markets. This is unfortunate.
Digital firms currently provide consumers with tremendous benefits at no direct charge. A recent study shows that median users would need to be paid €15,875 to give up search engines for a year. They would also require €536 in order to forgo WhatsApp for a month, €97 for Facebook, and €59 to drop digital maps for the same duration.
By continuing to heap ever more regulations on successful firms, the EU risks killing the goose that laid the golden egg. This is not just a theoretical possibility. The EU’s policies have already put technology firms under huge stress, and it is not clear that this has always been outweighed by benefits to consumers. The GDPR has notably caused numerous foreign firms to stop offering their services in Europe. And the EU’s Google decisions have forced Google to start charging manufacturers for some of its apps. Are these really victories for European consumers?
It is also worth asking why there are so few European leaders in the digital economy. Not so long ago, European firms such as Nokia and Ericsson were at the forefront of the digital revolution. Today, with the possible exception of Spotify, the EU has fallen further down the global pecking order in the digital economy.
The EU knows this, and plans to invest €100 billion in order to boost European tech startups. But these sums will be all but wasted if excessive regulation threatens the long-term competitiveness of European startups.
So if more of the same government intervention isn’t the answer, then what is? Recognizing that consumers have agency and are responsible for their own decisions might be a start. If you don’t like Facebook, close your account. Want a search engine that protects your privacy? Try DuckDuckGo. If YouTube and Spotify’s suggestions don’t appeal to you, create your own playlists and turn off the autoplay functions. The digital world has given us more choice than we could ever have dreamt of; but this comes with responsibility. Both Margrethe Vestager and the European institutions have often seemed oblivious to this reality.
If the EU wants to turn itself into a digital economy powerhouse, it will have to switch towards light-touch regulation that allows firms to experiment with disruptive services, flexible employment options, and novel monetization strategies. But getting there requires a fundamental rethink — one that the EU’s previous leadership refused to contemplate. Margrethe Vestager’s dual role within the next Commission suggests that change isn’t coming any time soon.
[This post is the seventh in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]
[This post is authored by Alec Stapp, Research Fellow at the International Center for Law & Economics]
Should we break up Microsoft?
In all the talk of breaking up “Big Tech,” no one seems to mention the biggest tech company of them all. Microsoft’s market cap is currently higher than those of Apple, Google, Amazon, and Facebook. If big is bad, then, at the moment, Microsoft is the worst.
Apart from size, antitrust activists also claim that the structure and behavior of the Big Four — Facebook, Google, Apple, and Amazon — is why they deserve to be broken up. But they never include Microsoft, which is curious given that most of their critiques also apply to the largest tech giant:
Microsoft is big (current market cap exceeds $1 trillion)
Microsoft is dominant in narrowly-defined markets (e.g., desktop operating systems)
Microsoft is simultaneously operating and competing on a platform (i.e., the Microsoft Store)
Microsoft is a conglomerate capable of leveraging dominance from one market into another (e.g., Windows, Office 365, Azure)
Microsoft has its own “kill zone” for startups (196 acquisitions since 1994)
Microsoft operates a search engine that preferences its own content over third-party content (i.e., Bing)
Microsoft operates a platform that moderates user-generated content (i.e., LinkedIn)
To be clear, this is not to say that an antitrust case against Microsoft is as strong as the case against the others. Rather, it is to say that the cases against the Big Four on these dimensions are as weak as the case against Microsoft, as I will show below.
Big is bad
Tim Wu published a book last year arguing for more vigorous antitrust enforcement — including against Big Tech — called “The Curse of Bigness.” As you can tell by the title, he argues, in essence, for a return to the bygone era of “big is bad” presumptions. In his book, Wu mentions “Microsoft” 29 times, but only in the context of its 1990s antitrust case. On the other hand, Wu has explicitly called for antitrust investigations of Amazon, Facebook, and Google. It’s unclear why big should be considered bad when it comes to the latter group but not when it comes to Microsoft. Maybe bigness isn’t actually a curse, after all.
As the saying goes in antitrust, “Big is not bad; big behaving badly is bad.” This aphorism arose to counter erroneous reasoning during the era of structure-conduct-performance when big was presumed to mean bad. Thanks to an improved theoretical and empirical understanding of the nature of the competitive process, there is now a consensus that firms can grow large either via superior efficiency or by engaging in anticompetitive behavior. Size alone does not tell us how a firm grew big — so it is not a relevant metric.
Dominance in narrowly-defined markets
Microsoft is also dominant in the “professional networking platform” market after its acquisition of LinkedIn in 2016. And the legacy tech giant is still the clear leader in the “paid productivity software” market. (Microsoft’s Office 365 revenue is roughly 10x Google’s G Suite revenue.)
The problem here is obvious. These are overly narrow market definitions for conducting an antitrust analysis. Is it true that Facebook’s platforms are the only service that can connect you with your friends? Should we really restrict the productivity market to “paid”-only options (as the EU similarly did in its Android decision) when there are so many free options available? These questions are laughable. Proper market definition requires considering whether a hypothetical monopolist could profitably impose a small but significant and non-transitory increase in price (SSNIP). If not (which is likely the case in the narrow markets above), then we should employ a broader market definition in each case.
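To make the SSNIP logic concrete, the standard “critical loss” arithmetic asks how many sales a hypothetical monopolist could afford to lose before a small price increase became unprofitable. The following is a minimal sketch; the 5% increase, 40% margin, and 30% diversion figures are hypothetical, not estimates for any market discussed here:

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Fraction of unit sales a hypothetical monopolist can lose before a
    price increase of `ssnip` becomes unprofitable (standard formula X/(X+M),
    with X the percentage price increase and M the percentage margin)."""
    return ssnip / (ssnip + margin)

def ssnip_profitable(ssnip: float, margin: float, predicted_loss: float) -> bool:
    """A SSNIP is profitable -- and the candidate market well-defined --
    only if predicted lost sales fall short of the critical loss."""
    return predicted_loss < critical_loss(ssnip, margin)

# Hypothetical: a 5% price increase on a 40% margin.
print(round(critical_loss(0.05, 0.40), 3))  # prints 0.111
# If free substitutes would divert, say, 30% of sales, the SSNIP fails,
# suggesting a "paid-only" market definition is too narrow.
print(ssnip_profitable(0.05, 0.40, predicted_loss=0.30))  # prints False
```

When the predicted loss exceeds the critical loss, the hypothetical monopolist cannot profitably sustain the increase, which is the cue to broaden the candidate market.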
Simultaneously operating and competing on a platform
Elizabeth Warren likes to say that if you own a platform, then you shouldn’t both be an umpire and have a team in the game. Let’s put aside the problems with that flawed analogy for now. What she means is that you shouldn’t both run the platform and sell products, services, or apps on that platform (because it’s inherently unfair to the other sellers).
Warren’s solution to this “problem” would be to create a regulated class of businesses called “platform utilities” which are “companies with an annual global revenue of $25 billion or more and that offer to the public an online marketplace, an exchange, or a platform for connecting third parties.” Microsoft’s revenue last quarter was $32.5 billion, so it easily meets the first threshold. And Windows obviously qualifies as “a platform for connecting third parties.”
Just as in mobile operating systems, desktop operating systems are compatible with third-party applications. These third-party apps can be free (e.g., iTunes) or paid (e.g., Adobe Photoshop). Of course, Microsoft also makes apps for Windows (e.g., Word, PowerPoint, Excel, etc.). But the more you think about the technical details, the blurrier the line between the operating system and applications becomes. Is the browser an add-on to the OS or a part of it (as Microsoft Edge appears to be)? The most deeply embedded applications in an OS are simply called “features.”
Even though Warren hasn’t explicitly mentioned that her plan would cover Microsoft, it almost certainly would. Previously, she left Apple out of the Medium post announcing her policy, only to later tell a journalist that the iPhone maker would also be prohibited from producing its own apps. What Warren fails to acknowledge is that trying to police the line between a first-party platform and third-party applications would be a nightmare for companies and regulators alike, likely leading to less innovation and higher prices for consumers (as firms attempt to rebuild their previous bundles).
Leveraging dominance from one market into another
The core critique in Lina Khan’s “Amazon’s Antitrust Paradox” is that the very structure of Amazon itself is what leads to its anticompetitive behavior. Khan argues (in spite of the data) that Amazon uses profits in some lines of business to subsidize predatory pricing in other lines of businesses. Furthermore, she claims that Amazon uses data from its Amazon Web Services unit to spy on competitors and snuff them out before they become a threat.
Of course, this is similar to the theory of harm in Microsoft’s 1990s antitrust case, that the desktop giant was leveraging its monopoly from the operating system market into the browser market. Why don’t we hear the same concern today about Microsoft? As with both Amazon and Google, one could uncharitably describe Microsoft as extending its tentacles into as many sectors of the economy as possible. Here are some of the markets in which Microsoft competes (and note how the Big Four also compete in many of these same markets):
What these potential antitrust harms leave out are the clear consumer benefits from bundling and vertical integration. Microsoft’s relationships with customers in one market might make it the most efficient vendor in related — but separate — markets. It is unsurprising, for example, that Windows customers would also frequently be Office customers. Furthermore, the zero marginal cost nature of software makes it an ideal product for bundling, which redounds to the benefit of consumers.
The “kill zone” for startups
In a recent article for The New York Times, Tim Wu and Stuart A. Thompson criticize Facebook and Google for the number of acquisitions they have made. They point out that “Google has acquired at least 270 companies over nearly two decades” and “Facebook has acquired at least 92 companies since 2007”, arguing that allowing such a large number of acquisitions to occur is conclusive evidence of regulatory failure.
Microsoft has made 196 acquisitions since 1994, but the company receives no mention in the NYT article (or in most of the discussion around supposed “kill zones”). But the acquisitions by Microsoft or Facebook or Google are, in general, not problematic. They provide a crucial channel for liquidity in the venture capital and startup communities (the other channel being IPOs). According to the latest data from Orrick and Crunchbase, between 2010 and 2018, there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion.
By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. Making it harder for a startup to be acquired would not result in more venture capital investment (and therefore not in more IPOs), according to recent research by Gordon M. Phillips and Alexei Zhdanov. The researchers show that “the passage of a pro-takeover law in a country is associated with more subsequent VC deals in that country, while the enactment of a business combination antitakeover law in the U.S. has a negative effect on subsequent VC investment.”
As investor and serial entrepreneur Leonard Speiser said recently, “If the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.”
Search engine bias
Google is often accused of biasing its search results to favor its own products and services. The argument goes that if we broke them up, a thousand search engines would bloom and competition among them would lead to less-biased search results. While it is a very difficult — if not impossible — empirical question to determine what a “neutral” search engine would return, one attempt by Josh Wright found that “own-content bias is actually an infrequent phenomenon, and Google references its own content more favorably than other search engines far less frequently than does Bing.”
The report goes on to note that “Google references own content in its first results position when no other engine does in just 6.7% of queries; Bing does so over twice as often (14.3%).” Arguably, users of a particular search engine might be more interested in seeing content from that company because they have a preexisting relationship. But regardless of how we interpret these results, it’s clear this is not a frequent phenomenon.
So why is Microsoft being left out of the antitrust debate now?
One potential reason why Google, Facebook, and Amazon have been singled out for criticism of practices that seem common in the tech industry (and are often pro-consumer) is the prevailing business model in the journalism industry. Google and Facebook are by far the largest competitors in the digital advertising market, and Amazon is expected to be the third-largest player by next year, according to eMarketer. As Ramsi Woodcock pointed out, news publications are also competing for advertising dollars, the type of conflict of interest that usually would warrant disclosure if, say, a journalist held stock in a company they were covering.
Or perhaps Microsoft has successfully avoided the level of antitrust scrutiny directed at the Big Four because it is not primarily consumer-facing like Apple or Amazon, nor does it operate a platform with a significant amount of political speech via user-generated content (UGC) like Facebook or Google (YouTube). Yes, Microsoft moderates content on LinkedIn, but the public does not get outraged when deplatforming merely prevents someone from spamming their colleagues with requests “to add you to my professional network.”
Microsoft’s core areas are in the enterprise market, which allows it to sidestep the current debates about the supposed censorship of conservatives or unfair platform competition. To be clear, consumer-facing companies or platforms with user-generated content do not uniquely merit antitrust scrutiny. On the contrary, the benefits to consumers from these platforms are manifest. If this theory about why Microsoft has escaped scrutiny is correct, it means the public discussion thus far about Big Tech and antitrust has been driven by perception, not substance.
[This post is the sixth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]
[This post is authored by Thibault Schrepel, Faculty Associate at the Berkman Center at Harvard University and Assistant Professor in European Economic Law at Utrecht University School of Law.]
The pretense of ignorance
Over the last few years, I have published a series of antitrust conversations with Nobel laureates in economics. I have discussed big tech dominance with most of them, and although they have different perspectives, all of them agreed on one thing: they do not know what the effect of breaking up big tech would be. In fact, I have never spoken with any economist who was able to show me convincing empirical evidence that breaking up big tech would on net be good for consumers. The same goes for political scientists; I have never read any article that, taking everything into consideration, proves empirically that breaking up tech companies would be good for protecting democracies, if that is the objective (please note that I am not even discussing the fact that using antitrust law to do so would violate the rule of law; for more on the subject, click here).
This reminds me of Friedrich Hayek’s Nobel memorial lecture, in which he discussed the “pretense of knowledge.” He argued that some issues will always remain too complex for humans (even helped by quantum computers and the most advanced AI; that’s right!). Breaking up big tech is one such issue; it is simply impossible to consider simultaneously the microeconomic and macroeconomic impacts of such an enormous undertaking, which would affect, literally, billions of people. Not to mention the political, sociological and legal issues, all of which combined are beyond human understanding.
Ignorance + fear = fame
In the absence of clear-cut conclusions, here is why (I think) some officials are arguing for breaking up big tech. First, it may be that some of them actually believe it would be great. But I am sure we agree that beliefs should not be a valid basis for such actions. More realistically, the answer can be found in the work of another Nobel laureate, James Buchanan, and in particular his 1978 lecture in Vienna entitled “Politics Without Romance.”
In his lecture and the paper that emerged from it, Buchanan argued that while markets fail, so do governments. The latter is especially relevant insofar as top officials entrusted with public power may, occasionally at least, use that power to benefit their personal interests rather than the public interest. Thus, the presumption that government-imposed corrections for market failures always accomplish the desired objectives must be rejected. Taking that into consideration, it follows that the expected effectiveness of public action should always be established as precisely and scientifically as possible before taking action. Integrating these insights from Hayek and Buchanan, we must conclude that it is not possible to know whether the effects of breaking up big tech would on net be positive.
The question, then, is why some officials are arguing for breaking up tech giants in the absence of positive empirical evidence. Well, because defending such actions may help them achieve their personal goals. Often, it is more important for public officials to show their muscle and take action than to show great care about reaching a positive net result for society. This is especially true when it is practically impossible to evaluate the outcome due to the scale and complexity of the changes that ensue. That enables these officials to take credit for being bold while avoiding blame for the harms.
But for such a call to be profitable for public officials, they first must legitimize the potential action in the eyes of a majority of the public. Most consumers evidently like the services of tech giants, which is why it is crucial for the top officials engaged in such a strategy to demonize those companies and explain to consumers why they are wrong to enjoy them. Only then does defending the breakup of tech giants become politically valuable.
Some data, one trend
In a recent paper entitled “Antitrust Without Romance,” I have analyzed the speeches of the five current FTC commissioners, as well as the speeches of the current and three previous EU Competition Commissioners. What I found is an increasing trend to demonize big tech companies. In other words, public officials increasingly seek to prepare the general public for the idea that breaking up tech giants would be great.
In Europe, current Competition Commissioner Margrethe Vestager has sought to establish an opposition between the people (referred to with the pronoun “us”) and tech companies (referred to with the pronoun “them”) in more than 80% of her speeches. She further describes these companies as manipulating the public and unleashing violence. She says they “distort or fabricate information, manipulate people’s views and degrade public debate” and help “harmful, untrue information spread faster than ever, unleashing violence and undermining democracy.” She even says they create a “danger of death.” On this basis, she mentions the possibility of breaking them up (for more data about her speeches, see this link).
In the US, we did not observe a similar trend. Assistant Attorney General Makan Delrahim, who has responsibility for antitrust enforcement at the Department of Justice, describes the relationship between people and companies as being in opposition in fewer than 10% of his speeches. The same goes for most of the FTC commissioners (to see all the data about their speeches, see this link). The exceptions are FTC Chairman Joseph J. Simons, who describes companies’ behavior as “bad” from time to time (and underlines that consumers “deserve” better) and Commissioner Rohit Chopra, who describes the relationship between companies and the people as being in opposition to one another in 30% of his speeches. Chopra also frequently labels companies as “bad.” These are minor signs of big tech demonization compared to what is currently done by European officials. But, unfortunately, part of the US doctrine (which does not hide political objectives) pushes for demonizing big tech companies. One may have reason to fear that such a trend will grow in the US as it has in Europe, especially considering the upcoming presidential campaign in which far-right and far-left politicians seem to agree about the need to break up big tech.
And yet, let’s remember that no-one has any documented, tangible, and reproducible evidence that breaking up tech giants would be good for consumers, or societies at large, or, in fact, for anyone (even dolphins, okay). It might be a good idea; it might be a bad idea. Who knows? But the lack of evidence either way militates against taking such action. Meanwhile, there is strong evidence that these discussions are fueled by a handful of individuals wishing to benefit from such a call for action. They do so, first, by depicting tech giants as representing the new elite in opposition to the people and they then portray themselves as the only saviors capable of taking action.
Epilogue: who knows, life is not a Tarantino movie
For the last 30 years, antitrust law has been largely immune to strategic takeover by political interests. It may now be returning to a previous era in which it was the instrument of a few. This transformation is already happening in Europe (it is expected to hit case law there quite soon) and is getting real in the US, where groups display political goals and make antitrust law a Trojan horse for their personal interests. The only semblance of evidence they bring is a few allegedly harmful micro-practices (see Amazon’s Antitrust Paradox), which they use as a basis for defending the urgent need for macro, structural measures, such as breaking up tech companies. This is disproportionate but, most of all and in the absence of better knowledge, purely opportunistic and potentially foolish. Who knows at this point whether antitrust law will come out of this populist and moralist episode intact? And who knows what the next idea of those who want to use antitrust law for purely political purposes will be? Life is not a Tarantino movie; it may end up badly.
If a firm is too big, it will be because it is “a merger for monopoly”;
If the firms aren’t that big, it will be for “coordinated effects”;
If a firm is small, then it will be because it will “eliminate a maverick”.
It’s a version of Ronald Coase’s complaint about antitrust, as related by William Landes:
Ronald said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down, they said it was predatory pricing, and when they stayed the same, they said it was tacit collusion.
Of all the reasons to block a merger, the maverick notion is the weakest, and it’s well past time to ditch it.
The Horizontal Merger Guidelines define a “maverick” as “a firm that plays a disruptive role in the market to the benefit of customers.” According to the Guidelines, this includes firms:
With a new technology or business model that threatens to disrupt market conditions;
With an incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices;
That resist otherwise prevailing industry norms to cooperate on price setting or other terms of competition; and/or
With an ability and incentive to expand production rapidly using available capacity to “discipline prices.”
There appears to be no formal model of maverick behavior that does not rely on some a priori assumption that the firm is a maverick.
For example, John Kwoka’s 1989 model assumes the maverick firm has different beliefs about how competing firms would react if the maverick varies its output or price. Louis Kaplow and Carl Shapiro developed a simple model in which the firm with the smallest market share may play the role of a maverick. They note, however, that this raises the question—in a model in which every firm faces the same cost and demand conditions—why would there be any variation in market shares? The common solution, according to Kaplow and Shapiro, is cost asymmetries among firms. If that is the case, then “maverick” activity is merely a function of cost, rather than some uniquely maverick-like behavior.
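Kaplow and Shapiro’s point can be illustrated with a textbook asymmetric-cost Cournot model (a hedged sketch with hypothetical demand and cost numbers, not a model of any actual market): when every firm faces the same demand, equilibrium market shares vary only because costs do, so labeling the smallest firm the “maverick” is just relabeling the highest-cost firm.

```python
# Cournot oligopoly with linear demand P = a - b*Q and asymmetric marginal
# costs. Equilibrium quantity for firm i (standard textbook result):
#   q_i = (a - n*c_i + sum of rivals' costs) / (b * (n + 1))
# All parameter values below are hypothetical.

def cournot_shares(a, b, costs):
    n = len(costs)
    total_cost = sum(costs)
    q = [(a - n * c + (total_cost - c)) / (b * (n + 1)) for c in costs]
    total_q = sum(q)
    return [qi / total_q for qi in q]

shares = cournot_shares(a=100, b=1, costs=[10, 10, 20])
# The only asymmetry is cost, yet shares differ: the high-cost firm ends
# up smallest. "Smallest firm = maverick" here reduces to "highest cost".
print([round(s, 3) for s in shares])  # prints [0.385, 0.385, 0.231]
```

Nothing in this equilibrium distinguishes the small firm except its cost, which is exactly the point: the model generates no independent “maverick-like” behavior.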
The idea of the maverick firm requires that the firm play a critical role in the market. The maverick must be the firm that outflanks coordinated action or acts as a bulwark against unilateral action. By this loosey-goosey definition of a maverick, a single firm can make the difference between the success and failure of anticompetitive behavior by its competitors. Thus, the ability and incentive to expand production rapidly is a necessary condition for a firm to be considered a maverick. For example, Kaplow and Shapiro explain:
Of particular note is the temptation of one relatively small firm to decline to participate in the collusive arrangement or secretly to cut prices to serve, say, 4% rather than 2% of the market. As long as price cuts by a small firm are less likely to be accurately observed or inferred by the other firms than are price cuts by larger firms, the presence of small firms that are capable of expanding significantly is especially disruptive to effective collusion.
A “maverick” firm’s ability to “discipline prices” depends crucially on its ability to expand output in the face of increased demand for its products. Similarly, the other non-maverick firms can be “disciplined” by the maverick only in the face of a credible threat of (1) a noticeable drop in market share that (2) leads to lower profits.
Relying on its disruptive pricing plans, its improved high-speed HSPA+ network, and a variety of other initiatives, T-Mobile aimed to grow its nationwide share to 17 percent within the next several years, and to substantially increase its presence in the enterprise and government market. AT&T’s acquisition of T-Mobile would eliminate the important price, quality, product variety, and innovation competition that an independent T-Mobile brings to the marketplace.
At the time of the proposed merger, T-Mobile accounted for 11% of U.S. wireless subscribers. At the end of 2016, its market share had hit 17%. About half of the increase can be attributed to its 2012 merger with MetroPCS. Over the same period, Verizon’s market share increased from 33% to 35% and AT&T’s remained stable at 32%. It appears that T-Mobile’s so-called maverick behavior did more to disrupt the market shares of smaller competitors Sprint and Leap (which was acquired by AT&T). Thus, it is not clear, ex post, that T-Mobile posed any threat to AT&T’s or Verizon’s market shares.
Geoffrey Manne raised some questions about the government’s maverick theory, which also highlight a fundamental problem with the willy-nilly way in which firms are given the maverick label:
. . . it’s just not enough that a firm may be offering products at a lower price—there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.
While T-Mobile had a reputation for lower mobile prices, in 2011, the firm was lagging behind Verizon, Sprint, and AT&T in the rollout of 4G technology. In other words, T-Mobile was offering an inferior product at a lower price. That’s not a maverick, that’s product differentiation with hedonic pricing.
More recently, in his opposition to the proposed T-Mobile/Sprint merger, Gene Kimmelman from Public Knowledge asserts that both firms are mavericks and their combination would cause their maverick magic to disappear:
Sprint, also, can be seen as a maverick. It has offered “unlimited” plans and simplified its rate plans, for instance, driving the rest of the industry forward to more consumer-friendly options. As Sprint CEO Marcelo Claure stated, “Sprint and T-Mobile have similar DNA and have eliminated confusing rate plans, converging into one rate plan: Unlimited.” Whether both or just one of the companies can be seen as a “maverick” today, in either case the newly combined company would simply have the same structural incentives as the larger carriers both Sprint and T-Mobile today work so hard to differentiate themselves from.
Kimmelman provides no mechanism by which the magic would go missing, but instead offers a version of an adversity-builds-character argument:
Allowing T-Mobile to grow to approximately the same size as AT&T, rather than forcing it to fight for customers, will eliminate the combined company’s need to disrupt the market and create an incentive to maintain the existing market structure.
For 30 years, the notion of the maverick firm has been a concept in search of a model. If the concept cannot be modeled decades after being introduced, maybe the maverick can’t be modeled.
What’s left are ad hoc assertions mixed with speculative projections in hopes that some sympathetic judge can be swayed. However, some judges seem to be more skeptical than sympathetic, as in H&R Block/TaxACT:
The parties have spilled substantial ink debating TaxACT’s maverick status. The arguments over whether TaxACT is or is not a “maverick” — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis. The government even put forward as supposed evidence a TaxACT promotional press release in which the company described itself as a “maverick.” This type of evidence amounts to little more than a game of semantic gotcha. Here, the record is clear that while TaxACT has been an aggressive and innovative competitor in the market, as defendants admit, TaxACT is not unique in this role. Other competitors, including HRB and Intuit, have also been aggressive and innovative in forcing companies in the DDIY market to respond to new product offerings to the benefit of consumers.
It’s time to send the maverick out of town and into the sunset.
[This post is the third in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]
[This post is authored by John E. Lopatka, Robert Noll Distinguished Professor of Law, School of Law, The Pennsylvania State University]
Big Tech firms stand accused of many evils, and the clamor to break them up is loud. Should we fetch our pitchforks? The antitrust laws are designed to address a range of wrongs and authorize a set of remedies, which include but do not emphasize divestiture. When the harm caused by a Big Tech company is of a kind the antitrust laws are intended to prevent, an appropriate antitrust remedy can be devised. In such a case, it makes sense to use antitrust: If antitrust and its remedies are adequate to do the job fully, no legislative changes are required. When the harm falls outside the ambit of antitrust and any other pertinent statute, a choice must be made. Antitrust can be expanded; other statutes can be amended or enacted; or any harms that are not perfectly addressed by existing statutory and common law can be left alone, for legal institutions are never perfect, and a disease can be less harmful than a cure.
A comprehensive list of the myriad and changing attacks on Big Tech firms would be difficult to compile. Indeed, the identity of the offenders is not self-evident, though Google (Alphabet), Facebook, Amazon, and Apple have lately attracted the most attention. The principal charges against Big Tech firms seem to be these: 1) compromising consumer privacy; 2) manipulating the news; 3) accumulating undue social and political influence; 4) stifling innovation by acquiring creative upstarts; 5) using market power in one market to injure competitors in adjacent markets; 6) exploiting input suppliers; 7) exploiting their own employees; and 8) damaging communities by location choices.
These charges are not uniform across the Big Tech targets. Some charges have been directed more forcefully against some firms than others. For instance, infringement of consumer privacy has been a focus of attacks on Facebook. Both Facebook and Google have been accused of manipulating the news. And claims about the exploitation of input suppliers and employees and the destruction of communities have largely been directed at Amazon.
What is “Big Tech”?
Despite the variance among firms, the attacks against all of them proceed from the same syllogism: Some tech firms are big; big tech firms do social harm; therefore, big tech firms should be broken up. From an antitrust perspective, something is missing. Start with the definition of a “tech” firm. In the modern economy, every firm relies on sophisticated technology – from an auto repair shop to an airplane manufacturer to a social media website operator. Every firm is a tech firm. But critics have a more limited concept in mind. They are concerned about platforms, or intermediaries, in multi-sided markets. These markets exhibit indirect network effects. In a two-sided market, for instance, each side of the market benefits as the size of the other side grows. Platforms provide value by coordinating the demand and supply of different groups of economic actors where the actors could not efficiently interact by themselves. In short, platforms reduce transaction costs. They have been around for centuries, but their importance has been magnified in recent years by rapid advances in technology. Rational observers can sensibly ask whether platforms are peculiarly capable of causing harm. But critics tend to ignore or at least to discount the value that platforms provide, and doing so presents a distorted image that breeds bad policy.
Assuming we know what a tech firm is, what is “big”? One could measure size by many standards. Most critics do not bother to define “big,” though at least Senator Elizabeth Warren has proposed defining one category of bigness as firms with annual global revenue of $25 billion or more and a second category as those with annual global revenue of between $90 million and $25 billion. The proper standard for determining whether tech firms are objectionably large is not self-evident. Indeed, a size threshold embodied in any legal policy will almost always be somewhat arbitrary. That by itself is not a failing of a policy prescription. But why use a size screen at all? A few answers are possible. Large firms may do more harm than small firms when harm is proportionate to size. Size may matter because government intervention is costly and less sensitive to firm size than is harm, implying that only harm caused by large firms is large enough to outweigh the costs of enforcement. And most important, the size of a firm may be related to the kind of harm the firm is accused of doing. Perhaps only a firm of a certain size can inflict a particular kind of injury. A clear standard of size and its justification ought to precede any policy prescription.
What’s the (antitrust) beef?
The social harms that Big Tech firms are accused of doing are a hodgepodge. Some are familiar to antitrust scholars as either current or past objects of antitrust concern; others are not. Antitrust protects against a certain kind of economic harm: the loss of economic welfare caused by a restriction on competition. Though the terms are sometimes used in different ways, the core concept is reasonably clear and well accepted. In most cases, economic welfare is synonymous with consumer welfare. Economic welfare, though, is a broader concept. For example, economic welfare is reduced when buyers exercise market power to the detriment of sellers and by productive inefficiencies. But despite the claims of some Big Tech critics, when consumer welfare is at stake, it is not measured exclusively by the price consumers pay. Economists often explicitly refer to quality-adjusted prices and implicitly have the qualification in mind in any analysis of price. Holding quality constant makes quantitative models easier to construct, but a loss of quality is a matter of conventional antitrust concern. The federal antitrust agencies’ horizontal merger guidelines recognize that “reduced product quality, reduced product variety, reduced service, [and] diminished innovation” are all cognizable adverse effects. The scope of antitrust is not as constricted as some critics assert. Still, it has limits.
Leveraging market power is standard antitrust fare, though it is not nearly as prevalent as once thought. Horizontal mergers that reduce economic welfare are an antitrust staple. The acquisition and use of monopsony power to the detriment of input suppliers is familiar antitrust ground. If Big Tech firms have committed antitrust violations of this ilk, the offenses can be remedied under the antitrust laws.
Other complaints against the Big Tech firms do not fit comfortably or at all within the ambit of antitrust. Antitrust does not concern itself with political or social influence. Influence is a function of absolute size, not of size relative to any antitrust market. Firms that have more resources than other firms may have more influence, but the deployment of those resources across the economy is irrelevant. The use of antitrust to attack conglomerate mergers was an inglorious period in antitrust history. Injuries to communities or to employees are not a proper antitrust concern when they result from increased efficiency. Acquisitions might stifle innovation, which is a proper antitrust concern, but they might spur innovation by inducing firms to create value and thereby become attractive acquisition targets or by facilitating integration. Whether the consumer interest in informational privacy has much to do with competition is difficult to say. Privacy in this context means the collection and use of data. In a multi-sided market, one group of participants may value not only the size but also the composition and information about another group. Competition among platforms might or might not occur on the dimension of privacy. For any platform, however, a reduction in the amount of valuable data it can collect from one side and provide to another side will reduce the price it can charge the second side, which can flow back and injure the first side. In all, antitrust falters when it is asked to do what it cannot do well, and whether other laws should be brought to bear depends on a cost/benefit calculus.
Does Big Tech’s conduct merit antitrust action?
When antitrust is used, it unquestionably requires a causal connection between conduct and harm. Conduct must restrain competition, and the restraint must cause cognizable harm. Most of the attacks against Big Tech firms if pursued under the antitrust laws would proceed as monopolization claims. A firm must have monopoly power in a relevant market; the firm must engage in anticompetitive conduct, typically conduct that excludes rivals without increasing efficiency; and the firm must have or retain its monopoly power because of the anticompetitive conduct.
Put aside the flaccid assumption that all the targeted Big Tech platforms have monopoly power in relevant markets. Maybe they do, maybe they don’t, but an assumption is unwarranted. Focus instead on the conduct element of monopolization. Most of the complaints about Big Tech firms concern their use of whatever power they have. Use isn’t enough. Each of the firms named above has achieved its prominence by extraordinary innovation, shrewd planning, and effective execution in an unforgiving business climate, one in which more platforms have failed than have succeeded. This does not look like promising ground for antitrust.
Of course, even firms that generally compete lawfully can stray. But to repeat, monopolists do not monopolize unless their unlawful conduct is causally connected to their market power. The complaints against the Big Tech firms are notably weak on allegations of anticompetitive conduct that resulted in the acquisition or maintenance of their market positions. Some critics have assailed Facebook’s acquisitions of WhatsApp and Instagram. Even assuming these firms competed with Facebook in well-defined antitrust markets, the claim that Facebook’s dominance in its core business was created or maintained by these acquisitions is a stretch.
The difficulty fashioning remedies
The causal connection between conduct and monopoly power becomes particularly important when remedies are fashioned for monopolization. Microsoft, the first major monopolization case against a high tech platform, is instructive. DOJ in its complaint sought only conduct remedies for Microsoft’s alleged unlawful maintenance of a monopoly in personal computer operating systems. The trial court found that Microsoft had illegally maintained its monopoly by squelching Netscape’s Navigator and Sun’s Java technologies, and by the end of trial DOJ sought and the court ordered structural relief in the form of “vertical” divestiture, separating Microsoft’s operating system business from its applications business. Some commentators at the time argued for various kinds of “horizontal” divestiture, which would have created competing operating system platforms. The appellate court set aside the order, emphasizing that an antitrust remedy must bear a close causal connection to proven anticompetitive conduct. Structural remedies are drastic, and a plaintiff must meet a heightened standard of proof of causation to justify any kind of divestiture in a monopolization case. On remand, DOJ abandoned its request for divestiture. The evidence that Microsoft maintained its market position by inhibiting the growth of middleware was sufficient to support liability, but not structural relief.
The court’s trepidation was well-founded. Divestiture makes sense when monopoly power results from acquisitions, because the mergers expose joints at which the firm might be separated without rending fully integrated operations. But imposing divestiture on a monopolist for engaging in single-firm exclusionary conduct threatens to destroy the integration that is the essence of any firm and is almost always disproportional to the offense. Even if conduct remedies can be more costly to enforce than structural relief, the additional cost is usually less than the cost to the economy of forgone efficiency.
The proposals to break up the Big Tech firms are ill-defined. Based on what has been reported, no structural relief could be justified as antitrust relief. Whatever conduct might have been unlawful was overwhelmingly unilateral. The few acquisitions that have occurred didn’t appreciably create or preserve monopoly power, and divestiture wouldn’t do much to correct the misbehavior critics see anyway. Big Tech firms could be restructured through new legislation, but that would be a mistake. High tech platform markets typically yield dominant firms, though heterogeneous demand often creates space for competitors. Markets are better at achieving efficient structures than are government planners. Legislative efforts at restructuring are likely to invite circumvention or lock in inefficiency.
Regulate “Big Tech” instead?
In truth, many critics are willing to put up with dominant tech platforms but want them regulated. If we learned any lesson from the era of pervasive economic regulation of public utilities, it is that regulation is costly and often yields minimal benefits. George Stigler and Claire Friedland demonstrated 57 years ago that electric utility regulation had little impact. The era of regulation was followed by an era of deregulation. Yet the desire to regulate remains strong, and as Stigler and Friedland observed, “if wishes were horses, one would buy stock in a harness factory.” And just how would Big Tech platform regulators regulate? Senator Warren offers a glimpse of the kind of regulation that critics might impose: “Platform utilities would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users.” This kind of standard has some meaning in the context of a standard-setting organization dealing with patent holders. What it would mean in the context of a social media platform, for example, is anyone’s guess. Would it prevent biasing of information for political purposes, and what government official should be entrusted with that determination? What is certain is that it would invite government intervention into markets that are working well, if not perfectly. It would invite public officials to trade off economic welfare for a host of values embedded in the concept of fairness. Federal agencies charged with promoting the “public interest” have a difficult enough time reaching conclusions where competition is one of several specific values to be considered. Regulation designed to address all the evils high tech platforms are thought to perpetrate would make traditional economic or public-interest regulation look like child’s play.
Big Tech firms have generated immense value. They may do real harm. From all that can now be gleaned, any harm has had little to do with antitrust, and it certainly doesn’t justify breaking them up. Nor should they be broken up as an exercise in central economic planning. If abuses can be identified, such as undesirable invasions of privacy, focused legislation may be in order, but even then only if the government action is predictably less costly than the abuses.
Thomas Wollmann has a new paper — “Stealth Consolidation: Evidence from an Amendment to the Hart-Scott-Rodino Act” — in American Economic Review: Insights this month. Greg Ip included this research in an article for the WSJ in which he claims that “competition has declined and corporate concentration risen through acquisitions often too small to draw the scrutiny of antitrust watchdogs.” In other words, “stealth consolidation”.
Wollmann’s study uses a difference-in-differences approach to examine the effect on merger activity of the 2001 amendment to the Hart-Scott-Rodino (HSR) Antitrust Improvements Act of 1976 (15 U.S.C. 18a). The amendment abruptly increased the pre-merger notification threshold from $15 million to $50 million in deal size. Strictly on those terms, the paper shows that raising the pre-merger notification threshold increased merger activity.
However, claims about “stealth consolidation” are controversial because they connote nefarious intentions and anticompetitive effects. As Wollmann admits in the paper, due to data limitations, he is unable to show that the new mergers are in fact anticompetitive or that the social costs of these mergers exceed the social benefits. Therefore, more research is needed to determine the optimal threshold for pre-merger notification rules, and claiming that harmful “stealth consolidation” is occurring is currently unwarranted.
Background: The “Unscrambling the Egg” Problem
In general, it is more difficult to unwind a consummated anticompetitive merger than it is to block a prospective anticompetitive merger. As Wollmann notes, for example, “El Paso Natural Gas Co. acquired its only potential rival in a market” and “the government’s challenge lasted 17 years and involved seven trips to the Supreme Court.”
Rolling back an anticompetitive merger is so difficult that it came to be known as “unscrambling the egg.” As William J. Baer, a former director of the Bureau of Competition at the FTC, described it, “there were strong incentives for speedily and surreptitiously consummating suspect mergers and then protracting the ensuing litigation” prior to the implementation of a pre-merger notification rule. These so-called “midnight mergers” were intended to avoid drawing antitrust scrutiny.
In 2001, Congress amended the HSR Act and effectively raised the threshold for premerger notification from $15 million in acquired firm assets to $50 million. This sudden and dramatic change created an opportunity to use a difference-in-differences technique to study the relationship between filing an HSR notification and merger activity.
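The difference-in-differences logic here is simple enough to sketch in a few lines. The counts below are hypothetical (Wollmann’s actual data are not reproduced here); the point is only how the estimator nets out a common trend:

```python
# Difference-in-differences on annual merger counts.
# "Treated" = newly-exempt deals ($15M-$50M); "control" = never-exempt deals (>$50M).
# All counts are hypothetical, chosen only to illustrate the estimator.

treated_pre, treated_post = 800, 950     # avg. mergers/yr before and after 2001
control_pre, control_post = 1200, 1250

# The control group's change proxies for the common trend; subtracting it
# isolates the change attributed to the higher notification threshold.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # 100 extra mergers/yr in the newly-exempt band
```

The identifying assumption, as in Wollmann’s design, is that absent the amendment the two groups would have trended in parallel.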
According to Wollmann, here’s what notifications look like for never-exempt mergers (>$50M):
And here’s what notifications for newly-exempt ($15M < X < $50M) mergers look like:
So what does that mean for merger investigations? Here is the number of investigations into never-exempt mergers:
We see a pretty consistent relationship between the number of mergers and the number of investigations. More mergers means more investigations.
How about for newly-exempt mergers?
Here, investigations go to zero while merger activity remains relatively stable. In other words, it appears that some mergers that would have been investigated had they required an HSR notification were not investigated.
Wollmann then uses four-digit SIC code industries to sort mergers into horizontal and non-horizontal categories. Here are never-exempt mergers:
He finds that almost all of the increase in merger activity (relative to the counterfactual in which the notification threshold were unchanged) is driven by horizontal mergers. And here are newly-exempt mergers:
Policy Implications & Limitations
The charts show a stark change in investigations and merger activity. The difference-in-differences methodology is solid and the author addresses some potential confounding variables (such as presidential elections). However, the paper leaves the broader implications for public policy unanswered.
Furthermore, given the limits of the data in this analysis, it’s not possible for this approach to explain competitive effects in the relevant antitrust markets, for three reasons:
Four-digit SIC code industries are not antitrust markets
Wollmann chose to classify mergers “as horizontal or non-horizontal based on whether or not the target and acquirer operate in the same four-digit SIC code industry, which is common convention.” But as Werden & Froeb (2018) note, four-digit SIC code industries are orders of magnitude too large in most cases to be useful for antitrust analysis:
The evidence from cartel cases focused on indictments from 1970–80. Because the Justice Department prosecuted many local cartels, for 52 of the 80 indictments examined, the Commerce Quotient was less than 0.01, i.e., the SIC 4-digit industry was at least 100 times the apparent scope of the affected market. Of the 80 indictments, 19 involved SIC 4-digit industries that had been thought to comport well with markets, so these were the most instructive. For 16 of the 19, the SIC 4-digit industry was at least 10 times the apparent scope of the affected market (i.e., the Commerce Quotient was less than 0.1).
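The Commerce Quotient used in the passage above is just the ratio of commerce in the apparent antitrust market to commerce in the containing SIC 4-digit industry. A quick sketch with invented dollar figures shows how the thresholds work:

```python
# Commerce Quotient (CQ) = market commerce / SIC 4-digit industry commerce.
# A CQ below 0.01 means the industry is at least 100x the market's scope.
# Both figures below are invented, purely to illustrate the thresholds.

market_commerce = 8_000_000          # e.g., a local cartel's affected market
industry_commerce = 1_200_000_000    # the whole SIC 4-digit industry

cq = market_commerce / industry_commerce
print(round(cq, 4))                          # 0.0067
print(industry_commerce // market_commerce)  # industry is 150x the market
```

A quotient this small is exactly the situation Werden & Froeb found in 52 of the 80 indictments: the SIC industry dwarfs the market that actually mattered.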
Antitrust authorities do not rely on SIC 4-digit industry codes and instead establish a market definition based on the facts of each case. It is not possible to infer competitive effects from census data as Wollmann attempts to do.
The data cannot distinguish between anticompetitive mergers and procompetitive mergers
As Wollmann himself notes, the results tell us nothing about the relative costs and benefits of the new HSR policy:
Even so, these findings do not on their own advocate for one policy over another. To do so requires equating industry consolidation to a specific amount of economic harm and then comparing the resulting figure to the benefits derived from raising thresholds, which could be large. Even if the agencies ignore the reduced regulatory burden on firms, introducing exemptions can free up agency resources to pursue other cases (or reduce public spending). These and related issues require careful consideration but simply fall outside the scope of the present work.
For instance, firms could be reallocating merger activity to targets below the new threshold to avoid erroneous enforcement or they could be increasing merger activity for small targets due to reduced regulatory costs and uncertainty.
The study is likely underpowered for effects on blocked mergers
While the paper provides convincing evidence that investigations of newly-exempt mergers decreased dramatically following the change in the notification threshold, there is no equally convincing evidence of an effect on blocked mergers. As Wollmann points out, blocked mergers were exceedingly rare both before and after the Amendment (emphasis added):
Over 57,000 mergers comprise the sample, which spans eighteen years. The mean number of mergers each year is 3,180. The DOJ and FTC receive 31,464 notifications over this period, or 1,748 per year. Also, as stated above, blocked mergers are very infrequent: there are on average 13 per year pre-Amendment and 9 per-year post-Amendment.
Since blocked mergers are such a small percentage of total mergers both before and after the Amendment, we likely cannot tell from the data whether actual enforcement action changed significantly due to the change in notification threshold.
Greg Ip’s write-up for the WSJ includes some relevant charts for this issue. Ironically for a piece about the problems of lax merger review, the accompanying graphs show merger enforcement actions slightly increasing at both the FTC and the DOJ since 2001:
Overall, Wollmann’s paper does an effective job showing how changes in premerger notification rules can affect merger activity. However, due to data limitations, we cannot conclude anything about competitive effects or enforcement intensity from this study.
This guest post is by Corbin K. Barthold, Litigation Counsel at Washington Legal Foundation.
Complexity need not follow size. A star is huge but mostly homogeneous. “Its core is so hot,” explains Martin Rees, “that no chemicals can exist (complex molecules get torn apart); it is basically an amorphous gas of atomic nuclei and electrons.”
Nor does complexity always arise from remoteness of space or time. Celestial gyrations can be readily grasped. Thales of Miletus probably predicted a solar eclipse. Newton certainly could have done so. And we’re confident that in 4.5 billion years the Andromeda galaxy will collide with our own.
If the simple can be seen in the large and the distant, equally can the complex be found in the small and the immediate. A double pendulum is chaotic. Likewise the local weather, the fluctuations of a wildlife population, or the dispersion of the milk you pour into your coffee.
Our economy is not like a planetary orbit. It’s more like the weather or the milk. No one knows which companies will become dominant, which products will become popular, or which industries will become defunct. No one can see far ahead. Investing is inherently risky because the future of the economy, or even a single segment of it, is intractably uncertain. Do not hand your savings to any expert who says otherwise. Experts, in fact, often see the least of all.
But if a broker with a “sure thing” stock is a mountebank, what does that make an antitrust scholar with an “optimum structure” for a market?
Not a prophet.
There is so much that we don’t know. Consider, for example, the notion that market concentration is a good measure of market competitiveness. The idea seems intuitive enough, and in many corners it remains an article of faith.
But the markets where this assumption is most plausible—hospital care and air travel come to mind—are heavily shaped by that grand monopolist we call government. Only a large institution can cope with the regulatory burden placed on the healthcare industry. As Tyler Cowen writes, “We get the level of hospital concentration that we have in essence chosen through politics and the law.”
As for air travel: the government promotes concentration by barring foreign airlines from the domestic market. In any case, the state of air travel does not support a straightforward conclusion that concentration equals power. The price of flying has fallen almost continuously since passage of the Airline Deregulation Act in 1978. The major airlines are disciplined by fringe carriers such as JetBlue and Southwest.
It is by no means clear that, aside from cases of government-imposed concentration, a consolidated market is something to fear. Technology lowers costs, lower costs enable scale, and scale tends to promote efficiency. Scale can arise naturally, therefore, from the process of creating better and cheaper products.
Say you’re a nineteenth-century cow farmer, and the railroad reaches you. Your shipping costs go down, and you start to sell to a wider market. As your farm grows, you start to spread your capital expenses over more sales. Your prices drop. Then refrigerated rail cars come along, you start slaughtering your cows on site, and your shipping costs go down again. Your prices drop further. Farms that fail to keep pace with your cost-cutting go bust. The cycle continues until beef is cheap and yours is one of the few cow farms in the area. The market improves as it consolidates.
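The economics of the fable reduce to spreading fixed capital costs over more units: average cost is AC(q) = F/q + c. A toy calculation (all numbers invented) shows why prices fall as the farm scales:

```python
# Average cost = fixed (capital) cost spread over output, plus marginal cost.
# The numbers are invented for illustration only.

def average_cost(q, fixed_cost=100_000, marginal_cost=2.0):
    return fixed_cost / q + marginal_cost

print(average_cost(10_000))    # 12.0 per unit at small-farm scale
print(average_cost(100_000))   # 3.0 per unit once the railroad widens the market
```

The same mechanism drives the consolidation in the story: whichever farm reaches the larger q first can undercut the rest.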
As the decades pass, this story repeats itself on successively larger stages. The relentless march of technology has enabled the best companies to compete for regional, then national, and now global market share. We should not be surprised to see ever fewer firms offering ever better products and services.
Bear in mind, moreover, that it’s rarely the same company driving each leap forward. As Geoffrey Manne and Alec Stapp recently noted in this space, markets are not linear. Just after you adopt the next big advance in the logistics of beef production, drone delivery will disrupt your delivery network, cultured meat will displace your product, or virtual-reality flavoring will destroy your industry. Or—most likely of all—you’ll be ambushed by something you can’t imagine.
Does market concentration inhibit innovation? It’s possible. “To this day,” write Joshua Wright and Judge Douglas Ginsburg, “the complex relationship between static product market competition and the incentive to innovate is not well understood.”
There’s that word again: complex. When will thumping company A in an antitrust lawsuit increase the net amount of innovation coming from companies A, B, C, and D? Antitrust officials have no clue. They’re as benighted as anyone. These are the people who will squash Blockbuster’s bid to purchase a rival video-rental shop less than two years before Netflix launches a streaming service.
And it’s not as if our most innovative companies are using market concentration as an excuse to relax. If its only concern were maintaining Google’s grip on the market for internet-search advertising, Alphabet would not have spent $16 billion on research and development last year. It spent that much because its long-term survival depends on building the next big market—the one that does not exist yet.
No expert can reliably make the predictions necessary to say when or how a market should look different. And if we empowered some experts to make such predictions anyway, no other experts would be any good at predicting what the empowered experts would predict. Experts trying to give us “well structured” markets will instead give us a costly, politicized, and stochastic antitrust enforcement process.
Here’s a modest proposal. Instead of using the antitrust laws to address the curse of bigness, let’s create the Office of the Double Pendulum. We can place the whole section in a single room at the Justice Department.
All we’ll need is some ping-pong balls, a double pendulum, and a monkey. On each ball will be the name of a major corporation. Once a quarter—or a month; reasonable minds can differ—a ball will be drawn, and the monkey prodded into throwing the pendulum. An even number of twirls saves the company on the ball. An odd number dooms it to being broken up.
This system will punish success just as haphazardly as anything our brightest neo-Brandeisian scholars can devise, while avoiding the ruinously expensive lobbying, rent-seeking, and litigation that arise when scholars succeed in replacing the rule of law with the rule of experts.
All hail the chaos monkey. Unutterably complex. Ineffably simple.
[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]
The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.
Apple v. Qualcomm settles — and the DOJ takes notice
The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.
That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting Judge Koh to use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).
But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).
Footnote 6 of the DOJ’s Statement reads:
Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).
Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.
The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple
Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:
Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.
Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).
The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s:
The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:
Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).
That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.
Even more troubling is what it means for the strength of the FTC’s case
But the evidence offered in Qualcomm’s opening argument point to another, more troubling implication, as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.
But what is most troubling is the extent to which it appears to have worked.
Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.
* * *
Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).
The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?
Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either Interdigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft vs. Motorola in 2013.
A curiously small number of agreements
Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licensing by only two companies: Apple and Samsung.
Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are not the only companies that make CDMA and LTE devices; there are, quite literally, hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?
At the same time, while Interdigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?
All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.
Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
A curiously crabbed selection of licensors
And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.
One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. About Nokia’s patents, Apple said:
And about InterDigital’s:
Meanwhile, Apple’s view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’:
The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.
And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).
A curiously circumscribed timeframe
That the FTC’s expert should use the 2013 cut-off date is also questionable. According to Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.
According to the FTC’s expert,
prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….
Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.
The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. Of course, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.
But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013.
At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated
Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, looked at whether the FTC’s theory of anticompetitive harm was borne out by the data by examining Qualcomm’s royalty rates across time periods and standards, using a much larger set of agreements. Although his remit was different from Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:
[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….
[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.
So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.
Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
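The logic of Dr. Nevo’s across-period comparison can be sketched as a simple difference-in-means exercise. The figures below are entirely hypothetical (the actual license data were discussed under seal); the sketch only illustrates the structure of the test:

```python
# Illustrative sketch of an across-period royalty-rate comparison.
# All figures are hypothetical; the actual license data are under seal.

# (year, standard, royalty_rate_pct, alleged_market_power)
agreements = [
    (1995, "CDMA",  4.9, False),
    (1999, "CDMA",  5.0, False),
    (2004, "WCDMA", 5.1, True),
    (2008, "WCDMA", 5.0, True),
    (2012, "LTE",   4.9, True),
    (2016, "LTE",   5.0, False),
]

def mean_rate(rows):
    rates = [r for (_, _, r, _) in rows]
    return sum(rates) / len(rates)

power = [a for a in agreements if a[3]]       # alleged market power period
no_power = [a for a in agreements if not a[3]]

diff = mean_rate(power) - mean_rate(no_power)
print(f"Mean rate with alleged market power:    {mean_rate(power):.2f}%")
print(f"Mean rate without alleged market power: {mean_rate(no_power):.2f}%")
print(f"Difference: {diff:+.2f} percentage points")
```

On this toy data the difference is essentially zero, which is the pattern Dr. Nevo reported; the FTC’s theory of anticompetitive elevation instead predicts a clearly positive difference during the alleged market-power periods.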
The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Prof. Nevo’s analysis offers some reason to think that it is not.
All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically under-valued Qualcomm’s patents.
Connecting the dots and calling into question the strength of the FTC’s case
In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:
Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.
* * *
Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.
Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.
* * *
The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.
It is possible, of course, that Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data were flawed.
It is impossible from the publicly available evidence to definitively draw this conclusion, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases its plausibility: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC’s expert also happens to be tailor-made to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.
What is most concerning is that the FTC may have built up its case on such questionable evidence, either by intentionally cherry picking the evidence upon which it relied, or inadvertently because it rested on such a needlessly limited range of data, some of which may have been tainted.
Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.
In 2014, Benedict Evans, a venture capitalist at Andreessen Horowitz, wrote “Why Amazon Has No Profits (And Why It Works),” a blog post in which he tried to explain Amazon’s business model. He began with a chart of Amazon’s revenue and net income that has now become (in)famous:
A question inevitably followed in antitrust circles: How can a company that makes so little profit on so much revenue be worth so much money? It must be predatory pricing!
Predatory pricing is a rather rare anticompetitive practice because the “predator” runs the risk of bankrupting itself in the process of trying to drive rivals out of business with below-cost pricing. Furthermore, even if a predator successfully clears the field of competition, keeping out new entrants is extremely difficult in developed economies with deep capital markets.
Nonetheless, in those rare cases where plaintiffs can demonstrate that a firm actually has a viable scheme to drive competitors from the market with prices that are “too low” and has the ability to recoup its losses once it has cleared the market of those competitors, plaintiffs (including the DOJ) can prevail in court.
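The recoupment requirement has a simple arithmetic core: the discounted monopoly profits earned after the predation period must exceed the losses incurred during it. A toy calculation (all numbers invented for illustration) shows why entry during the recoupment phase is fatal to the strategy:

```python
# Toy recoupment arithmetic: predation losses vs. discounted monopoly profits.
# All numbers are invented for illustration.

def npv(cashflows, rate):
    """Net present value of a list of per-period cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

r = 0.10                      # discount rate
predation = [-100, -100]      # two years of below-cost losses
recoupment = [120, 120, 120]  # monopoly profits if entry is blocked

full_plan = npv(predation + recoupment, r)
print(f"NPV with successful recoupment: {full_plan:.1f}")

# If new entrants appear after one year of monopoly prices, the remaining
# profits are bid away and the scheme never pays for itself:
entry_cut = npv(predation + recoupment[:1], r)
print(f"NPV if entry occurs after one year: {entry_cut:.1f}")
```

The scheme is only rational in the first scenario; with deep capital markets funding quick re-entry, the predator eats the losses and never collects.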
In other words, whoa if true.
Khan’s Predatory Pricing Accusation
In 2017, Lina Khan, then a law student at Yale, published “Amazon’s Antitrust Paradox” in a note for the Yale Law Journal and used Evans’ chart as supporting evidence that Amazon was guilty of predatory pricing. In the abstract she says, “Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead.”
But if Amazon is selling below-cost, where does the money come from to finance those losses?
In her article, Khan hinted at two potential explanations: (1) Amazon is using profits from the cloud computing division (AWS) to cross-subsidize losses in the retail division or (2) Amazon is using money from investors to subsidize short-term losses:
Recently, Amazon has started reporting consistent profits, largely due to the success of Amazon Web Services, its cloud computing business. Its North America retail business runs on much thinner margins, and its international retail business still runs at a loss. But for the vast majority of its twenty years in business, losses—not profits—were the norm. Through 2013, Amazon had generated a positive net income in just over half of its financial reporting quarters. Even in quarters in which it did enter the black, its margins were razor-thin, despite astounding growth.
Just as striking as Amazon’s lack of interest in generating profit has been investors’ willingness to back the company. With the exception of a few quarters in 2014, Amazon’s shareholders have poured money in despite the company’s penchant for losses.
Revising predatory pricing doctrine to reflect the economics of platform markets, where firms can sink money for years given unlimited investor backing, would require abandoning the recoupment requirement in cases of below-cost pricing by dominant platforms.
Below-Cost Pricing Not Subsidized by Investors
But neither explanation withstands scrutiny. First, the money is not from investors. Amazon has not raised equity financing since 2003. Nor is it debt financing: The company’s net debt position has been near-zero or negative for its entire history (excluding the Whole Foods acquisition):
As Priya Anand observed in a recent piece for The Information, since Amazon started breaking out AWS in its financials, operating income for the North America retail business has been significantly positive:
But [Khan] underplays its retail profits in the U.S., where the antitrust debate is focused. As the above chart shows, its North America operation has been profitable for years, and its operating income has been on the rise in recent quarters. While its North America retail operation has thinner margins than AWS, it still generated $2.84 billion in operating income last year, which isn’t exactly a rounding error compared to its $4.33 billion in AWS operating income.
Below-Cost Pricing in Retail Also Known as “Loss Leader” Pricing
Okay, so maybe Amazon isn’t using below-cost pricing in aggregate in its retail division. But it still could be using profits from some retail products to cross-subsidize below-cost pricing for other retail products (e.g., diapers), with the intention of driving competitors out of business to capture monopoly profits. This is essentially what Khan claims happened in the Diapers.com (Quidsi) case. But in the retail industry, diapers are explicitly cited as a loss leader that helps retailers develop a customer relationship with mothers in the hopes of selling them a higher volume of products over time. This is exactly what the founders of Diapers.com told Inc Magazine in a 2012 interview (emphasis added):
We saw brick-and-mortar stores, the Wal-Marts and Targets of the world, using these products to build relationships with mom and the end consumer, bringing them into the store and selling them everything else. So we thought that was an interesting model and maybe we could replicate that online. And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.
An anticompetitive scheme could be built into such bundling, but in many if not the overwhelming majority of these cases, consumers are the beneficiaries of lower prices and expanded output produced by these arrangements. It’s hard to definitively say whether any given firm that discounts its products is actually pricing below average variable cost (“AVC”) without far more granular accounting ledgers than are typically maintained. This is part of the reason why these cases can be so hard to prove.
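The price-versus-AVC comparison itself is trivial arithmetic; the hard part is the cost accounting. A sketch with invented ledger figures shows how the classification of a single cost item can flip the answer:

```python
# Sketch of a price-vs-AVC comparison. In real cases the difficulty is not
# the arithmetic but deciding which ledger items count as "variable."
# All figures are invented for illustration.

units_sold = 10_000
price = 8.00

variable_costs = {          # costs that scale with output
    "wholesale_goods": 70_000,
    "shipping": 12_000,
}
fixed_costs = {             # costs incurred regardless of output
    "warehouse_lease": 40_000,
    "software": 15_000,
}

avc = sum(variable_costs.values()) / units_sold
print(f"AVC = {avc:.2f}; price = {price:.2f}")
print("Below AVC?", price < avc)

# Reclassify shipping as a fixed cost and the same price clears the test:
avc_alt = variable_costs["wholesale_goods"] / units_sold
print(f"AVC (shipping treated as fixed) = {avc_alt:.2f}")
print("Below AVC?", price < avc_alt)
```

With multi-product firms, shared fulfillment infrastructure, and bundled offerings, the allocation questions multiply, which is why plaintiffs so rarely carry this burden.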
A successful predatory pricing strategy also requires blocking market entry when the predator eventually raises prices. But the Diapers.com case is an explicit example of repeated entry that would defeat recoupment. In an article for the American Enterprise Institute, Jeffrey Eisenach shares the rest of the story following Amazon’s acquisition of Diapers.com:
Amazon’s conduct did not result in a diaper-retailing monopoly. Far from it. According to Khan, Amazon had about 43 percent of online sales in 2016 — compared with Walmart at 23 percent and Target with 18 percent — and since many people still buy diapers at the grocery store, real shares are far lower.
In the end, Quidsi proved to be a bad investment for Amazon: After spending $545 million to buy the firm and operating it as a stand-alone business for more than six years, it announced in April 2017 it was shutting down all of Quidsi’s operations, Diapers.com included. In the meantime, Quidsi’s founders poured the proceeds of the Amazon sale into a new online retailer — Jet.com — which was purchased by Walmart in 2016 for $3.3 billion. Jet.com cofounder Marc Lore now runs Walmart’s e-commerce operations and has said publicly that his goal is to surpass Amazon as the top online retailer.
Sussman argues that the company has been inflating its free cash flow numbers by excluding “capital leases.” According to Sussman, “If all of those expenses as detailed in its statements are accounted for, Amazon experienced a negative cash outflow of $1.461 billion in 2017.” Even though it’s not dispositive of predatory pricing on its own, Sussman believes that a negative free cash flow implies the company has been selling below-cost to gain market share.
2. Amazon Recoups Losses By Lowering AVC, Not By Raising Prices
Instead of raising prices to recoup losses from pricing below-cost, Sussman argues that Amazon flies under the antitrust radar by keeping consumer prices low and progressively decreasing AVC, ostensibly through using its monopsony power to offload costs on suppliers and partners (although this point is not fully explored in his piece).
But Sussman’s argument contains errors in both its legal reasoning and its underlying empirical assumptions.
While there are many different ways to calculate the “cost” of a product or service, generally speaking, “below-cost pricing” means the price is less than marginal cost or AVC. Typically, courts tend to rely on AVC when dealing with predatory pricing cases. And as Herbert Hovenkamp has noted, proving that a price falls below the AVC is exceedingly difficult, particularly when dealing with firms in dynamic markets that sell a number of differentiated but complementary goods or services. Amazon, the focus of Sussman’s article, is a useful example here.
When products are complements, or can otherwise be bundled, firms may also be able to offer discounts that are unprofitable when selling single items. In business this is known as the “razor and blades model” (i.e., sell the razor handle below-cost one time and recoup losses on future sales of blades — although it’s not clear if this ever actually happens). Printer manufacturers are also an oft-cited example here, where printers are often sold below AVC in the expectation that the profits will be realized on the ongoing sale of ink. Amazon’s Kindle functions similarly: Amazon sells the Kindle at around its AVC, ostensibly on the belief that it will realize a profit on selling e-books in the Kindle store.
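The razor-and-blades logic is a two-part profit calculation: a loss on the device can be rational if the expected margin on the consumable exceeds it. A toy version with invented, Kindle-style numbers (these are not Amazon’s actual economics):

```python
# Toy razor-and-blades arithmetic. All numbers are invented; they are not
# Amazon's actual Kindle or e-book figures.

device_price = 90.0
device_avc = 100.0          # device sold $10 below its average variable cost

ebook_margin = 3.0          # contribution per e-book sold
expected_ebooks = 25        # expected purchases over the device's life

device_loss = device_price - device_avc
lifetime_profit = device_loss + ebook_margin * expected_ebooks
print(f"Loss on device: {device_loss:.2f}")
print(f"Expected lifetime profit per customer: {lifetime_profit:.2f}")

# The same below-AVC device price is unprofitable if attach rates are low:
breakeven_ebooks = -device_loss / ebook_margin
print(f"Break-even e-book purchases: {breakeven_ebooks:.1f}")
```

Viewed this way, the below-AVC device price is an investment in future complementary sales, not evidence of predation against rival device makers.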
Yet, even ignoring this common and broadly inoffensive practice, Sussman’s argument is odd. In essence, he claims that Amazon is concealing some of its costs in the form of capital leases in an effort to conceal its below-AVC pricing while it works to simultaneously lower its real AVC below the prices it charges consumers. At the end of this process, once its real AVC is actually sufficiently below consumer prices, it will (so the argument goes) be in the position of a monopolist reaping monopoly profits.
The problem with this argument should be immediately apparent. For the moment, let’s ignore the classic recoupment problem where new entrants will be drawn into the market to win some of those monopoly prices based on the new AVC that is possible. The real problem with his logic is that Sussman basically suggests that if Amazon sharply lowers AVC — that is, it makes production massively more efficient — and then does not drop prices, it is a “predator.” But by pricing below its AVC in the first place, consumers in essence were given a loan by Amazon — they were able to enjoy what Sussman believes are radically low prices while Amazon works to actually make those prices possible through creating production efficiencies. It seems rather strange to punish a firm for loaning consumers a large measure of wealth. It’s doubly odd when you then re-factor the recoupment problem back in: as soon as other firms figure out that a lower AVC is possible, they will enter the market and bid away any monopoly profits from Amazon.
Sussman’s Technical Analysis Is Flawed
While there are issues with Sussman’s general theory of harm, there are also some specific problems with his technical analysis of Amazon’s financial statements.
Capital Leases Are a Fixed Cost
First, capital leases should not be included in cost calculations for a predatory pricing case because they are fixed — not variable — costs. Again, “below-cost” claims in predatory pricing cases generally use AVC (and sometimes marginal cost) as relevant cost measures.
Capital Leases Are Mostly for Server Farms
Second, the usual story is that Amazon uses its wildly profitable Amazon Web Services (AWS) division to subsidize predatory pricing in its retail division. But Amazon’s “capital leases” — Sussman’s hidden costs in the free cash flow calculations — are mostly for AWS capital expenditures (i.e., server farms).
According to the most recent annual report: “Property and equipment acquired under capital leases was $5.7 billion, $9.6 billion, and $10.6 billion in 2016, 2017, and 2018, with the increase reflecting investments in support of continued business growth primarily due to investments in technology infrastructure for AWS, which investments we expect to continue over time.”
In other words, any adjustments to the free cash flow numbers for capital leases would make Amazon Web Services appear less profitable, and would not have a large effect on the accounting for Amazon’s retail operation (the only division thus far accused of predatory pricing).
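The adjustment at issue is mechanical: subtract property acquired under capital leases from the conventional free-cash-flow figure. Using the capital-lease amounts quoted from the annual report above, with round hypothetical placeholders for operating cash flow and capex (the real figures are in Amazon’s filings), the sketch looks like:

```python
# Sketch of the capital-lease adjustment to free cash flow (figures in $B).
# Capital-lease amounts are those quoted from Amazon's annual report above;
# operating cash flow and capex are round hypothetical placeholders.

years = {
    # year: (operating_cash_flow, capex, property_under_capital_leases)
    2016: (16.0, 6.7, 5.7),
    2017: (18.0, 10.0, 9.6),
    2018: (30.0, 11.3, 10.6),
}

for year, (ocf, capex, leases) in sorted(years.items()):
    fcf = ocf - capex         # conventional free cash flow
    fcf_adj = fcf - leases    # Sussman-style lease-adjusted figure
    print(f"{year}: FCF = {fcf:5.1f}  lease-adjusted FCF = {fcf_adj:5.1f}")
```

But because most of the leased property supports AWS, attributing the whole adjustment to the retail division — as the predatory pricing story requires — is hard to justify.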
Look at Operating Cash Flow Instead of Free Cash Flow
Again, while cash flow measures cannot prove or disprove the existence of predatory pricing, a positive cash flow measure should make us more skeptical of such accusations. In the retail sector, operating cash flow is the appropriate metric to consider. As shown above, Amazon has had positive (and increasing) operating cash flow since 2002.
Your Theory of Harm Is Also Known as “Investment”
Third, in general, Sussman’s novel predatory pricing theory is indistinguishable from pro-competitive behavior in an industry with high fixed costs. From the abstract (emphasis added):
[N]egative cash flow firm[s] … can achieve greater market share through predatory pricing strategies that involve long-term below average variable cost prices … By charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies, these firms are able to rationalize their predatory pricing practices to investors and shareholders.
“Charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies” is literally what it means to invest in capex and R&D.
Sussman’s paper presents a clever attempt to work around the doctrinal limitations on predatory pricing. But, if courts seriously adopt an approach like this, they will be putting in place a legal apparatus that quite explicitly focuses on discouraging investment. This is one of the last things we should want antitrust law to be doing.
Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.
Further complicating matters, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.
And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.
Despite Liebowitz & Margolis’ critique, the underlying network effects story remains dominant in many quarters. And in that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.
A Coasian perspective
It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).
Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.
Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.
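Excess inertia can be made concrete with a minimal adoption model: each user’s payoff is a standard’s standalone quality plus a network term, and a superior entrant can be blocked whenever the incumbent’s installed base outweighs its quality edge. A toy sketch, with all parameters invented:

```python
# Toy excess-inertia model: payoff = quality + alpha * installed_base.
# All parameters are invented for illustration.

alpha = 0.01  # weight each user places on network size

def payoff(quality, installed_base):
    return quality + alpha * installed_base

incumbent_quality, entrant_quality = 1.0, 1.2   # entrant is superior
incumbent_base, entrant_base = 100, 0

# A lone switcher compares joining the entrant's empty network against
# staying on the incumbent's large one:
stay = payoff(incumbent_quality, incumbent_base)
switch = payoff(entrant_quality, entrant_base)
print(f"Stay: {stay:.2f}  Switch alone: {switch:.2f}")

# If adopters could coordinate (or be compensated, per Coase), moving as a
# group makes every user better off:
all_switch = payoff(entrant_quality, incumbent_base)
print(f"Switch together: {all_switch:.2f}")
```

No individual switches even though everyone switching is the better outcome; the question, as with any Coasean externality, is whether some contractual or institutional arrangement can get the group across that gap.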
In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.
And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.
Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information. In these contexts, there is at least some room to pressure peers into adopting a new platform.
Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, workplace messaging platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.
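The critical-mass intuition behind these coordinated adoption decisions can be sketched with a toy threshold model in the spirit of Granovetter's cascade models (the thresholds and group sizes below are invented for illustration): each user joins once enough others have, and a single coordinated block of adopters can set off a chain reaction that uncoordinated individuals never would.

```python
# Toy threshold-adoption model illustrating how one coordinated block
# of adopters (e.g., a firm mandating a tool) can push a network past
# critical mass. Thresholds and sizes are invented for illustration.

def cascade(thresholds, seed):
    """Each user adopts once the number of adopters reaches their
    personal threshold; `seed` users adopt unconditionally. Returns
    the final total number of adopters."""
    adopters = seed
    changed = True
    while changed:
        changed = False
        new_total = seed + sum(1 for t in thresholds if t <= adopters)
        if new_total != adopters:
            adopters = new_total
            changed = True
    return adopters

# 100 would-be users: user k joins once k others have already joined.
thresholds = list(range(1, 101))

print(cascade(thresholds, seed=0))  # no coordinated seed: nobody moves
print(cascade(thresholds, seed=1))  # one seeded adopter tips all 100
```

With no seed, no one's threshold is met and adoption never starts; a single unconditional adopter triggers the full cascade. This is the sense in which a firm-level decision, or token-subsidized early adopters, can substitute for the missing coordination among individual users.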
A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:
Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.
All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.
Misguided regulatory responses
Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.
Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.
There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.
Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.
The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.
Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when those incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin with, it assumes that regulators can reliably steer markets toward these optimal outcomes, which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if that were possible) may well outweigh the benefits it achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.
Instead, the optimal policy depends in large part on whether, in a given case, users and firms can coordinate their decisions without intervention so as to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.
And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:
The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.
A better way forward
Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.
Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book, and avoid blackboard economics in favor of a more granular approach.