Archives For antitrust and platforms

In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee, and later questioning by Senator Josh Hawley, focused on an episode of alleged anti-conservative bias: Google’s threat to demonetize The Federalist, a conservative publisher, unless it exercised greater control over its comments section. The senators connected this to Google’s “dominance,” arguing that it is only because Google’s ad services are essential that Google can dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, whether through deplatforming or demonetization.

Before even getting into how political bias might be incorporated into antitrust analysis, though, it should be noted that there is likely no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers with editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then follow that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree. 

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, the problem is thornier than privacy. All but the most exhibitionistic would prefer more privacy to less, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to define consumer welfare in a situation where, in moderation decisions, one group’s preferences must come at the expense of another’s.

Consider the case of The Federalist again. The allegation is that Google is imposing its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform to users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown, or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary, and will take great pains to ensure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users could switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they have a strong incentive to limit political bias in their moderation practices. The ease of switching to a platform that markets itself as more free-speech-friendly, like Parler, shows that entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias.

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as with data suggesting that conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry. 

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and explicitly ruled out as “killers” by the paper, for example: it does not harm overall innovation or output if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too much. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” in pharma with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneous condemnation of procompetitive mergers is significantly higher. 
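The base-rate point can be made concrete with a quick Bayesian sketch. The 5.3% pharma figure is from the Cunningham, et al. paper; the 1% tech base rate and the screening-accuracy numbers below are invented assumptions purely for illustration:

```python
# Why a rarer phenomenon means more erroneous condemnations.
# The 5.3% base rate is Cunningham, et al.'s pharma estimate; the 1%
# tech base rate and the screen's accuracy numbers are invented
# assumptions for illustration.

def false_discovery_rate(base_rate, sensitivity, false_positive_rate):
    """Share of condemned mergers that were actually procompetitive."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return false_positives / (true_positives + false_positives)

# A fairly accurate screen: catches 80% of true "killers", wrongly
# flags 10% of benign mergers.
pharma = false_discovery_rate(base_rate=0.053, sensitivity=0.8,
                              false_positive_rate=0.1)  # ~0.69
tech = false_discovery_rate(base_rate=0.01, sensitivity=0.8,
                            false_positive_rate=0.1)    # ~0.93

# Even in pharma, most condemnations would hit benign deals; at a lower
# tech base rate, almost all of them would.
assert tech > pharma
```

On these (hypothetical) numbers, even a review process that correctly identifies most true “killers” would condemn mostly procompetitive deals, and the problem worsens as the base rate falls.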

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.

In all these high-profile cases the acquiring companies expanded the service and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument to the scenarios described in the Cunningham, et al. paper, where development of a new drug is shut down by the acquirer ostensibly to protect their existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow that the (presumed) existence of false negatives implies underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. Analogously, a well-run court system might still fail to convict some criminals, because the cost of accidentally convicting an innocent person is so high.
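The asymmetry can be illustrated with a back-of-the-envelope expected-cost comparison. All probabilities and costs below are invented for illustration, not estimates:

```python
# Back-of-the-envelope comparison of two enforcement regimes.
# All probabilities and costs are invented for illustration only.

def expected_error_cost(p_false_negative, cost_fn, p_false_positive, cost_fp):
    """Expected welfare loss from enforcement errors, per reviewed merger."""
    return p_false_negative * cost_fn + p_false_positive * cost_fp

# Suppose wrongly blocking a procompetitive merger (a false positive)
# destroys three times the value of wrongly clearing an anticompetitive
# one (a false negative). A lenient regime that tolerates some false
# negatives can still be the cheaper policy overall:
lenient = expected_error_cost(p_false_negative=0.05, cost_fn=1.0,
                              p_false_positive=0.01, cost_fp=3.0)  # ~0.08
strict = expected_error_cost(p_false_negative=0.01, cost_fn=1.0,
                             p_false_positive=0.10, cost_fp=3.0)   # ~0.31

assert lenient < strict  # more false negatives, yet lower expected cost
```

The point of the sketch is simply that observing false negatives tells you nothing about net welfare until both error rates and both error costs are on the table.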

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it suggested that the review process could have been conducted differently, it also highlighted efficiencies that arose from each merger, and did not conclude that any had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control. 

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition which, at least, gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well. 

This could provide a basis for blocking almost any acquisition by an incumbent firm on ‘scale’ grounds. After all, if a photo editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition, however unlikely, might not prevent the emergence of a new platform on a similar scale? Such a standard would make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or never to be started in the first place).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or fewer. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief, introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm (and in many cases assumption is the most one can say of them) and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why do new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach — i.e., arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur, which made its nectar incredibly hard for insects to reach. Rational observers at the time could be forgiven for thinking that the plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to them being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms notably control who is allowed on their platform and how they can interact with users. Apple vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open,” yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP — so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon, almost by definition, exerts significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc.

Finally, Google Search and Android sit in the bottom left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open”. While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement. 

Enforcement

Readers might ask: what is the point of this classification? The answer is that in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization — the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the EU’s Google cases sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its own services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions and investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are made to share them (or, at the very least, to monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems, in both the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized — Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards. At least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms so far failed to achieve truly meaningful success at consumers’ end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically -and perhaps anticompetitively- thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into detail over the merits of the first conjecture; current antitrust debates have endlessly rehashed it. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked — especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but this tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms’ ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business models that the Commission reprimanded. Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default. None of this has deterred consumers. A sizable share of them notably opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s macOS).

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision. 

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards both closed and highly propertized platforms — the opposite of what the Commission and many other competition authorities favor. The reasons for this trend remain poorly understood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things they fail to understand. The digital economy might just be the latest chapter in this unfortunate trend.

In mid-November, the 50 state attorneys general (AGs) investigating Google’s advertising practices expanded their antitrust probe to include the company’s search and Android businesses. Texas Attorney General Ken Paxton, the lead on the case, was supportive of the development, but made clear that other states would manage the search and Android investigations separately. While the attorneys might see a benefit in splitting up the investigations, platforms like Google need to be understood as a coherent whole. If the state AGs’ case is truly concerned with the overall welfare of consumers, it will need to be firmly grounded in the unique economics of such platforms.

Back in September, 50 attorneys general, including those of Washington, DC, and Puerto Rico, announced an investigation into Google. In opening the case, Paxton said, “There is nothing wrong with a business becoming the biggest game in town if it does so through free market competition, but we have seen evidence that Google’s business practices may have undermined consumer choice, stifled innovation, violated users’ privacy, and put Google in control of the flow and dissemination of online information.” While the original document demands focused on Google’s “overarching control of online advertising markets and search traffic,” reports since then suggest that the primary investigation centers on online advertising.

Defining the market

Market definition is the first, and arguably the most important, step in an antitrust case, and Paxton has tipped his hand: the investigation is converging on the online ad market. Yet he faltered when he wrote in The Wall Street Journal that, “Each year more than 90% of Google’s $117 billion in revenue comes from online advertising. For reference, the entire market for online advertising is around $130 billion annually.” As Patrick Hedger of the Competitive Enterprise Institute was quick to note, Paxton compared Google’s global revenue against domestic advertising statistics. In reality, Google’s share of the online advertising market in the United States is 37 percent and is widely expected to fall.
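A quick back-of-the-envelope check, using only the figures quoted above, shows why mixing a global numerator with a domestic denominator is misleading:

```python
# Figures as quoted in the text (Paxton's WSJ op-ed and US market estimates).
google_global_revenue = 117e9   # Google's annual revenue, worldwide
ad_share_of_revenue = 0.90      # share of that revenue from advertising
us_online_ad_market = 130e9     # annual US online ad market

# Dividing Google's *global* ad revenue by the *US* market implies an
# apparent "share" of roughly 81% -- a nonsense number, since the
# numerator is worldwide and the denominator is domestic.
implied_share = (google_global_revenue * ad_share_of_revenue) / us_online_ad_market
print(f"Implied (apples-to-oranges) share: {implied_share:.0%}")

# Google's actual share of US online advertising, per the text, is ~37%.
print("Actual US share: 37%")
```

The apples-to-oranges division more than doubles Google's apparent dominance, which is precisely Hedger's objection.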

When Google faced scrutiny by the Federal Trade Commission in 2013, the leaked staff report explained that “the Commission and the Department of Justice have previously found online ‘search advertising’ to be a distinct product market.” This finding, which dates from 2007, simply wouldn’t stand today. Facebook’s ad platform was launched in 2007 and has grown to become a major competitor to Google. Even more recently, Amazon has jumped into the space and independent platforms like Telaria, Rubicon Project, and The Trade Desk have all made inroads. In contrast to the late 2000s, advertisers now use about four different online ad platforms.

Moreover, the relationship between ad prices and industry concentration is complicated. In traditional economic analysis, fewer suppliers of a product generally translates into higher prices. In the online ad market, however, keyword targeting means that only a handful of well-matched advertisers compete for any given user. Because those advertisers have access to superior information, research finds that more concentration tends to lead to lower search engine revenues.

The addition of new fronts in the state AGs’ investigation could spell disaster for consumers. While search and advertising are distinct markets, it is the act of tying the two together that makes platforms like Google valuable to users and advertisers alike. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have outsized effects on overall platform value because each side responds to the other. If users experience an increase in price or a reduction in quality, they will use the platform less or log off completely. Advertisers see this change in users and react by reducing their demand for ad placements as well. When advertisers drop out, the total amount of content also recedes, and users react once again. Economists call these relationships demand interdependencies: the demand on one side of the market is interdependent with demand on the other. Research on magazines, newspapers, and social media sites supports the existence of demand interdependencies.
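The feedback loop described above can be illustrated with a toy two-sided model. This is purely a sketch: the response functions and exponents are assumptions chosen for illustration, not estimates of any real platform.

```python
def equilibrium(shock, rounds=200):
    """Toy two-sided platform with cross-side feedback.

    Users respond to the advertiser-funded side; advertisers respond to
    the user base. Exponents below 1 are assumed so the loop converges.
    Both sides are indexed to 100 before the shock.
    """
    users, advertisers = 100.0, 100.0
    for _ in range(rounds):
        # A quality/price shock hits users directly...
        users = 100.0 * (advertisers / 100.0) ** 0.3 * (1.0 - shock)
        # ...and advertisers respond to the smaller user base.
        advertisers = 100.0 * (users / 100.0) ** 0.5
    return users, advertisers

# In this toy model, a 10% direct hit to user demand ends up costing
# roughly 12% of users once the advertiser response feeds back, and
# about 6% of advertisers who were never directly targeted by the shock.
u, a = equilibrium(shock=0.10)
print(f"users: {u:.1f}, advertisers: {a:.1f}")
```

The point is qualitative, not quantitative: because each side's demand depends on the other, a disturbance on one side is amplified across the whole platform, which is why analyzing search and advertising in isolation misses the economics.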

Economists David Evans and Richard Schmalensee, who were cited extensively in the Supreme Court case Ohio v. American Express, explained the importance of integrating these interdependencies into competition analysis: “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If they are ignored, the typical analytical tools will yield incorrect assessments. Understanding these relationships makes the investigation all the more difficult.

The limits of remedies

Most likely, this investigation will follow the trajectory of the Microsoft case in the 1990s, when states did the legwork for a larger case brought by the Department of Justice (DoJ). The DoJ already has its own investigation into Google and will probably pull all of the parties together for one large suit. Google is also subject to a probe by the House of Representatives Judiciary Committee. What is certain is that Google will be saddled with years of regulatory scrutiny; what remains unclear is what kind of changes the AGs are after.

The investigation might aim to secure behavioral changes, but these often come at a cost in platform industries. The European Commission, for example, got Google to change its practices with its Android operating system for mobile phones. Much like search and advertising, the Android ecosystem is a platform with cross-subsidization and demand interdependencies between the various sides of the market. Because the company was ordered to stop tying its apps to the Android operating system, manufacturers of phones and tablets now have to pay a licensing fee in Europe if they want Google’s apps and the Play Store. A remedy meant to change one side of the platform resulted in those relationships being unbundled. When regulators force cross-subsidization to become explicit prices, consumers are the ones who pay.

The absolute worst-case scenario would be a breakup of Google, which has been a centerpiece of Senator Elizabeth Warren’s presidential platform. As I explained last year, that would be a death warrant for the company:

[T]he value of both Facebook and Google comes in creating the platform, which combines users with advertisers. Before the integration of ad networks, the search engine industry was struggling and it was simply not a major player in the Internet ecosystem. In short, the search engines, while convenient, had no economic value. As Michael Moritz, a major investor of Google, said of those early years, “We really couldn’t figure out the business model. There was a period where things were looking pretty bleak.” But Google didn’t pave the way. Rather, Bill Gross at GoTo.com succeeded in showing everyone how advertising could work to build a business. Google founders Larry Page and Sergey Brin merely adopted the model in 2002 and by the end of the year, the company was profitable for the first time. Marrying the two sides of the platform created value. Tearing them apart will also destroy value.

The state AGs need to resist making this investigation into a political showcase. As Pew noted in documenting the rise of North Carolina Attorney General Josh Stein to national prominence, “What used to be a relatively high-profile position within a state’s boundaries has become a springboard for publicity across the country.” While some might cheer the opening of this investigation, consumer welfare needs to be front and center. To properly understand how consumer welfare might be impacted by an investigation, the state AGs need to take seriously the path already laid out by platform economics. For the sake of consumers, let’s hope they are up to the task. 

[This post is the seventh in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Alec Stapp, Research Fellow at the International Center for Law & Economics]

Should we break up Microsoft? 

In all the talk of breaking up “Big Tech,” no one seems to mention the biggest tech company of them all. Microsoft’s market cap is currently higher than those of Apple, Google, Amazon, and Facebook. If big is bad, then, at the moment, Microsoft is the worst.

Apart from size, antitrust activists also claim that the structure and behavior of the Big Four — Facebook, Google, Apple, and Amazon — are why they deserve to be broken up. But the activists never include Microsoft, which is curious given that most of their critiques also apply to the largest tech giant:

  1. Microsoft is big (current market cap exceeds $1 trillion)
  2. Microsoft is dominant in narrowly-defined markets (e.g., desktop operating systems)
  3. Microsoft is simultaneously operating and competing on a platform (i.e., the Microsoft Store)
  4. Microsoft is a conglomerate capable of leveraging dominance from one market into another (e.g., Windows, Office 365, Azure)
  5. Microsoft has its own “kill zone” for startups (196 acquisitions since 1994)
  6. Microsoft operates a search engine that preferences its own content over third-party content (i.e., Bing)
  7. Microsoft operates a platform that moderates user-generated content (i.e., LinkedIn)

To be clear, this is not to say that an antitrust case against Microsoft is as strong as the case against the others. Rather, it is to say that the cases against the Big Four on these dimensions are as weak as the case against Microsoft, as I will show below.

Big is bad

Tim Wu published a book last year arguing for more vigorous antitrust enforcement — including against Big Tech — called “The Curse of Bigness.” As you can tell by the title, he argues, in essence, for a return to the bygone era of “big is bad” presumptions. In his book, Wu mentions “Microsoft” 29 times, but only in the context of its 1990s antitrust case. On the other hand, Wu has explicitly called for antitrust investigations of Amazon, Facebook, and Google. It’s unclear why big should be considered bad when it comes to the latter group but not when it comes to Microsoft. Maybe bigness isn’t actually a curse, after all.

As the saying goes in antitrust, “Big is not bad; big behaving badly is bad.” This aphorism arose to counter erroneous reasoning during the era of structure-conduct-performance when big was presumed to mean bad. Thanks to an improved theoretical and empirical understanding of the nature of the competitive process, there is now a consensus that firms can grow large either via superior efficiency or by engaging in anticompetitive behavior. Size alone does not tell us how a firm grew big — so it is not a relevant metric.

Dominance in narrowly-defined markets

Critics of Google say it has a monopoly on search and critics of Facebook say it has a monopoly on social networking. Microsoft is similarly dominant in at least a few narrowly-defined markets, including desktop operating systems (Windows has a 78% market share globally): 

Source: StatCounter

Microsoft is also dominant in the “professional networking platform” market after its acquisition of LinkedIn in 2016. And the legacy tech giant is still the clear leader in the “paid productivity software” market. (Microsoft’s Office 365 revenue is roughly 10x Google’s G Suite revenue).

The problem here is obvious: these are overly narrow market definitions for conducting an antitrust analysis. Is it true that Facebook’s platforms are the only services that can connect you with your friends? Should we really restrict the productivity market to “paid”-only options (as the EU similarly did in its Android decision) when there are so many free options available? These questions are laughable. Proper market definition requires considering whether a hypothetical monopolist could profitably impose a small but significant and non-transitory increase in price (SSNIP). If not (which is likely the case in the narrow markets above), then we should employ a broader market definition in each case.
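The SSNIP logic can be made concrete with standard critical-loss arithmetic. The margin figure below is an assumption for illustration, not data from any actual case:

```python
def critical_loss(ssnip, margin):
    """Break-even share of sales a hypothetical monopolist can afford to
    lose before a price increase of `ssnip` becomes unprofitable.
    Standard textbook formula: CL = t / (t + m)."""
    return ssnip / (ssnip + margin)

# With a 5% SSNIP and an assumed 60% gross margin, losing more than
# about 7.7% of sales makes the price rise unprofitable. If free
# alternatives outside a "paid productivity software" market would
# divert more customers than that, the narrow definition fails and the
# market must be drawn more broadly.
cl = critical_loss(ssnip=0.05, margin=0.60)
print(f"Critical loss: {cl:.1%}")
```

The higher the margin, the smaller the loss a monopolist can tolerate, which is why narrow markets full of free substitutes rarely survive the test.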

Simultaneously operating and competing on a platform

Elizabeth Warren likes to say that if you own a platform, then you shouldn’t both be an umpire and have a team in the game. Let’s put aside the problems with that flawed analogy for now. What she means is that you shouldn’t both run the platform and sell products, services, or apps on that platform (because it’s inherently unfair to the other sellers). 

Warren’s solution to this “problem” would be to create a regulated class of businesses called “platform utilities” which are “companies with an annual global revenue of $25 billion or more and that offer to the public an online marketplace, an exchange, or a platform for connecting third parties.” Microsoft’s revenue last quarter was $32.5 billion, so it easily meets the first threshold. And Windows obviously qualifies as “a platform for connecting third parties.”

Just as in mobile operating systems, desktop operating systems are compatible with third-party applications. These third-party apps can be free (e.g., iTunes) or paid (e.g., Adobe Photoshop). Of course, Microsoft also makes apps for Windows (e.g., Word, PowerPoint, Excel, etc.). But the more you think about the technical details, the blurrier the line between the operating system and applications becomes. Is the browser an add-on to the OS or a part of it (as Microsoft Edge appears to be)? The most deeply-embedded applications in an OS are simply called “features.”

Even though Warren hasn’t explicitly mentioned that her plan would cover Microsoft, it almost certainly would. She previously left Apple out of the Medium post announcing her policy, only to tell a journalist later that the iPhone maker would also be prohibited from producing its own apps. What Warren fails to acknowledge is that trying to police the line between a first-party platform and third-party applications would be a nightmare for companies and regulators alike, likely leading to less innovation and higher prices for consumers (as they attempt to rebuild their previous bundles).

Leveraging dominance from one market into another

The core critique in Lina Khan’s “Amazon’s Antitrust Paradox” is that the very structure of Amazon itself is what leads to its anticompetitive behavior. Khan argues (in spite of the data) that Amazon uses profits in some lines of business to subsidize predatory pricing in other lines of businesses. Furthermore, she claims that Amazon uses data from its Amazon Web Services unit to spy on competitors and snuff them out before they become a threat.

Of course, this is similar to the theory of harm in Microsoft’s 1990s antitrust case: that the desktop giant was leveraging its monopoly in the operating system market into the browser market. Why don’t we hear the same concern today about Microsoft? Like both Amazon and Google, Microsoft could uncharitably be described as extending its tentacles into as many sectors of the economy as possible. Here are some of the markets in which Microsoft competes (and note how the Big Four also compete in many of these same markets):

What these potential antitrust harms leave out are the clear consumer benefits from bundling and vertical integration. Microsoft’s relationships with customers in one market might make it the most efficient vendor in related — but separate — markets. It is unsurprising, for example, that Windows customers would also frequently be Office customers. Furthermore, the zero marginal cost nature of software makes it an ideal product for bundling, which redounds to the benefit of consumers.

The “kill zone” for startups

In a recent article for The New York Times, Tim Wu and Stuart A. Thompson criticize Facebook and Google for the number of acquisitions they have made. They point out that “Google has acquired at least 270 companies over nearly two decades” and “Facebook has acquired at least 92 companies since 2007”, arguing that allowing such a large number of acquisitions to occur is conclusive evidence of regulatory failure.

Microsoft has made 196 acquisitions since 1994, but it receives no mention in the NYT article (or in most of the discussion around supposed “kill zones”). Yet acquisitions by Microsoft, Facebook, or Google are, in general, not problematic. They provide a crucial channel for liquidity in the venture capital and startup communities (the other channel being IPOs). According to the latest data from Orrick and Crunchbase, between 2010 and 2018 there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion.

By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. Making it harder for a startup to be acquired would not result in more venture capital investment (and therefore not in more IPOs), according to recent research by Gordon M. Phillips and Alexei Zhdanov. The researchers show that “the passage of a pro-takeover law in a country is associated with more subsequent VC deals in that country, while the enactment of a business combination antitakeover law in the U.S. has a negative effect on subsequent VC investment.”
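The exit-channel comparison in the two figures above can be summarized with simple arithmetic, using only the numbers quoted from Orrick/Crunchbase and Ritter:

```python
# 2010-2018 tech exit data as quoted in the text.
acq_count, acq_value = 21_844, 1.193e12   # startup acquisitions, total deal value
ipo_count, ipo_value = 331, 649.6e9       # tech IPOs, total market cap

# Acquisitions outnumber IPOs roughly 66 to 1...
print(f"Acquisitions per IPO: {acq_count / ipo_count:.0f}")

# ...while the average acquisition (~$55M) is far smaller than the
# average IPO (~$2B). Acquisitions are the workhorse exit for the long
# tail of startups that will never reach IPO scale.
print(f"Average acquisition: ${acq_value / acq_count / 1e6:.0f}M")
print(f"Average IPO: ${ipo_value / ipo_count / 1e9:.1f}B")
```

That 66-to-1 ratio is why choking off the acquisition channel would be felt across the entire venture ecosystem, not just at a few large deals.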

As investor and serial entrepreneur Leonard Speiser said recently, “If the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.” 

Search engine bias

Google is often accused of biasing its search results to favor its own products and services. The argument goes that if we broke them up, a thousand search engines would bloom and competition among them would lead to less-biased search results. While it is a very difficult — if not impossible — empirical question to determine what a “neutral” search engine would return, one attempt by Josh Wright found that “own-content bias is actually an infrequent phenomenon, and Google references its own content more favorably than other search engines far less frequently than does Bing.” 

The report goes on to note that “Google references own content in its first results position when no other engine does in just 6.7% of queries; Bing does so over twice as often (14.3%).” Arguably, users of a particular search engine might be more interested in seeing content from that company because they have a preexisting relationship. But regardless of how we interpret these results, it’s clear this is not a frequent phenomenon.

So why is Microsoft being left out of the antitrust debate now?

One potential reason why Google, Facebook, and Amazon have been singled out for criticism of practices that seem common in the tech industry (and are often pro-consumer) is the prevailing business model in the journalism industry. Google and Facebook are by far the largest competitors in the digital advertising market, and Amazon is expected to be the third-largest player by next year, according to eMarketer. As Ramsi Woodcock pointed out, news publications are also competing for advertising dollars, the type of conflict of interest that usually would warrant disclosure if, say, a journalist held stock in a company they were covering.

Or perhaps Microsoft has successfully avoided receiving the same level of antitrust scrutiny as the Big Four because it is neither primarily consumer-facing like Apple or Amazon nor does it operate a platform with a significant amount of political speech via user-generated content (UGC) like Facebook or Google (YouTube). Yes, Microsoft moderates content on LinkedIn, but the public does not get outraged when deplatforming merely prevents someone from spamming their colleagues with requests “to add you to my professional network.”

Microsoft’s core areas are in the enterprise market, which allows it to sidestep the current debates about the supposed censorship of conservatives or unfair platform competition. To be clear, consumer-facing companies or platforms with user-generated content do not uniquely merit antitrust scrutiny. On the contrary, the benefits to consumers from these platforms are manifest. If this theory about why Microsoft has escaped scrutiny is correct, it means the public discussion thus far about Big Tech and antitrust has been driven by perception, not substance.


[This post is the sixth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Thibault Schrepel, Faculty Associate at the Berkman Center at Harvard University and Assistant Professor in European Economic Law at Utrecht University School of Law.]

The pretense of ignorance

Over the last few years, I have published a series of antitrust conversations with Nobel laureates in economics. I have discussed big tech dominance with most of them, and although they have different perspectives, all of them agreed on one thing: they do not know what the effect of breaking up big tech would be. In fact, I have never spoken with any economist who was able to show me convincing empirical evidence that breaking up big tech would, on net, be good for consumers. The same goes for political scientists; I have never read any article that, taking everything into consideration, proves empirically that breaking up tech companies would be good for protecting democracies, if that is the objective. (Please note that I am not even discussing the fact that using antitrust law to do so would violate the rule of law; for more on the subject, click here.)

This reminds me of Friedrich Hayek’s Nobel memorial lecture, in which he discussed the “pretense of knowledge.” He argued that some issues will always remain too complex for humans (even helped by quantum computers and the most advanced AI; that’s right!). Breaking up big tech is one such issue; it is simply impossible to consider simultaneously the microeconomic and macroeconomic impacts of such an enormous undertaking, which would affect, literally, billions of people. Not to mention the political, sociological, and legal issues, all of which combined are beyond human understanding.

Ignorance + fear = fame

In the absence of clear-cut conclusions, here is why (I think) some officials are arguing for breaking up big tech. First, it may be that some of them actually believe it would be great. But I am sure we agree that beliefs should not be a valid basis for such actions. More realistically, the answer can be found in the work of another Nobel laureate, James Buchanan, and in particular his 1978 lecture in Vienna entitled “Politics Without Romance.”

In his lecture and the paper that emerged from it, Buchanan argued that while markets fail, so do governments. The latter is especially relevant insofar as top officials entrusted with public power may, occasionally at least, use that power to benefit their personal interests rather than the public interest. Thus, the presumption that government-imposed corrections for market failures always accomplish the desired objectives must be rejected. Taking that into consideration, it follows that the expected effectiveness of public action should always be established as precisely and scientifically as possible before taking action. Integrating these insights from Hayek and Buchanan, we must conclude that it is not possible to know whether the effects of breaking up big tech would on net be positive.

The question, then, is why some officials are arguing for breaking up tech giants in the absence of positive empirical evidence. Well, because defending such actions may help them achieve their personal goals. Often, it is more important for public officials to show their muscle and take action than to show great care about reaching a positive net result for society. This is especially true when it is practically impossible to evaluate the outcome due to the scale and complexity of the changes that ensue. That enables these officials to take credit for being bold while avoiding blame for the harms.

But for such a call to be profitable for public officials, they first must legitimize the potential action in the eyes of the majority of the public. Most consumers evidently like the services of the tech giants, which is why it is crucial for officials engaged in such a strategy to demonize those companies and explain to consumers why they are wrong to enjoy them. Only then does defending the breakup of tech giants become politically valuable.

Some data, one trend

In a recent paper entitled “Antitrust Without Romance,” I have analyzed the speeches of the five current FTC commissioners, as well as the speeches of the current and three previous EU Competition Commissioners. What I found is an increasing trend to demonize big tech companies. In other words, public officials increasingly seek to prepare the general public for the idea that breaking up tech giants would be great.

In Europe, current Competition Commissioner Margrethe Vestager has sought to establish an opposition between the people (referred to as “us”) and tech companies (referred to as “them”) in more than 80% of her speeches. She further describes these companies as manipulating the public and unleashing violence: she says they “distort or fabricate information, manipulate people’s views and degrade public debate” and help “harmful, untrue information spread faster than ever, unleashing violence and undermining democracy.” She even says they create a “danger of death.” On this basis, she mentions the possibility of breaking them up (for more data about her speeches, see this link).

In the US, we did not observe a similar trend. Assistant Attorney General Makan Delrahim, who has responsibility for antitrust enforcement at the Department of Justice, describes the relationship between the people and companies as oppositional in fewer than 10% of his speeches. The same goes for most of the FTC commissioners (to see all the data about their speeches, see this link). The exceptions are FTC Chairman Joseph J. Simons, who describes companies’ behavior as “bad” from time to time (and underlines that consumers “deserve” better), and Commissioner Rohit Chopra, who describes the relationship between companies and the people as oppositional in 30% of his speeches. Chopra also frequently labels companies as “bad.” These are minor signs of big tech demonization compared to what European officials are currently doing. Unfortunately, though, part of the US antitrust literature (which does not hide its political objectives) pushes for demonizing big tech companies. One may reasonably fear that this trend will grow in the US as it has in Europe, especially considering the upcoming presidential campaign, in which far-right and far-left politicians seem to agree about the need to break up big tech.

And yet, let’s remember that no one has any documented, tangible, and reproducible evidence that breaking up tech giants would be good for consumers, or for societies at large, or, in fact, for anyone (even dolphins, okay). It might be a good idea; it might be a bad idea. Who knows? But the lack of evidence either way militates against taking such action. Meanwhile, there is strong evidence that these discussions are fueled by a handful of individuals wishing to benefit from such a call for action. They do so, first, by depicting tech giants as the new elite standing in opposition to the people, and then by portraying themselves as the only saviors capable of taking action.

Epilogue: who knows, life is not a Tarantino movie

For the last 30 years, antitrust law has been largely immune to strategic takeover by political interests. It may now be returning to a previous era in which it was the instrument of a few. This transformation is already happening in Europe (it is expected to hit case law there quite soon) and is getting real in the US, where groups with openly political goals are making antitrust law a Trojan horse for their personal interests. The only semblance of evidence they bring is a few allegedly harmful micro-practices (see Amazon’s Antitrust Paradox), which they use as a basis for defending the urgent need for macro, structural measures, such as breaking up tech companies. This is disproportionate, but most of all, in the absence of better knowledge, purely opportunistic and potentially foolish. Who knows at this point whether antitrust law will come out of this populist and moralist episode intact? And who knows what the next idea of those who want to use antitrust law for purely political purposes will be? Life is not a Tarantino movie; it may end badly.

[This post is the fifth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by William Rinehart, Director of Technology and Innovation Policy at American Action Forum.]

Back in May, the New York Times published an op-ed by Chris Hughes, one of the founders of Facebook, in which he called for the breakup of his former firm. Hughes joins a growing chorus, including Senator Warren, Roger McNamee, and others who have called for the breakup of “Big Tech” companies. If Business Insider’s polling is correct, this chorus seems to be quite effective: nearly 40 percent of Americans now support breaking up Facebook.

Hughes’ position is perhaps understandable given his other advocacy activities. But it is worth bearing in mind that he was likely never deeply involved in Facebook’s technical backend, business development, or sales; rather, he was important in setting up the company’s public relations and feedback mechanisms. This matters because the technical and organizational challenges in breaking up big tech are enormous and underappreciated.

The Technics of Structural Remedies

As I explained at AAF last year,

Any trust-busting action would also require breaking up the company’s technology stack — a general name for the suite of technologies powering web sites. For example, Facebook developed its technology stack in-house to address the unique problems facing Facebook’s vast troves of data. Facebook created BigPipe to dynamically serve pages faster, Haystack to store billions of photos efficiently, Unicorn for searching the social graph, TAO for storing graph information, Peregrine for querying, and MysteryMachine to help with end-to-end performance analysis. The company also invested billions in data centers to quickly deliver video, and it split the cost of an undersea cable with Microsoft to speed up information travel. Where do you cut these technologies when splitting up the company?

That list, however, leaves out the company’s backend AI platform, known as Horizon. As Christopher Mims reported in the Wall Street Journal, Facebook put serious resources into creating Horizon, and it has paid off. About a fourth of the engineers at the company were using the platform in 2017, even though only 30 percent of them were experts in it. The system, as Joaquin Candela explained, is powerful because it was built to be “a very modular layered cake where you can plug in at any level you want.” As Mims was careful to explain, the platform was designed to be highly modular: Horizon was meant to be useful across a range of complex problems and different domains. If WhatsApp and Instagram were separated from Facebook, who would get that asset? Would Facebook retain the core tech and then have to sell access to it at a regulated rate?

Lessons from Attempts to Manage Competition in the Tobacco Industry 

For all of the talk about breaking up Facebook and other tech companies, few really grasp just how lackluster this remedy has been in the past. The classic case to study isn’t AT&T or Standard Oil, but the American Tobacco Company.

The American Tobacco Company came about after a series of mergers in 1890 orchestrated by J.B. Duke. Then, between 1907 and 1911, the federal government filed and eventually won an antitrust lawsuit, which dissolved the trust into three companies. 

Duke was unique for his time because he worked to merge all of the previously separate companies into a single, coherent firm. The organization that stood trial in 1907 was a modern company, organized around a functional structure. A single purchasing department managed all the leaf purchasing. Tobacco processing plants were dedicated to specific products without any concern for their previous ownership. The American Tobacco Company was rational in a way few other companies were at the time.

These divisions were pulled apart over eight months. Factories, distribution and storage facilities, back offices and name brands were all separated by government fiat. It was a difficult task. As historian Allan M. Brandt details in “The Cigarette Century,”

It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.

So how did consumers fare after the breakup? Most research suggests that the breakup didn’t substantially change the markets where American Tobacco was involved. Real cigarette prices for consumers were stable, suggesting there wasn’t price competition. The three companies coming out of the suit earned the same profit from 1912 to 1949 as the original American Tobacco Company Trust earned in its heyday from 1898 to 1908. As for the upstream suppliers, the price paid to tobacco farmers didn’t change either. The breakup was a bust.  

The difficulties in breaking up American Tobacco stand in contrast to the methods employed with Standard Oil and AT&T. For them, the split was made along geographic lines. Standard Oil was broken into 34 regional companies: Standard Oil of New Jersey became Exxon, while Standard Oil of California changed its name to Chevron. In the same way, AT&T was broken up into the Regional Bell Operating Companies. Facebook doesn’t have geographic lines.

The Lessons of the Past Applied to Facebook

Facebook combines elements of the two primary firm structures and is thus considered a “matrix form” company. While the American Tobacco Company employed a functional organization, the most common form of company organization today is the divisional form. This method of firm rationalization separates the company’s operational functions by product in order to optimize efficiencies. Under a divisional structure, each product is essentially a company unto itself: engineering, finance, sales, and customer service are all unified within one division, which sits separate from the other divisions of the company. Like countless other tech companies, Facebook merges elements of the two forms, relying upon flexible teams to solve problems that tend to cross the normal divisional and functional bounds. Communication and coordination are prioritized among teams, and Facebook invests heavily to ensure cross-company collaboration.

Advocates think that undoing the WhatsApp and Instagram mergers will be easy, but there aren’t clean divisional lines within the company. Indeed, Facebook has been working toward a vast reengineering of its backend for some time that, when completed later this year or in early 2020, will effectively merge all of the companies into one ecosystem. Attempting to dismember this ecosystem would almost certainly be disastrous: not just a legal nightmare, but a technical and organizational nightmare as well.

Much like American Tobacco, any attempt to split off WhatsApp and Instagram from Facebook will probably fall flat on its face because government officials will have to create three regulated firms, each with essentially duplicative structures. As a result, the quality of services offered to consumers will likely be inferior to those available from the integrated firm. In other words, this would be a net loss to consumers.

[This post is the first in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]

[This post is authored by Randal C. Picker, James Parker Hall Distinguished Service Professor of Law at The University of Chicago Law School]

The European Commission just announced that it is investigating Amazon. The Commission’s concern is that Amazon is simultaneously acting as a ref and player: Amazon sells goods directly as a first party but also operates a platform on which it hosts goods sold by third parties (resellers) and those goods sometimes compete. And, next step, Amazon is said to choose which markets to enter as a private-label seller at least in part by utilizing information it gleans from the third-party sales it hosts.

Assuming there is a problem …

Were Amazon’s activities thought to be a problem, the natural remedies, whether through antitrust or more direct, industry-specific regulation, might be to bar Amazon from being both a direct seller and a platform. India has already passed a statute that effectuates some of those results, though it seems targeted at non-domestic companies.

A broad regulation that barred Amazon from being simultaneously a seller of first-party inventory and of third-party inventory presumably would lead to a dissolution of the company into separate companies in each of those businesses. A different remedy—a classic that goes back at least as far in the United States as the 1887 Commerce Act—would be to impose some sort of nondiscrimination obligation on Amazon and perhaps to couple that with some sort of business-line restriction—a quarantine—that would bar Amazon from entering markets though private labels.

But is there a problem?

Private labels have been around a long time, and large retailers have faced buy-vs.-build decisions along the way. Large, sophisticated retailers like A&P in a different era and Walmart and Costco today, just to choose a few examples, are constantly rebalancing their inventory between that which they buy from third parties and that which they produce for themselves. As I discuss below, being a platform matters for the buy-vs.-build decision, but it is far from clear that being both a store and a platform simultaneously matters importantly for how we should look at these issues.

Of course, when Amazon opened for business in July 1995 it didn’t quite face these issues immediately. Amazon sold books—it billed itself as “Earth’s Biggest Bookstore”—but there is no private label possibility for books, no effort to substitute into just selling say “The Wit and Wisdom of Jeff Bezos.” You could of course build an ebooks platform—call that a Kindle—but that would be a decade or so down the road. But as Amazon expanded into more pedestrian goods, it would, like other retailers, naturally make decisions about which inventory to source internally and which to buy from third parties.

In September 1999, Amazon opened up what was being described as an online mall. Amazon called it zShops and the idea was clear: many customers came to Amazon to buy things that Amazon wasn’t offering and Amazon would bring that audience and a variety of transaction services to third parties. Third parties would in turn pay Amazon a monthly fee and a variety of transaction fees. Amazon CEO Jeff Bezos noted (as reported in The Wall Street Journal) that those prices had been set in a way to make Amazon generally “neutral” in choosing whether to enter a market through first-party inventory or through third-party inventory.

Note that a traditional retailer and the original Amazon faced the same natural question: which goods to carry in inventory? When Amazon opened its platform, it powerfully changed the question of which goods to stock. Even a Walmart Supercenter has limited physical shelf space and has to take something off of the shelves to stock a new product. By becoming a platform, Amazon largely outsourced the product-selection and shelf-space-allocation question to third parties. The new Amazon resellers would get access to Amazon’s substantial customer base — its audience — and to a variety of transactional services that Amazon would provide them.

An online retailer has some real informational advantages over physical stores, as the online retailer sees every product that customers search for. It is much harder, though not impossible, for a physical store to capture that information. But as Amazon became a platform, it would no longer just observe search queries for goods; it would also see actual sales by its resellers. And a physical store isn’t a platform in the way that Amazon is, since the physical store is constrained by limited shelf space. But the real target here is the marginal information Amazon gets from third-party sales relative to what it would see from product searches at Amazon, its own first-party sales, and clicks on the growing amount of advertising it sells on its website.

All of that might matter for running product and inventory experiments and the corresponding pace of learning what goods customers want at what price. A physical store has to remove some item from its shelves to experiment with a new item and has to buy the item to stock it, though how much of a risk it is taking there will depend on whether the retailer can return unsold goods to the inventory supplier. A platform retailer like Amazon doesn’t have to make those tradeoffs and an online mall could offer almost an infinite inventory of items. A store or product ready for every possible search.

A possible strategy

All of this suggests a possible business strategy for a platform: let third parties run inventory experiments where the platform gets to see the results. Products that don’t sell are failed experiments, and the platform doesn’t enter those markets. But when a third party sells a product in real numbers, the platform can start selling that product as first-party inventory. Amazon would then face the buy-vs.-build decision, which should make clear that the private-brands question is distinct from the question of whether Amazon can leverage third-party reseller information to those resellers’ detriment. It can certainly do just that by buying competing goods from a wholesaler and stocking them as first-party Amazon inventory.

If Amazon is playing this strategy, it seems to be playing it slowly and poorly. Amazon CEO Jeff Bezos includes a letter each year to open Amazon’s annual report to shareholders. In the 2018 letter, Bezos opened by noting that “[s]omething strange and remarkable has happened over the last 20 years.” What was that? In 1999, the relevant number was 3%; five years later, in 2004, it was 25%, then 31% in 2009, 49% in 2014, and 58% in 2018. These were the percentages of physical gross merchandise sales made by third-party sellers through Amazon. In 1999, then, 97% of Amazon’s sales were of its own first-party inventory, but the percentage of third-party sales had steadily risen over 20 years, and over the last four years of that period, third-party inventory sales exceeded Amazon’s own internal sales. As Bezos noted, Amazon’s first-party sales had grown dramatically — a 25% annual compound growth rate over that period — but in 2018, total third-party sales revenues were $160 billion while Amazon’s own first-party sales were $117 billion. Bezos had a perspective on all of that — “Third-party sellers are kicking our first party butt. Badly.” — but if you believed the original vision behind creating the Amazon platform, Amazon should be indifferent between first-party sales and third-party sales, as long as all of it happens at Amazon.

This isn’t new

Given all of that, it isn’t crystal clear to me why Amazon gets as much attention as it does. The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then began using Craftsman and Kenmore as in-house brands in 1927. Sears was acquiring inventory from third parties, obviously knew exactly which items were selling well, and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands, and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws.

As suggested above, I think it is possible to tease out the advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to first-party inventory, whether built or bought. We have entire bodies of law — copyright, patent, trademark, and more — that limit the ability of competitors to appropriate works, inventions, and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

In conclusion

There is a great deal more to say about a company as complex as Amazon, but two thoughts in closing. One story here is that Amazon has built a superior business model in combining first-party and third-party inventory sales and that is exactly the kind of business model innovation that we should applaud. Amazon has enjoyed remarkable growth but Walmart is still vastly larger than Amazon (ballpark numbers for 2018 are roughly $510 billion in net sales for Walmart vs. roughly $233 billion for Amazon – including all 3rd party sales, as well as Amazon Web Services). The second story is the remarkable growth of sales by resellers at Amazon.

If Amazon is creating private-label goods based on information it sees on its platform, nothing suggests that it is doing so particularly rapidly. And even if it is entering those markets, it still might do that were we to break up Amazon and separate the platform piece of Amazon (call it Amazon Platform) from the original first-party version of Amazon (say Amazon Classic), since traditional retailers have for a very, very long time been making buy-vs.-build decisions on their first-party inventory and using their internal information to make those decisions.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace, which I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms in order to obtain data that it then uses either to promote its own direct sales or to develop competing products under its private-label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should it have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are engaging in a calculated risk that, as business owners, they would typically be expected to manage.  The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that might raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.

Are current antitrust tools fully adequate to cope with the challenges posed by giant online “digital platforms” (such as Google, Amazon, and Facebook)?  Yes.  Should antitrust rules be expanded to address broader social concerns that transcend consumer welfare and economic efficiency, such as income inequality and allegedly excessive big business influence on the political process?  No.  For more details, see my January 23 Heritage Foundation Legal Memorandum entitled Antitrust and the Winner-Take-All Economy.  That Memo concludes:

[T]he U.S. antitrust laws as currently applied, emphasizing sound economics, are fully capable of preventing truly anticompetitive behavior by major Internet platform companies and other large firms. But using antitrust to attack companies based on non-economic, ill-defined concerns about size, fairness, or political clout is unwarranted, and would be a recipe for reduced innovation and economic stagnation. Recent arguments trotted out to use antitrust in such an expansive manner are baseless, and should be rejected by enforcers and by Congress.