
The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, would enable smoother operation of the liability regimes that currently apply to online intermediaries.

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase intermediaries’ cost of doing business, leading to smaller networks and higher barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, for example by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, in other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

President Donald Trump has repeatedly called for repeal of Section 230. But while Trump and fellow conservatives decry Big Tech companies for their alleged anti-conservative bias, including at yet more recent hearings, their issue is not actually with Section 230. It’s with the First Amendment. 

Conservatives can’t actually do anything directly about how social media platforms moderate content, because it is the First Amendment that grants those platforms a right to editorial discretion. Even FCC Commissioner Brendan Carr, who strongly opposes “Big Tech censorship,” recognizes this.

By the same token, even if one were to grant that conservatives are right about the bias of moderators at these large social media platforms, it does not follow that removal of Section 230 immunity would alter that bias. In fact, in a world without Section 230 immunity, there still would be no legal cause of action for political bias. 

The truth is that conservatives use Section 230 immunity for leverage over social media platforms. The hope is that, because social media platforms desire the protections of civil immunity for third-party content, they will follow whatever conditions the government puts on their editorial discretion. But the attempt to end-run the First Amendment’s protections is also unconstitutional.

There is no cause of action for political bias by online platforms if we repeal Section 230

Consider the counterfactual: if there were no Section 230 to immunize them from liability, under what law would platforms face a viable cause of action for political bias? Conservative critics never answer this question. Instead, they focus on the irrelevant distinction between publishers and platforms. Or they talk about how Section 230 is a giveaway to Big Tech. But none consider the actual relationship between Section 230 immunity and alleged political bias.

But let’s imagine we’ve done what President Trump has called for and repealed Section 230. Where does that leave conservatives?

Unfortunately, it leaves them without any cause of action. There is no law passed by Congress or any state legislature, no regulation promulgated by the Federal Communications Commission or the Federal Trade Commission, no common law tort action that can be asserted against online platforms to force them to carry speech they don’t wish to carry. 

The difficulties of pursuing a contract claim for political bias

The best argument for conservatives is that, without Section 230 immunity, online platforms could be more easily held to any contractual restraints in their terms of service. If a platform promises, for instance, that it will moderate speech in a politically neutral way, a user could make the case that the platform violated its terms of service if it acted with political bias in her particular case.

For the vast majority of users, it is unclear whether there are damages from having a post fact-checked or removed. But for users who share in advertising revenue, the concrete injury from a moderation decision is more obvious. PragerU, for example, has (unsuccessfully) sued Google for being put in Restricted Mode on YouTube, which reduces its reach and advertising revenue. 

Even where there is a concrete injury that gets a case into court, that doesn’t necessarily mean there is a valid contract claim. In PragerU’s case against Google, a California court dismissed contract claims because the YouTube terms of service contract was written to allow the platform to retain discretion over what is published. Specifically, the court found that there can be no implied covenant of good faith and fair dealing where “YouTube reserves the right to remove Content without prior notice” and to “discontinue any aspect of the Service at any time.”

Breach-of-contract claims for moderation practices are highly dependent on what is actually promised in the terms of service. For instance, under Facebook’s TOS the company retains the right “to remove or restrict access to content that is in violation” of its community standards. Facebook does provide a process for users to request further review, but retains the right to remove content. The community standards also give Facebook broad discretion to determine, among other things, what counts as hate speech or false news. It is exceedingly unlikely that a court would ever have a basis to find a contract violation by Facebook if the company can reasonably point to a user’s violation of its terms of service. 

For example, in Ebeid v. Facebook, the U.S. Northern District of California dismissed fraud and breach of contract claims, finding the plaintiff failed to allege what contractual provision Facebook breached, that Facebook retained discretion over what ads would be posted, and that the plaintiff suffered no damages because no money was taken to be spent on the ads. The court also dismissed an implied covenant of good faith and fair dealing claim because Facebook retained the right to “remove or disapprove any post or ad at Facebook’s sole discretion.”

While the conservative critique has been that social media platforms do too much moderation—in the form of politically biased removals, fact-checking, and demonetization—others believe platforms do far too little to restrain bad conduct by users. But as long as social media platforms retain editorial discretion in their terms of service and make no other promises that can be relied upon by their users, there is little basis for a contract claim. 

The First Amendment protects the moderation policies of social media platforms, and there is no way around this

With no reasonable cause of action for political bias under the law, conservatives instead dangle the threat of changes to Section 230 immunity that could prove costly to social media platforms, hoping to extract concessions that alter the platforms’ practices.

This is why there are no serious efforts to actually repeal Section 230, as President Trump has asked for repeatedly. Instead, several bills propose to amend Section 230, while a rulemaking by the FCC seeks to clarify its meaning. 

But none of these proposed bills would directly affect platforms’ ability to make “biased” moderation decisions. Put simply: the First Amendment protects social media platforms’ editorial discretion. They may set rules to use their platforms, just as any private person may set rules for their own property. If I kick someone off my property for saying racist things, the First Amendment (as well as regular property law) protects my right to do so. Only under extremely limited circumstances can the government change this baseline rule and survive constitutional scrutiny.

Social media platforms’ right to editorial discretion is the same as that enjoyed by newspapers. In Miami Herald Publishing Co. v. Tornillo, the Supreme Court found:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Social media platforms, just like any other property owner, have the right to determine what they want displayed on their property. In other words, Facebook, Google, and Twitter have the right to moderate content on news feeds, search results, and timelines. The attempted constitutional end-run—threatening to remove immunity for third-party content unrelated to political bias, like defamation and other tortious acts, unless social media platforms give up their right to editorial discretion over political speech—is just as unconstitutional as directly imposing “fairness” requirements on social media platforms.

The Supreme Court has held that Congress may not leverage a government benefit to regulate a speech interest outside of the benefit’s scope. This is called the unconstitutional conditions doctrine. In essence, it limits how far the government can regulate protected conduct by attaching conditions to the benefits it provides. The government can’t condition a government benefit on giving up editorial discretion over political speech.

The point of Section 230 immunity is to remedy the moderator’s dilemma created by Stratton Oakmont v. Prodigy, which held that if a platform chose to moderate third-party speech at all, it would be liable for what it failed to remove. Section 230 is not about compelling political neutrality on platforms, because such a mandate would be inconsistent with the First Amendment. Civil immunity for third-party speech online is an important benefit for social media platforms because it ensures they are not liable for the acts of third parties, with limited exceptions. Without it, platforms would restrict opportunities for third parties to post out of fear of liability.

In sum, the government may not condition enjoyment of a government benefit upon giving up a constitutionally protected right. Section 230 immunity is a clear government benefit. The right to editorial discretion is clearly protected by the First Amendment. Because the entire point of conservative Section 230 reform efforts is to compel social media platforms to carry speech they otherwise desire to remove, it fails this basic test.


Fundamentally, the conservative push to reform Section 230 in response to the alleged anti-conservative bias of major social media platforms is not about policy. Really, it’s about waging a culture war against the perceived “liberal elites” from Silicon Valley, just as there is an ongoing culture war against perceived “liberal elites” in the mainstream media, Hollywood, and academia. But fighting this culture war is not worth giving up conservative principles of free speech, limited government, and free markets.

In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee, and later questioning by Senator Josh Hawley, focused on an episode of alleged anti-conservative bias: Google threatened to demonetize The Federalist, a conservative publisher, unless it exercised greater control over its comments section. The senators connected this to Google’s “dominance,” arguing that only because Google’s ad services are essential can Google dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, either through deplatforming or demonetization.

Before even getting into the analysis of how to incorporate political bias into antitrust analysis, though, it should be noted that there likely is no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then follow that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree. 

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, political bias is even more complex than privacy. All but the most exhibitionistic would prefer more privacy to less, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences must come at the expense of another’s in moderation decisions.

Consider the case of The Federalist again. The allegation is that Google is imposing its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform to users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown, or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary. Thus, they will take great pains to make sure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users could switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they will have a strong incentive to limit political bias in their moderation practices. The ease of switching to a platform that markets itself as more free-speech-friendly, like Parler, shows entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias.

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as with data suggesting that conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

Recently published emails from 2012 between Mark Zuckerberg and Facebook’s then-chief financial officer David Ebersman, in which Zuckerberg lays out his rationale for buying Instagram, have prompted many to speculate that the deal might not have been cleared had antitrust agencies had access to Facebook’s internal documents at the time.

The issue is Zuckerberg’s description of Instagram as a nascent competitor and potential threat to Facebook:

These businesses are nascent but the networks established, the brands are already meaningful, and if they grow to a large scale they could be very disruptive to us. Given that we think our own valuation is fairly aggressive and that we’re vulnerable in mobile, I’m curious if we should consider going after one or two of them. 

Ebersman objected that a new rival would simply enter the market if Facebook bought Instagram. In response, Zuckerberg wrote:

There are network effects around social products and a finite number of different social mechanics to invent. Once someone wins at a specific mechanic, it’s difficult for others to supplant them without doing something different.

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and it is possible that, at the time, they would have caused antitrust agencies to scrutinize the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, nor anything about the counterfactual in which the merger was blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today.

Moreover, this framing fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially reach that scale organically.

The rationale

Writing for Pro Market, Randy Picker argued that these emails hint that the acquisition was essentially about taking out a nascent competitor:

Buying Instagram really was about controlling the window in which the Instagram social mechanic invention posed a risk to Facebook … Facebook well understood the competitive risk posed by Instagram and how purchasing it would control that risk.

This is a plausible interpretation of the internal emails, although there are others. For instance, Zuckerberg also seems to say that the purpose is to use Instagram to improve Facebook to make it good enough to fend off other entrants:

If we incorporate the social mechanics they were using, those new products won’t get much traction since we’ll already have their mechanics deployed at scale. 

If this was the rationale, rather than simply trying to kill a nascent competitor, it would be pro-competitive. It is good for consumers if a product makes itself better to beat its rivals by acquiring undervalued assets to deploy them at greater scale and with superior managerial efficiency, even if the acquirer hopes that in doing so it will prevent rivals from ever gaining significant market share. 

Further, despite popular characterization, on its face the acquisition was not about trying to destroy a consumer option, but only to ensure that Facebook was competitively viable in providing that option. Another reasonable interpretation of the emails is that Facebook was wrestling with the age-old make-or-buy dilemma faced by every firm at some point or another. 

Was the merger anticompetitive?

But let us assume that eliminating competition from Instagram was indeed the merger’s sole rationale. Would that necessarily make it anticompetitive?  

Chief among the objections is that both Facebook and Instagram are networked goods. Their value to each user depends, to a significant extent, on the number (and quality) of other people using the same platform. Many scholars have argued that this can create self-reinforcing dynamics where the strong grow stronger – though such an outcome is certainly not a given, since other factors about the service matter too, and networks can suffer from diseconomies of scale as well, where new users reduce the quality of the network.

This network effects point is central to the reasoning of those who oppose the merger: Facebook purportedly acquired Instagram because Instagram’s network had grown large enough to be a threat. With Instagram out of the picture, Facebook could thus take on the remaining smaller rivals with the advantage of its own much larger installed base of users. 

However, this network-tipping argument could cut both ways. It is plausible that the proper counterfactual was not duopoly competition between Facebook and Instagram, but a world in which either Facebook or Instagram would eventually have offered both firms’ features, only later. In other words, a possible framing of the merger is that it merely accelerated the cross-pollination of social mechanics between Facebook and Instagram, something that would likely prove beneficial to consumers.

This finds some support in Mark Zuckerberg’s reply to David Ebersman:

Buying them would give us the people and time to integrate their innovations into our core products.

The exchange between Zuckerberg and Ebersman also suggests another pro-competitive justification: bringing Instagram’s “social mechanics” to Facebook’s much larger network of users. We can only speculate about what ‘social mechanics’ Zuckerberg actually had in mind, but at the time Facebook’s photo sharing functionality was largely based around albums of unedited photos, whereas Instagram’s core product was a stream of filtered, cropped single images. 

Zuckerberg’s plan to gradually bring these features to Facebook’s users – as opposed to them having to familiarize themselves with an entirely different platform – would likely cut in favor of the deal being cleared by enforcers.

Another possibility is that it was Instagram’s network of creators – the people who had begun to use Instagram as a new medium, distinct from the generic photo albums Facebook had, and who would eventually grow to be known as ‘influencers’ – who were the valuable thing. Bringing them onto the Facebook platform would undoubtedly increase its value to regular users. For example, Kim Kardashian, one of Instagram’s most popular users, joined the service in February 2012, two months before the deal went through, and she was not the first such person to adopt Instagram in this way. We can see the importance of a service’s most creative users today, as Facebook is actually trying to pay TikTok creators to move to its TikTok clone Reels.

But if this was indeed the rationale, not only is this a sign of a company in the midst of fierce competition – rather than one on the cusp of acquiring a monopoly position – but, more fundamentally, it suggests that Facebook was always going to come out on top. Or at least it thought so.

The benefit of hindsight

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

For instance, critics argue that Instagram no longer competes with Facebook because of the merger. However, it is equally plausible that Instagram only became so successful because of its combination with Facebook (notably thanks to the addition of Facebook’s advertising platform, and the rapid rollout of a stories feature in response to Snapchat’s rise). Indeed, Instagram grew from roughly 24 million users at the time of the acquisition to over 1 billion users in 2018. And it earned zero revenue at the time of the merger – which might explain why the acquisition was widely derided at the time.

This is critical from an antitrust perspective. Antitrust enforcers adjudicate merger proceedings in the face of extreme uncertainty. Every possible outcome, including the counterfactual, has some probability of being true, and enforcers and courts have to make educated guesses about those probabilities – assigning likelihoods to potential anticompetitive harms, merger efficiencies, and so on.

Authorities at the time of the merger could not ignore these uncertainties. What was the likelihood that a company with a fraction of Facebook’s users (24 million to Facebook’s 1 billion), and worth $1 billion, could grow to threaten Facebook’s market position? At the time, the answer seemed to be “very unlikely”. Moreover, how could authorities know that Google+ (Facebook’s strongest competitor at the time) would fail? These outcomes were not just hard to ascertain, they were simply unknowable.

Of course, this is precisely what neo-Brandeisian antitrust scholars object to today: among the many seemingly innocuous big tech acquisitions that are permitted each year, there is bound to be at least one acquired firm that might have been a future disruptor. True as this may be, identifying that one successful company among all the others is the antitrust equivalent of finding a needle in a haystack. Instagram simply did not fit that description at the time of the merger. Such a stance also ignores the very real benefits that may arise from such acquisitions.

Closing remarks

While it is tempting to reassess the Facebook/Instagram merger in light of new revelations, such an undertaking is not without pitfalls. Hindsight bias is perhaps the most obvious, but the difficulties run deeper.

If we think that the Facebook/Instagram merger has been and will continue to be good for consumers, it would be strange to think that we should nevertheless break them up because we discovered that Zuckerberg had intended to do things that would harm consumers. Conversely, if you think a breakup would be good for consumers today, would it change your mind if you discovered that Mark Zuckerberg had the intentions of an angel when he went ahead with the merger in 2012, or that he had angelic intent today?

Ultimately, merger review involves making predictions about the future. While it may be reasonable to take the intentions of the merging parties into consideration when making those predictions (although it’s not obvious that we should), intentions are neither the only nor the best guide to what the future will hold. As Ebersman himself points out in the emails, history is filled with over-optimistic mergers that failed to deliver benefits to the merging parties. That this one succeeded beyond the wildest dreams of everyone involved – except maybe Mark Zuckerberg – does not tell us that competition agencies should have ruled on it differently.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

A boy throws a brick through a bakeshop window. He flees and is never identified. The townspeople gather around the broken glass. “Well,” one of them says to the furious baker, “at least this will generate some business for the windowmaker!”

A reasonable statement? Not really. Although it is indeed a good day for the windowmaker, the money for the new window comes from the baker. Perhaps the baker was planning to use that money to buy a new suit. Now, instead of owning a window and a suit, he owns only a window. The windowmaker’s gain, meanwhile, is simply the tailor’s loss.

This parable of the broken window was conceived by Frédéric Bastiat, a nineteenth-century French economist. He wanted to alert the reader to the importance of opportunity costs—in his words, “that which is not seen.” Time and money spent on one activity cannot be spent on another.

Today Bastiat might tell the parable of the harassed technology company. A tech firm creates a revolutionary new product or service and grows very large. Rivals, lawyers, activists, and politicians call for an antitrust probe. Eventually they get their way. Millions of documents are produced, dozens of depositions are taken, and several hearings are held. In the end no concrete action is taken. “Well,” the critics say, “at least other companies could grow while the firm was sidetracked by the investigation!”

Consider the antitrust case against Microsoft twenty years ago. The case ultimately settled, and Microsoft agreed merely to modify minor aspects of how it sold its products. “It’s worth wondering,” writes Brian McCullough, a generally astute historian of the internet, “how much the flowering of the dot-com era was enabled by the fact that the most dominant, rapacious player in the industry was distracted while the new era was taking shape.” “It’s easy to see,” McCullough says, “that the antitrust trial hobbled Microsoft strategically, and maybe even creatively.”

Should we really be glad that an antitrust dispute “distracted” and “hobbled” Microsoft? What would a focused and unfettered Microsoft have achieved? Maybe nothing; incumbents often grow complacent. Then again, Microsoft might have developed a great search engine or social-media platform. Or it might have invented something that, thanks to the lawsuit, remains absent to this day. What Microsoft would have created in the early 2000s, had it not had to fight the government, is that which is not seen.

But doesn’t obstructing the most successful companies create “room” for new competitors? David Cicilline, the chairman of the House’s antitrust subcommittee, argues that “just pursuing the [Microsoft] enforcement action itself” made “space for an enormous amount of additional innovation and competition.” He contends that the large tech firms seek to buy promising startups before they become full-grown threats, and that such purchases must be blocked.

It’s easy stuff to say. It’s not at all clear that it’s true or that it makes sense. Hindsight bias is rampant. In 2012, for example, Facebook bought Instagram for $1 billion, a purchase that is now cited as a quintessential “killer acquisition.” At the time of the sale, however, Instagram had 27 million users and $0 in revenue. Today it has around a billion users, it is estimated to generate $7 billion in revenue each quarter, and it is worth perhaps $100 billion. It is presumptuous to declare that Instagram, which had only 13 employees in 2012, could have achieved this success on its own.

If distraction is an end in itself, last week’s Big Tech hearing before Cicilline and his subcommittee was a smashing success. Presumably Jeff Bezos, Tim Cook, Sundar Pichai, and Mark Zuckerberg would like to spend the balance of their time developing the next big innovations and staying ahead of smart, capable, ruthless competitors, starting with each other and including foreign firms such as ByteDance and Huawei. Last week they had to put their aspirations aside to prepare for and attend five hours of political theater.

The most common form of exchange at the hearing ran as follows. A representative asks a slanted question. The witness begins to articulate a response. The representative cuts the witness off. The representative gives a prepared speech about how the witness’s answer proved her point.

Lucy Kay McBath, a first-term congresswoman from Georgia, began one such drill with the claim that Facebook’s privacy policy from 2004, when Zuckerberg was 20 and Facebook had under a million users, applies in perpetuity. “We do not and will not use cookies to collect private information from any users,” it said. Has Facebook broken its “promise,” McBath asked, not to use cookies to collect private information? No, Zuckerberg explained (letting the question’s shaky premise slide), Facebook uses only standard log-in cookies.

“So once again, you do not use cookies? Yes or no?” McBath interjected. Having now asked a completely different question, and gotten a response resembling what she wanted—“Yes, we use cookies [on log-in features]”—McBath could launch into her canned condemnation. “The bottom line here,” she said, reading from her page, “is that you broke a commitment to your users. And who can say whether you may or may not do that again in the future?” The representative pressed on with her performance, not noticing or not caring that the person she was pretending to engage with had upset her script.

Many of the antitrust subcommittee’s queries had nothing to do with antitrust. One representative fixated on Amazon’s ties with the Southern Poverty Law Center. Another seemed to want Facebook to interrogate job applicants about their political beliefs. A third asked Zuckerberg to answer for the conduct of Twitter. One representative demanded that social-media posts about unproven Covid-19 treatments be left up, another that they be taken down. Most of the questions that were at least vaguely on topic, meanwhile, were exceedingly weak. The representatives often mistook emails showing that tech CEOs play to win, that they seek to outcompete challengers and rivals, for evidence of anticompetitive harm to consumers. And the panel was often treated like a customer-service hotline. This app developer ran into a difficulty; what say you, Mr. Cook? That third-party seller has a gripe; why won’t you listen to her, Mr. Bezos?

In his opening remarks, Bezos cited a survey that ranked Amazon one of the country’s most trusted institutions. No surprise there. In many places one could have ordered a grocery delivery from Amazon as the hearing started and had the goods put away before it ended. Was Bezos taking a muted dig at Congress? He had every right to—it is one of America’s least trusted institutions. Pichai, for his part, noted that many users would be willing to pay thousands of dollars a year for Google’s free products. Is Congress providing people that kind of value?

The advance of technology will never be an unalloyed blessing. There are legitimate concerns, for instance, about how social-media platforms affect public discourse. “Human beings evolved to gossip, preen, manipulate, and ostracize,” psychologist Jonathan Haidt and technologist Tobias Rose-Stockwell observe. Social media exploits these tendencies, they contend, by rewarding those who trade in the glib put-down, the smug pronouncement, the theatrical smear. Speakers become “cruel and shallow”; “nuance and truth” become “casualties in [a] competition to gain the approval of [an] audience.”

Three things are true at once. First, Haidt and Rose-Stockwell have a point. Second, their point goes only so far. Social media does not force people to behave badly. Assuming otherwise lets individual humans off too easy. Indeed, it deprives them of agency. If you think it is within your power to display grace, love, and transcendence, you owe it to others to think it is within their power as well.

Third, if you really want to see adults act like children, watch a high-profile congressional hearing. A hearing for Attorney General William Barr, held the day before the Big Tech hearing and attended by many of the same representatives, was a classic of the format.

The tech hearing was not as shambolic as the Barr hearing. And the representatives act like sanctimonious halfwits in part to concoct the sick burns that attract clicks on the very platforms built, facilitated, and delivered by the tech companies. For these and other obvious reasons, no one should feel sorry for the four men who spent a Wednesday afternoon serving as props for demagogues. But that doesn’t mean the charade was a productive use of time. There is always that which is not seen.

Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry. 

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and explicitly excluded from being “killers” by the paper, for example: it is not harmful to innovation or output overall if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers.” While that may seem low, some might say it’s still 5.3% too many. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Moreover, given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” even in pharma, with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneously condemning procompetitive tech mergers is significantly higher. 

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either. 

In all these high-profile cases the acquiring companies expanded the acquired services and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument from the scenarios described in the Cunningham, et al. paper, where development of a new drug is shut down by the acquirer, ostensibly to protect its existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow from the (presumed) existence of false negatives that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. By analogy, a well-run court system might still fail to convict a few criminals, because the cost of accidentally convicting an innocent person is so high.

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it did suggest that the review process could have been done differently, it also highlighted efficiencies that arose from each merger, and it did not conclude that any had led to consumer detriment.

The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control. 

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well. 

This could provide a basis for blocking almost any acquisition by an incumbent firm on ‘scale’ grounds. After all, if a photo editing app with a sharing timeline can grow into the world’s second largest social network, how could a competition authority say with any confidence that some other acquisition might not prevent the emergence of a new platform on a similar scale, however unlikely? Such an approach would make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started up in the first place).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or less. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions — and in many cases that is the very most one can say of them — of harm and remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e., arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur. This made its nectar incredibly hard for insects to reach. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to them being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms tightly control who is allowed on their platform and how they can interact with users. Apple notably vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs). 

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open,” yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc. 

Finally, Google Search and Android sit in the bottom-left quadrant. Both of these services are weakly propertized. The Android source code is shared freely via an open-source license, and Google’s apps can be preloaded by OEMs free of charge. Google only partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open”: while any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc.). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement. 

Readers might ask: what is the point of this classification? The answer is that in each of the above cases, competition intervention attempted (or is attempting) to move firms and platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought (or seek) to bring more openness and less propertization to the respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the EU’s Google cases sought to bring more openness to the company’s main platforms. The Google Shopping decision sanctioned Google for purportedly placing its own services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions and investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – in both the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. The theme repeats in the standardization space. There have been innumerable attempts to impose open, royalty-free standards, yet, at least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). The pattern recurs in other highly standardized industries, like digital video formats: most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format. 

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms so far failed to achieve truly meaningful success at consumers’ end of the market? 

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically (and perhaps anticompetitively) thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into detail on the merits of the first conjecture; current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked, especially by antitrust authorities. And yet, if true, both would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them. 

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, a model that tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms’ ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the Commission reprimanded. Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default. But this has not deterred consumers. A sizable share of them opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s macOS). 

Finally, it is worth noting that the remedies imposed by competition authorities have been anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop: it sold a paltry 1,787 copies. Likewise, the browser choice screen imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision. 

There are many reasons why consumers might prefer “closed” systems, even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store enables platforms to weed out bad actors with relative ease. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate the negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app rather than to the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple and an Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to build a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still poorly understood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase put it best when he quipped that economists always find a monopoly explanation for things they fail to understand. The digital economy might just be the latest chapter in this unfortunate trend.

Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech. 

In response to Twitter’s decision, along with an Executive Order released by the President that attacked Section 230, Senator Josh Hawley (R-MO) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. The bill would require online platforms to engage in “good faith” moderation according to clearly stated terms of service, in effect restricting Section 230’s protections to online platforms deemed to have done enough to moderate content ‘fairly’.

While seemingly a sensible standard, this approach, if enacted, would violate the First Amendment as an unconstitutional condition on a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online. 

There is established legal precedent that Congress may not grant benefits on conditions that violate constitutionally protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated. 

Aware of this precedent, Senator Hawley attempts to circumvent the obstacle by taking away Section 230 immunity for issues unrelated to anti-conservative bias in moderation. Specifically, his bill conditions platforms’ immunity on their having terms of service for content moderation, and subjects them to lawsuits if they do not act in “good faith” in policing those terms. 

It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard appears to apply only to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in its terms of service that it believes some forms of conservative speech are “hate speech” it will not allow.

Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone would not be unconstitutional. 

But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.

These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open them up to being sued for any moderation, including moderation completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook, for example for engaging in online bullying or for spreading false information, the same situation could arise. 

This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.

Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation on a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.

It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about. 

This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine”, which required that radio stations provide a “balanced discussion” and, in practice, allowed courts or federal agencies to determine content until it was repealed under President Reagan. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News for its “Fair and Balanced” slogan, stating:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

And recently, conservatives argued that businesses like Masterpiece Cakeshop should not be compelled to speak against their will. All of these cases demonstrate that once the state starts to stipulate which views can and cannot be broadcast by private organisations, conservatives will be the ones who suffer.

Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution, and would trample the rights to freedom of speech that it protects. Conservatives should reject it.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

Earlier this week, merger talks between Uber and food delivery service Grubhub surfaced. House Antitrust Subcommittee Chairman David N. Cicilline quickly reacted to the news:

Americans are struggling to put food on the table, and locally owned businesses are doing everything possible to keep serving people in our communities, even under great duress. Uber is a notoriously predatory company that has long denied its drivers a living wage. Its attempt to acquire Grubhub—which has a history of exploiting local restaurants through deceptive tactics and extortionate fees—marks a new low in pandemic profiteering. We cannot allow these corporations to monopolize food delivery, especially amid a crisis that is rendering American families and local restaurants more dependent than ever on these very services. This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support.

“Pandemic profiteering” rolls nicely off the tongue, and we’re sure to see that phrase much more over the next year or so. 

Grubhub shares jumped 29% on Tuesday, the day the merger talks came to light, as shown in the figure below. The Wall Street Journal reports the companies are considering a deal that would value Grubhub stock at around 1.9 Uber shares, or $60-65 a share, based on Thursday’s price.

But is that “pandemic profiteering?”

After Amazon announced its intended acquisition of Whole Foods, the grocer’s stock price soared by 27%. Rep. Cicilline voiced some convoluted concerns about that merger, but said nothing about profiteering at the time. Different times, different messaging.

Rep. Cicilline and others have been calling for a merger moratorium during the pandemic, and Cicilline used the Uber/Grubhub announcement as Exhibit A in his indictment of merger activity.

A moratorium would make things much easier for regulators: no more fighting over relevant markets, no HHI calculations, no experts debating SSNIPs or GUPPIs, no worries over consumer welfare, no failing-firm defenses. Just a clear, bright-line “NO!”

Even before the pandemic, it was well known that the food delivery industry was due for a shakeout. NPR reports that, even as the business is growing, none of the top food-delivery apps are turning a profit, with one analyst concluding consolidation was “inevitable.” Thus, even if a moratorium slowed or stopped the Uber/Grubhub merger, at some point a merger in the industry will happen and U.S. antitrust authorities will have to evaluate it.

First, we have to ask, “What’s the relevant market?” The government has a history of defining relevant markets so narrowly that just about any merger can be challenged. For example, for the scuttled Whole Foods/Wild Oats merger, the FTC famously narrowed the market to “premium natural and organic supermarkets.” Surely, similar mental gymnastics will be used for any merger involving food delivery services.

While food delivery has grown in popularity over the past few years, delivery represents less than 10% of U.S. food service sales. While Rep. Cicilline may be correct that families and local restaurants are “more dependent than ever” on food delivery, delivery is only a small fraction of a large market. Even a monopoly over food delivery services would not confer market power in the broader restaurant and food service industry.

No reasonable person would claim an Uber/Grubhub merger would increase market power in the restaurant and food service industry. But it might confer market power in the food delivery market. Much attention is paid to the “Big Four”: DoorDash, Grubhub, Uber Eats, and Postmates. But these platform delivery services are part of the larger food service delivery market, of which platforms account for about half of industry revenues. Pizza accounts for the largest share of restaurant-to-consumer delivery.

This raises the big question of what is the relevant market: Is it the entire food delivery sector, or just the platform-to-consumer sector? 

Based on the information in the figure below, defining the market narrowly would place an Uber/Grubhub merger squarely in the “presumed to be likely to enhance market power” category.

  • 2016 HHI: ≈3,175
  • 2018 HHI: ≈1,474
  • 2020 HHI: ≈2,249 pre-merger; ≈4,153 post-merger

Alternatively, defining the market to encompass all food delivery would cut the platforms’ shares roughly in half and the merger would be unlikely to harm competition, based on HHI. Choosing the relevant market is, well, relevant.
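The arithmetic behind this point is straightforward: the HHI is the sum of the squared percentage market shares of every firm in the relevant market, and a merger’s effect can be previewed by adding the merging parties’ shares before squaring. The sketch below uses purely hypothetical shares (for illustration only, not the figures reported in this post) to show why the choice of market definition drives the result:

```python
# HHI (Herfindahl-Hirschman Index): the sum of squared percentage
# market shares. All shares below are hypothetical, for illustration.

def hhi(shares):
    """Compute the HHI from a list of percentage market shares."""
    return sum(s ** 2 for s in shares)

# Hypothetical narrow (platform-to-consumer) market shares, in percent.
pre_merger = [42, 28, 20, 10]    # e.g., DoorDash, Grubhub, Uber Eats, Postmates
post_merger = [42, 28 + 20, 10]  # Grubhub and Uber Eats combined

print(hhi(pre_merger))           # 3048: above the 2,500 "highly concentrated" line
print(hhi(post_merger))          # 4168

# The merger's HHI increase equals 2 * s1 * s2 for the merging parties'
# shares, here 2 * 28 * 20 = 1120 -- far above the 200-point threshold.
print(hhi(post_merger) - hhi(pre_merger))  # 1120

# Broadening the market roughly halves each platform's share, which cuts
# each squared term (and the platforms' HHI contribution) by a factor of 4.
broad = [s / 2 for s in pre_merger]
print(hhi(broad))                # 762.0, before adding non-platform firms
```

Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500 with an increase above 200 points is presumed likely to enhance market power, while the halved shares in the broad market land well below that zone. That quartering effect is why the same transaction can look presumptively unlawful in the narrow market and benign in the broad one.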

The Second Measure data suggests that concentration in the platform delivery sector decreased with the entry of Uber Eats, but subsequently increased with DoorDash’s rising share, which included the acquisition of Caviar from Square.

(NB: There seems to be a significant mismatch in the delivery revenue data. Statista reports platform delivery revenues increased by about 40% from 2018 to 2020, but Second Measure indicates revenues have more than doubled.) 

Geoffrey Manne, in an earlier post, points out that “while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.” That may be the case here.

The figure below is a sample of platform delivery shares by city, to which I have added data from an earlier study of 2017 shares. In all but two metro areas, Uber’s and Grubhub’s combined market share declined from 2017 to 2020. In Boston, the combined share did not change, and in Los Angeles the combined share increased by 1%.

(NB: There are some serious problems with this data, notably that it leaves out the restaurant-to-consumer sector and assumes the entire platform-to-consumer sector is comprised of only the “Big Four.”)

Platform-to-consumer delivery is a complex two-sided market in which the platforms link, and compete for, restaurants, drivers, and consumers. Restaurants have a choice of using multiple platforms or entering into exclusive arrangements; many drivers work for multiple platforms; and many consumers use multiple platforms. 

Fundamentally, the rise of platform-to-consumer delivery is an evolution in vertical integration. Restaurants can choose to offer no delivery, use their own in-house delivery drivers, or use a third-party delivery service. Every platform faces competition from in-house delivery, placing a limit on its ability to raise prices to restaurants and consumers.

The choice of delivery is not an either-or decision. For example, many pizza restaurants that have their own delivery drivers also use a platform delivery service. Their own drivers may serve a limited geographic area, but the platforms allow the restaurant to expand its geographic reach, thereby increasing its sales. Even so, the platforms face competition from in-house delivery.

Mergers or other forms of shakeout in the food delivery industry are inevitable. Mergers will raise important questions about relevant product and geographic markets, as well as about competition in two-sided markets. While there is a real risk of harm to restaurants, drivers, and consumers, there is also a real possibility of welfare-enhancing efficiencies. These questions will never be addressed with an across-the-board merger moratorium.

In the wake of the launch of Facebook’s content oversight board, Republican Senator Josh Hawley and FCC Commissioner Brendan Carr, among others, have taken to Twitter to levy criticisms at the firm and, in the process, demonstrate just how far the Right has strayed from its first principles on free speech and private property. Commissioner Carr’s thread makes the case that the members of the board are highly partisan, mostly left-wing, and cannot be trusted with the responsibility of oversight. Senator Hawley, for his part, took the position that the Board’s very existence is just further evidence of the need to break Facebook up. 

Both Hawley and Carr have been lauded in right-wing circles, but in reality their positions contradict conservative understandings of the free speech and private property protections afforded by the First Amendment.  

This blog post serves as a sequel to a post I wrote last year here at TOTM explaining that “There’s nothing ‘conservative’ about Trump’s views on free speech and the regulation of social media.” As I wrote there:

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide over this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Commissioner Carr’s complaint and Senator Hawley’s antitrust approach of breaking up Facebook has much more in common with the views traditionally held by left-wing Democrats on the need for the government to regulate private actors in order to promote speech interests. Originalists and law & economics scholars, on the other hand, have consistently taken the opposite point of view that the First Amendment protects against government infringement of speech interests, including protecting the right to editorial discretion. While there is clearly a conflict of visions in First Amendment jurisprudence, the conservative (and, in my view, correct) point of view should not be jettisoned by Republicans to achieve short-term political gains.

The First Amendment restricts government action, not private action

The First Amendment, by its very text, only applies to government action: “Congress shall make no law . . . abridging the freedom of speech.” This applies to the “State[s]” through the Fourteenth Amendment. There is extreme difficulty in finding any textual hook to say the First Amendment protects against private action, like that of Facebook. 

Originalists have consistently agreed. Most recently, in Manhattan Community Access Corp. v. Halleck, Justice Kavanaugh—on behalf of the conservative bloc and the Court—wrote:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

This was true at the adoption of the First Amendment and remains true today in a high-tech world. Federal district courts have consistently dismissed First Amendment lawsuits against Facebook on the grounds there is no state action. 

For instance, in Nyabwa v. Facebook, the plaintiff initiated a civil rights lawsuit against Facebook for restricting his use of the platform. The U.S. District Court for the Southern District of Texas dismissed the case, noting:

Because the First Amendment governs only governmental restrictions on speech, Nyabwa has not stated a cause of action against FaceBook… Like his free speech claims, Nyabwa’s claims for violation of his right of association and violation of his due process rights are claims that may be vindicated against governmental actors pursuant to § 1983, but not a private entity such as FaceBook.

Similarly, in Young v. Facebook, the U.S. District Court for the Northern District of California rejected a claim that Facebook violated the First Amendment by deactivating the plaintiff’s Facebook page. The court declined to subject Facebook to the First Amendment analysis, stating that “because Young has not alleged any action under color of state law, she fails to state a claim under § 1983.”

The First Amendment restricts antitrust actions against Facebook, not Facebook’s editorial discretion over its platform

Far from restricting Facebook, the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff were to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would run headlong into the First Amendment’s protections.

There is no basis for concluding online platforms do not have editorial discretion under the law. In fact, the position of Facebook here is very similar to the newspaper in Miami Herald Publishing Co. v. Tornillo, in which the Supreme Court considered a state law giving candidates for public office a right to reply in newspapers to editorials written about them. The Florida Supreme Court upheld the statute, finding it furthered the “broad societal interest in the free flow of information to the public.” The U.S. Supreme Court, despite noting the level of concentration in the newspaper industry, nonetheless reversed. The Court explicitly found the newspaper had a First Amendment right to editorial discretion:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Online platforms have the same First Amendment protections for editorial discretion. For instance, in both Search King v. Google and Langdon v. Google, two different federal district courts ruled that Google’s search results are subject to First Amendment protection, both citing Tornillo.

In Zhang, another district court went so far as to grant a Chinese search engine the right to editorial discretion in limiting access to democracy movements in China. The court found that the search engine “inevitably make[s] editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information.” Much like the search engine in Zhang, Facebook is clearly making editorial judgments about what information shows up in the newsfeed and where to display it. 

None of this changes because the generally applicable law is antitrust rather than some other form of regulation. For instance, in Tornillo, the Supreme Court took pains to distinguish the case from an earlier antitrust case against newspapers, Associated Press v. United States, which found that there was no broad exemption from antitrust under the First Amendment.

The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their `reason’ tells them should not be published.”

In other words, Tornillo and Associated Press establish that the government may not compel speech through regulation, including through an antitrust remedy. 

Once it is conceded that there is a speech interest here, the government must justify the use of antitrust law to compel Facebook to display the speech of users in the newsfeeds of others under the First Amendment’s strict scrutiny test. In other words, the use of antitrust law must be narrowly tailored to a compelling government interest. Even taking for granted that there may be a compelling government interest in facilitating a free and open platform (which is by no means certain), it is clear that this would not be a narrowly tailored action. 

First, “breaking up” Facebook is clearly overbroad relative to the goal of promoting free speech on the platform. There is no need to break the company up simply because it has an Oversight Board that exercises editorial responsibilities. There are many less restrictive means, including market competition, which has greatly expanded consumer choice for communications and connections. Second, antitrust does not even have a remedy for the free speech issues complained of here, as it would require courts to engage in long-term oversight and in the compelled speech foreclosed by Associated Press. 

Note that this makes good sense from a law & economics perspective. Platforms like Facebook should be free to regulate the speech on their platforms as they see fit, and consumers are free to decide which platforms they wish to use on that basis. While there are certainly network effects in social media, the plethora of options currently available, combined with low switching costs, suggests there is no basis for antitrust action against Facebook on the grounds that consumers are unable to speak. In other words, the least-restrictive-means test of the First Amendment is best fulfilled by market competition in this case.

If there were a basis for antitrust intervention against Facebook, either through merger review or as a standalone monopoly claim, the underlying issue would be harm to competition. While this would have implications for speech concerns (which may be incorporated into an analysis through quality-adjusted price), it is inconceivable how an antitrust remedy could be formed on speech issues consistent with the First Amendment. 


Despite now well-worn complaints by so-called conservatives in and out of government about the baneful influence of Facebook and other Big Tech companies, the First Amendment forecloses government actions that would violate the editorial discretion of these companies. Even if Commissioner Carr is right about the board’s membership, this latest call by Senator Hawley for antitrust enforcement against Facebook should be rejected for principled conservative reasons.

The Wall Street Journal reports that Amazon employees have been using data from individual sellers to identify products to compete with through its own ‘private label’ (or own-brand) products, such as AmazonBasics, Presto!, and Pinzon.

It’s implausible that this is an antitrust problem, as some have suggested. It’s extremely common for retailers to sell their own private-label products and to use data on how other products in their stores have sold to inform development and marketing. Private-label products account for about 14–17% of overall US retail sales, for an estimated 19% of Walmart’s and Kroger’s sales, and for 29% of Costco’s sales of consumer packaged goods. 

Amazon, meanwhile, accounts for 39% of US e-commerce spending, and about 6% of all US retail spending. Any antitrust-based argument against Amazon doing this should apply equally to Walmart, Kroger, and Costco. In other words, the case against Amazon proves too much. Alec Stapp has a good discussion of these and related facts here.

However, it is interesting to think about the underlying incentives facing Amazon here, and in particular why Amazon’s company policy is not to use individual seller data to develop products (rogue employees violating that policy notwithstanding). One possibility is that the rule is a way for Amazon to balance competition with some third parties against protections for others that it sees as valuable to its platform overall.

Amazon does use aggregated seller data to develop and market its products: if two or more merchants are selling a product, Amazon’s employees can see how popular it is. This might seem like a trivial distinction, but it may exist for good reason. It could be that sellers of unique products actually do have the bargaining power to demand that Amazon not use their data to compete with them, or the rule could exist for public relations reasons, although it’s not clear how successful that has been.

But another possibility is that it is a self-imposed restraint. Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because consumers are more likely to use Amazon if they expect to find a reliable product from a brand they trust. According to the Journal, private label products account for less than 1% of Amazon’s product sales, in contrast to the 19% of revenue ($54 billion) Amazon makes from third-party seller services, which includes Marketplace commissions. Any analysis that ignores that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient.

With “commodity” products (like, say, batteries and USB cables), where multiple sellers are offering very similar or identical versions of the same thing, private label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this doesn’t give it a significant advantage, since that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.

But to the extent that Amazon competes against innovative third-party sellers (typically manufacturers doing direct sales, as opposed to pure retailers simply re-selling others’ products), there is a possibility that the prospect of having to compete with Amazon may diminish their incentive to develop new products and sell them on Amazon’s platform. 

This is the strongest argument that is made against private label offerings in general. When they involve some level of copying an innovative product, where the innovator has been collecting above-normal profits and those profits are what spur the innovation in the first place, a private label product that comes along and copies the product effectively free rides on the innovation and captures some of its return. That may get us less innovation than society—or a platform trying to host as many innovative products as possible—would like.

While the Journal conflates these two kinds of products, Amazon’s own policies may be tailored specifically to take account of the distinction, and maximise the total value of its marketplace to consumers.

This is nominally the focus of the Journal story: a car trunk organiser company with an (apparently) innovative product says that Amazon moving in to compete with its own AmazonBasics version competed away many of its sales. In this sort of situation, the free-rider problem described above might apply where future innovation is discouraged. Why bother to invent things like this if you’re just going to have your invention ripped off?

Of course, many such innovations are protected by patents. But some valuable innovations are not, and even patented innovations are imperfectly protected, given the costs of enforcement. A platform like Amazon, however, can adopt rules that fine-tune the protections offered by the legal system in an effort to increase the value of the platform for innovators and consumers alike.

And that may be why Amazon has its rule against using individual seller data to compete: to allow creators of new products to collect more rents from their inventions, with a promise that, unless and until their product is commodified by other means (as indicated by the product being available from multiple other sellers), Amazon won’t compete against such sellers using any special insights it might have from that seller using Amazon’s Marketplace. 

This doesn’t mean Amazon refuses to compete (or refuses to allow others to compete); it has other rules that sometimes determine that boundary, as when it enters into agreements with certain brands to permit sales of the brand on the platform only by sellers authorized by the brand owner. Rather, this rule is a more limited—but perhaps no less important—one that should entice innovators to use Amazon’s platform to sell their products without concern that doing so will create a special risk that Amazon can compete away their returns using information uniquely available to it. In effect, it’s a promise that innovators won’t lose more by choosing to sell on Amazon rather than through other retail channels.

Like other platforms, to maximise its profits Amazon needs to strike a balance between being an attractive place for third party merchants to sell their goods, and being attractive to consumers by offering as many inexpensive, innovative, and reliable products as possible. Striking that balance is challenging, but a rule that restrains the platform from using its unique position to expropriate value from innovative sellers helps to protect the income of genuinely innovative third parties, and induces them to sell products consumers want on Amazon, while still allowing Amazon (and third-party sellers) to compete with commodity products. 

The fact that Amazon has strong competition online and offline certainly acts as an important constraint here, too: if Amazon behaved too badly, third parties might not sell on it at all, and Amazon would have none of the seller data that is allegedly so valuable to it.

But even in a world where Amazon had a huge, sticky customer base that meant it was not an option to sell elsewhere—which the Journal article somewhat improbably implies—Amazon would still need third parties to innovate and sell things on its platform. 

What the Journal story really seems to demonstrate is the sort of genuine principal-agent problem that all large businesses face: the company as a whole needs to restrain its private label section in various respects, but agents within that section want to break those rules to maximise their personal performance (in this case, by launching a successful new AmazonBasics product). It’s like a rogue trader at a bank who breaks the rules in the hope that good results will make her look good.

This is just one of many rules that a platform like Amazon maintains to preserve the value of its platform. It’s probably not the most important one. But understanding why it exists may help us to understand why simple stories of platform predation don’t add up, and may demonstrate the mechanisms that companies like Amazon use to maximise the total value of their platform, not just one part of it.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Hal Singer (Managing Director, Econ One; Adjunct Professor, Georgetown University, McDonough School of Business).]

In these harrowing times, it is natural to fixate on the problem of testing—and how the United States got so far behind South Korea on this front—as a means to arrest the spread of Coronavirus. Under this remedy, once testing becomes ubiquitous, the government could track and isolate everyone who has been in recent contact with someone diagnosed with Covid-19.

A good start, but there are several pitfalls from “contact tracing” or what I call “standalone testing.” First, it creates an outsized role for government and raises privacy concerns relating to how data on our movements and in-person contacts are shared. Second, unless the test results were instantaneously available and continuously updated, data from the tests would not be actionable. A subject could be clear of the virus on Tuesday, get tested on Wednesday, and be exposed to the virus on Friday.

Third, and a pitfall easily recognizable to economists, standalone testing does not provide any means by which healthy subjects can credibly signal to their peers that they are safe to be around. Given how heavily the economy is skewed towards services—from restaurants to gyms and yoga studios to coffee bars—it is vital that we can interact physically. To return to work, or to enter a restaurant or any other high-density environment, the healthy subject must convey to her peers that she is healthy, and other co-workers or patrons must signal their health to the subject in turn. Without this mutual trust, healthy workers will be reluctant to return to the workplace or to reintegrate into society. It is not enough for complete strangers to say “I’m safe.” How do I know you are safe?

As law professor Thom Lambert tweeted, this information problem is related to the famous lemons problem identified by Nobel laureate George Akerlof: We “can’t tell ‘quality’ so we assume everyone’s a lemon and act accordingly. We once had that problem with rides from strangers, but entrepreneurship and technology solved the problem.”

Akerlof recognized that markets are prone to failure in the face of “asymmetric information,” or when a seller knows a material fact that the buyer does not. He showed that a market for used cars could degenerate into a market exclusively for lemons, because buyers, rationally unwilling to pay the full value of a good car, impose a discount on all sellers that drives good cars out of the market.
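The unraveling Akerlof described can be seen with a stylized numeric sketch (the figures below are illustrative assumptions, not from Akerlof or this post): buyers offer the expected value of a car given who is still selling, and once that pooled price falls below good-car sellers’ reservation price, the good cars exit and only lemons remain.

```python
# Stylized Akerlof "market for lemons" (all numbers are illustrative).
# Buyers value a good car at 100 and a lemon at 20; sellers' reservation
# prices are 80 and 10. Buyers can't observe quality, so they offer the
# expected value over the sellers who remain in the market.

GOOD_VALUE, LEMON_VALUE = 100, 20
GOOD_RESERVE = 80

def market_price(share_good):
    """Buyers' willingness to pay: expected value over remaining sellers."""
    return share_good * GOOD_VALUE + (1 - share_good) * LEMON_VALUE

share_good = 0.5  # half the cars start out good
price = market_price(share_good)          # pooled price: 60
if price < GOOD_RESERVE:
    share_good = 0.0                      # good-car sellers exit the market
print(market_price(share_good))           # only lemons remain -> 20.0
```

The pooled price of 60 is below the 80 a good-car seller requires, so good cars leave and the price collapses to the lemon value—exactly the trust problem a third-party verifier (Uber, the TSA, or a test vendor) is meant to solve.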

To solve this related problem, we need a way to verify our good health. Borrowing Lambert’s analogy, most Americans (barring hitchhikers) would never jump into a random car without knowing that the driver worked for a reputable ride-hailing service or licensed taxi. When an Uber driver pulls up to the curb, the rider can feel confident that the driver has been verified (and vice versa) by a third party—in this case, Uber—and if there’s any doubt about the driver’s credentials, the driver typically speaks the passenger’s name while the door is still ajar. Uber also mitigated the lemons problem by allowing passengers and drivers to rate each other.

Similarly, when a passenger shows up at the airport, he presents a ticket, typically in electronic form on his phone, to a TSA officer. The phone is scanned by security, and verification of ticket and TSA PreCheck status is confirmed via rapid communication with the airline. The same verification is repeated at stadium venues across America, thanks in part to technology developed by StubHub.

A similar verification technology could be deployed to solve the trust problem relating to Coronavirus. It is meant to complement standalone testing. Here’s how it might work:

Each household would have a designated testing center in its community and potentially a test kit in its own home. Testing would be done routinely and free of charge, so as to ensure that test results are up to date. (Given the positive externalities associated with mass testing and verification, the optimal price is not positive.) Just as an airline sends confirmation of a ticket purchase, the company responsible for administering the test would report the results to the subject within an hour, and the results would be stored for 24 hours in the vendor’s app. In contrast to the invasive role of government in contact tracing, the only role for government here would be to approve qualified vendors of the testing equipment.

Armed with third-party verification of her health status on her phone, the subject could present these results to a gatekeeper at any facility. Suppose the subject typically takes the metro to work, and stops at her gym before going home. Under this regime, she would present her phone to three gatekeepers (metro, work, gym) to obtain access. Of course, subjects who test positive for Coronavirus would not gain access to these secure sites until the virus has left their system and they subsequently test negative. That seems harsh for them, but imposing this restriction isn’t really a degradation in mobility relative to the status quo, under which access is denied to everyone.
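Mechanically, the credential described above could work like a boarding pass: the approved vendor signs the result, and any gatekeeper checks the signature and the 24-hour expiry. The sketch below is a hypothetical illustration, not a description of any actual app; all names, the shared-key scheme, and the 24-hour window as a hard parameter are assumptions for the example.

```python
import hashlib
import hmac
import json

# Hypothetical verification credential: a government-approved test vendor
# signs the result; a gatekeeper app verifies the signature and freshness.
# A real system would use public-key signatures rather than a shared key.

VENDOR_KEY = b"demo-key-held-by-approved-vendor"  # illustrative only
VALIDITY_SECONDS = 24 * 60 * 60                   # results expire after 24h

def issue_credential(subject_id, result, tested_at):
    """Vendor signs the test result so gatekeepers can detect tampering."""
    payload = json.dumps(
        {"sub": subject_id, "result": result, "tested_at": tested_at},
        sort_keys=True,
    )
    sig = hmac.new(VENDOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def gatekeeper_admits(credential, now):
    """Admit only if the signature checks out and the negative result is fresh."""
    expected = hmac.new(
        VENDOR_KEY, credential["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False  # forged or tampered credential
    data = json.loads(credential["payload"])
    fresh = now - data["tested_at"] <= VALIDITY_SECONDS
    return data["result"] == "negative" and fresh

cred = issue_credential("rider-42", "negative", tested_at=1_000_000)
print(gatekeeper_admits(cred, now=1_000_000 + 3600))  # prints True
```

Note that the gatekeeper learns only a pass/fail answer, which matches the privacy point below: the health status conveyed voluntarily stops at the door, and a fraudulent app fails the signature check because it lacks an approved vendor’s key.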

When I floated this idea on Twitter a few days ago, it was generally well received, but even supporters spotted potential shortcomings. For example, users could have a fraudulent app on their phones, or otherwise fake a negative result. Yet government sanctioning of a select group of test vendors should prevent this type of fraud. Private gatekeepers such as restaurants presumably would not have to operate under any mandate; they have a clear incentive not only to restrict access to verified patrons, but also to advertise that they have strict rules on admission. By the same token, if they did, for some reason, allow people to enter without verification, they could do so. But patrons’ concern for their own health likely would undermine such a permissive policy.

Other skeptics raised privacy concerns. But if a user voluntarily conveys her health status to a gatekeeper, and the information stops there, it’s hard to conceive of a privacy violation. A potential violation would arise if an equipment vendor shared a user’s health status with third parties. Of course, the government could impose restrictions on a vendor’s data sharing as a condition of granting a license to test and verify. But given the circumstances, such sharing could support contact tracing, or allow supplies to be mobilized to areas experiencing outbreaks.

Still others noted that some Americans lack phones. For these Americans, paper verification would suffice—or, better yet, subsidized phones could be provided.

No solution is flawless. And it’s incredible that we even have to think this way. But who could have imagined, even a few weeks ago, that we would be pinned in our basements, afraid to interact with the world in close quarters? Desperate times call for creative and economically sound measures.