
The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” Plaintiffs’ (respondents before the Court) theory is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, on the one hand, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct, on the other. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.

Taamneh presents a complex question of intermediary liability generally that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred; on the other, we want to preserve the open nature of the Internet. The costs generated by the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. MindGeek

Fleites v. MindGeek illustrates well that the proper application of liability to intermediaries exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Act (TVPA) for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries you could theoretically bring into a liability scheme. First, obviously, is Mindgeek, as the platform operator. Plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed, one that tries to draw a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, and which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for MindGeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of Pornhub. For example, purveyors of illegal content on Pornhub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, MindGeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site.

But liability for MindGeek is only even plausible given that it might be able to act in such a way that imposes greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down Pornhub entirely, then just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with that of Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability: it has the ability to monitor and control what is posted on its platform, and it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.   

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiffs’ theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is simply too far afield to support holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false positives, along with considerable costs to users and to open Internet speech.

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

Twitter has seen a lot of ups and downs since Elon Musk closed on his acquisition of the company in late October and almost immediately set about his initiatives to “reform” the platform’s operations.

One of the stories that has gotten somewhat lost in the ensuing chaos is that, in the short time under Musk, Twitter has made significant inroads—on at least some margins—against the visibility of child sexual abuse material (CSAM) by removing major hashtags that were used to share it, creating a direct reporting option, and removing major purveyors. On the other hand, due to the large reductions in Twitter’s workforce—both voluntary and involuntary—there are now very few human reviewers left to deal with the issue.

Section 230 immunity currently protects online intermediaries from most civil suits for CSAM (a narrow carveout is made under Section 1595 of the Trafficking Victims Protection Act). While the federal government could bring criminal charges if it believes online intermediaries are violating federal CSAM laws, and certain narrow state criminal claims could be brought consistent with federal law, private litigants are largely left without the ability to find redress on their own in the courts.

This, among other reasons, is why there has been a push to amend Section 230 immunity. Our proposal (co-authored with Geoffrey Manne) suggests that online intermediaries should have a reasonable duty of care to remove illegal content. But this still requires thinking carefully about what a reasonable duty of care entails.

For instance, one of the big splash moves made by Twitter after Musk’s acquisition was to remove major CSAM distribution hashtags. While this did limit visibility of CSAM for a time, some experts say it doesn’t really solve the problem, as new hashtags will arise. So, would a reasonableness standard require the periodic removal of major hashtags? Perhaps it would. It appears to have been a relatively low-cost way to reduce access to such material, and could theoretically be incorporated into a larger program that uses automated discovery to find and remove future hashtags.

Of course it won’t be perfect, and will be subject to something of a Whac-A-Mole dynamic. But the relevant question isn’t whether it’s a perfect solution, but whether it yields significant benefit relative to its cost, such that it should be regarded as a legally reasonable measure that platforms should broadly implement.

On the flip side, Twitter has lost such a large amount of its workforce that it potentially no longer has enough staff to do the important review of CSAM. As long as Twitter allows adult nudity, and algorithms are unable to effectively distinguish between different types of nudity, human reviewers remain essential. A reasonableness standard might also require sufficient staff and funding dedicated to reviewing posts for CSAM. 

But what does it mean for a platform to behave “reasonably”?

Platforms Should Behave ‘Reasonably’

Rethinking platforms’ safe harbor from liability as governed by a “reasonableness” standard offers a way to more effectively navigate the complexities of these tradeoffs without resorting to the binary of immunity or total liability that typically characterizes discussions of Section 230 reform.

It could be the case that, given the reality that machines can’t distinguish between “good” and “bad” nudity, it is patently unreasonable for an open platform to allow any nudity at all if it is run with the level of staffing that Musk seems to prefer for Twitter.

Consider the situation that MindGeek faced a couple of years ago. It was pressured by financial providers, including PayPal and Visa, to clean up the CSAM and nonconsensual pornography that appeared on its websites. In response, MindGeek removed more than 80% of suspected illicit content and required greater authentication for posting.

Notwithstanding efforts to clean up the service, a lawsuit was filed against MindGeek and Visa by victims who asserted that the credit-card company was a knowing conspirator for processing payments to MindGeek’s sites when they were purveying child pornography. Notably, Section 230 issues were dismissed early on in the case, but the remaining claims—rooted in the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Act (TVPA)—contained elements that support evaluating the conduct of online intermediaries, including payment providers who support online services, through a reasonableness lens.

In our amicus, we stressed the broader policy implications of failing to appropriately demarcate the bounds of liability. In short, we argued that deterrence is best encouraged by placing responsibility for control on the party best positioned to monitor the situation—i.e., MindGeek, and not Visa. Underlying this, we believe that an appropriately tuned reasonableness standard should be able to foreclose these sorts of inquiries at early stages of litigation if there is good evidence that an intermediary behaved reasonably under the circumstances.

In this case, we believed the court should have taken seriously the fact that a payment processor needs to balance a number of competing demands—legal, economic, and moral—in a way that enables it to serve its necessary prosocial role. Here, Visa had to balance its role, on the one hand, as a neutral intermediary responsible for handling millions of daily transactions, against its interest, on the other, in ensuring that it did not facilitate illegal behavior. But it was also operating, essentially, under a veil of ignorance: all of the information it had was derived from news reports, as it was not directly involved in, nor did it have special insight into, the operation of MindGeek’s businesses.

As we stressed in our intermediary-liability paper, there is indeed a valid concern that changes to intermediary-liability policy not invite a flood of ruinous litigation. Instead, there needs to be some ability to determine at the early stages of litigation whether a defendant behaved reasonably under the circumstances. In the MindGeek case, we believed that Visa did.

In essence, much of this approach to intermediary liability boils down to finding socially and economically efficient dividing lines that can broadly demarcate when liability should attach. For example, if Visa is liable as a co-conspirator in MindGeek’s allegedly illegal enterprise for providing a payment network that MindGeek uses by virtue of its relationship with yet other intermediaries (i.e., the banks that actually accept and process the credit-card payments), why isn’t the U.S. Post Office also liable for providing package-delivery services that allow MindGeek to operate? Or its maintenance contractor for cleaning and maintaining its offices?

Twitter implicitly engaged in this sort of analysis when it considered becoming an OnlyFans competitor. Despite having considerable resources—both algorithmic and human—Twitter’s internal team determined they could not “accurately detect child sexual exploitation and non-consensual nudity at scale.” As a result, they abandoned the project. Similarly, Tumblr tried to make many changes, including taking down CSAM hashtags, before finally giving up and removing all pornographic material in order to remain in the App Store for iOS. At root, these firms demonstrated the ability to weigh costs and benefits in ways entirely consistent with a reasonableness analysis. 

Thinking about the MindGeek situation again, it could also be the case that MindGeek did not behave reasonably. Some of MindGeek’s sites encouraged the upload of user-generated pornography. If MindGeek faced the same limitations in distinguishing between “good” and “bad” pornography (which is likely), it could be that the company behaved recklessly for many years, and only tightened its verification procedures once it was caught. If true, that is behavior that should not be protected by the law with a liability shield, as it is patently unreasonable.

Apple is sometimes derided as an unfair gatekeeper of speech through its App Store. But, ironically, Apple itself has made complex tradeoffs between data security and privacy—through use of encryption, on the one hand, and scanning devices for CSAM, on the other. Prioritizing encryption over scanning devices (especially photos and messages) for CSAM is a choice that could allow more CSAM to proliferate. But the choice is, again, a difficult one: how much moderation is needed, and how do you balance such costs against other values important to users, such as privacy for the vast majority of nonoffending users?

As always, these issues are complex and involve tradeoffs. But it is obvious that more can and needs to be done by online intermediaries to remove CSAM.

But What Is ‘Reasonable’? And How Do We Get There?

The million-dollar legal question is what counts as “reasonable”? We recognize that, particularly when dealing with online platforms that serve millions of users a day, there is a great deal of surface area exposed to litigation over potentially illicit user-generated conduct. Thus, it is not the case, at least for the foreseeable future, that we should throw open the gates of a full-blown common-law process to determine questions of intermediary liability. What is needed, instead, is a phased-in approach that gets courts in the business of parsing these hard questions and building up a body of principles that, on the one hand, encourages platforms to do more to control illicit content on their services, and on the other, discourages unmeritorious lawsuits by the plaintiffs’ bar.

One of our proposals for Section 230 reform is for a multistakeholder body, overseen by an expert agency like the Federal Trade Commission or National Institute of Standards and Technology, to create certified moderation policies. This would involve online intermediaries working together with a convening federal expert agency to develop a set of best practices for removing CSAM, including thinking through the cost-benefit analysis of more moderation—human or algorithmic—or even wholesale removal of nudity and pornographic content.

Compliance with these standards should, in most cases, operate to foreclose litigation against online service providers at an early stage. If such best practices are followed, a defendant could point to its moderation policies as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor similar to the one currently in place in Section 230.

In litigation, after a defendant answers a complaint with its certified moderation policies, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as with fraud, requiring particularity. And, periodically, the body overseeing the development of this process would incorporate changes to the best-practices standards based on the cases brought before the courts.

Online service providers don’t need to be perfect in their content-moderation decisions, but they should behave reasonably. A properly designed duty-of-care standard should be flexible and account for a platform’s scale, the nature and size of its user base, and the costs of compliance, among other considerations. What is appropriate for YouTube, Facebook, or Twitter may not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform.

Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common-law negligence. Allowing courts to apply the flexible common-law duty of reasonable care would also enable jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Conclusion

Twitter and other online intermediaries continue to struggle with the best approach to removing CSAM, nonconsensual pornography, and a whole host of other illicit content. There are no easy answers, but there are strong ethical reasons, as well as legal and market pressures, to do more. Section 230 reform is just one part of a complete regulatory framework, but it is an important part of getting intermediary liability incentives right. A reasonableness approach that would hold online platforms accountable in a cost-beneficial way is likely to be a key part of a positive reform agenda for Section 230.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that the bill would reach more than 80 companies, but it is likely to reach far more. While the Klobuchar bill does not explicitly outlaw such mergers, under certain circumstances it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. At least 120 U.S. companies satisfy one or more of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
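The burden-shifting trigger described above reduces to a simple size-and-deal test. The following sketch uses the thresholds as described in this post; the function name and inputs are hypothetical, and this is an illustration of the trigger logic, not legal analysis:

```python
# Thresholds as described in the post (dollars).
THRESHOLD_SIZE = 100_000_000_000  # $100B in market cap, assets, or net revenue
THRESHOLD_DEAL = 50_000_000       # $50M merger or acquisition value

def burden_shifts(market_cap: float, assets: float, net_revenue: float,
                  deal_value: float) -> bool:
    """Return True if the burden of proof would shift to the merging parties.

    The acquirer need only exceed $100B on ANY one of the three size
    measures, and the deal must be valued at $50M or more.
    """
    big_enough = max(market_cap, assets, net_revenue) > THRESHOLD_SIZE
    return big_enough and deal_value >= THRESHOLD_DEAL
```

Note how low the $50 million deal threshold is relative to the $100 billion size test: for a qualifying acquirer, virtually any non-trivial acquisition would flip the burden of proof.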

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (formerly known as Google)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms would be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of the thresholds will result in arbitrary application of the burden of proof. If passed, we will soon be faced with a case in which two firms who differ only in market cap, assets, or sales will be subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Danaher Corp., PepsiCo
Abbott Laboratories, Deere & Co., Pfizer
AbbVie, Eli Lilly and Co., Philip Morris International
Adobe Inc., ExxonMobil, Procter & Gamble
Advanced Micro Devices, Facebook Inc., Qualcomm
Alphabet Inc., General Electric Co., Raytheon Technologies
Amazon, Goldman Sachs, Salesforce
American Express, Honeywell, ServiceNow
American Tower, IBM, Square Inc.
Amgen, Intel, Starbucks
Apple Inc., Intuit, Target Corp.
Applied Materials, Intuitive Surgical, Tesla Inc.
AT&T, Johnson & Johnson, Texas Instruments
Bank of America, JPMorgan Chase, The Coca-Cola Co.
Berkshire Hathaway, Lockheed Martin, The Estée Lauder Cos.
BlackRock, Lowe’s, The Home Depot
Boeing, Mastercard, The Walt Disney Co.
Bristol Myers Squibb, McDonald’s, Thermo Fisher Scientific
Broadcom Inc., Medtronic, T-Mobile US
Caterpillar Inc., Merck & Co., Union Pacific Corp.
Charles Schwab Corp., Microsoft, United Parcel Service
Charter Communications, Morgan Stanley, UnitedHealth Group
Chevron Corp., Netflix, Verizon Communications
Cisco Systems, NextEra Energy, Visa Inc.
Citigroup, Nike Inc., Walmart
Comcast, Nvidia, Wells Fargo
Costco, Oracle Corp., Zoom Video Communications
CVS Health, PayPal

Publicly traded companies with more than $100 billion in current assets

Ally Financial, Freddie Mac
American International Group, KeyBank
BNY Mellon, M&T Bank
Capital One, Northern Trust
Citizens Financial Group, PNC Financial Services
Fannie Mae, Regions Financial Corp.
Fifth Third Bank, State Street Corp.
First Republic Bank, Truist Financial
Ford Motor Co., U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Dell Technologies
Anthem, General Motors
Cardinal Health, Kroger
Centene Corp., McKesson Corp.
Cigna, Walgreens Boots Alliance

In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.

While much of the commentary to date has been on whether Thomas got the legal analysis right, or on the uncomfortable fit of common-carriage law to social media, the deeper question of the First Amendment’s protection of private ordering has received relatively short shrift.

Conservatives’ main argument has been that Big Tech needs to be reined in because it is restricting the speech of private individuals. While conservatives traditionally have defended the state-action doctrine and the right to editorial discretion, they now readily find exceptions to both in order to justify regulating social-media companies. But those two First Amendment doctrines have long enshrined an important general principle: private actors can set the rules for speech on their own property. I intend to analyze this principle from a law & economics perspective and show how it benefits society.

Who Balances the Benefits and Costs of Speech?

Like virtually any other human activity, speech carries both benefits and costs, and it is ultimately subjective individual preference that determines the value any given speech has. The First Amendment protects speech from governmental regulation, with only limited exceptions, but that does not mean all speech is acceptable or must be tolerated. Under the state-action doctrine, the First Amendment only prevents the government from restricting speech.

Some purported defenders of the principle of free speech no longer appear to see a distinction between restraints on speech imposed by the government and those imposed by private actors. But this is surely mistaken, as no one truly believes all speech protected by the First Amendment should be without consequence. In truth, most regulation of speech has always come by informal means—social mores enforced by dirty looks or responsive speech from others.

Moreover, property rights have long played a crucial role in determining speech rules within any given space. If a man were to come into my house and start calling my wife racial epithets, I would not only ask that person to leave but would exercise my right as a property owner to eject the trespasser—if necessary, calling the police to assist me. I similarly could not expect to go to a restaurant and yell at the top of my lungs about political issues and expect them—even as “common carriers” or places of public accommodation—to allow me to continue.

As Thomas Sowell wrote in Knowledge and Decisions:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them―or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.

Knowledge and Decisions, p. 240

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to play the role. Asking government to use regulation to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—government by its nature can only hand down categorical guidelines: “you must allow x, y, and z speech.”

This is particularly relevant in the sphere of social media. Social-media companies are multi-sided platforms. They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users engaged.

In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech that are generally viewed negatively by users, even if the First Amendment would foreclose the government from regulating those same types of content. This is a good thing. Social-media companies balance the speech interests of different kinds of users to maximize the value of the platform and, in turn, to maximize benefits to all.

Herein lies the fundamental difference between private action and state action: one is voluntary, while the other is based on coercion. If Facebook or Twitter suspends a user for violating community rules, it represents termination of a previously voluntary association. If the government kicks someone out of a public forum for expressing legal speech, that is coercion. The state-action doctrine recognizes this fundamental difference and creates a bright-line rule that courts may police when it comes to speech claims. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

Knowledge and Decisions, p. 244

Markets Produce the Best Moderation Policies

The First Amendment also protects the right of editorial discretion, which means publishers, platforms, and other speakers cannot be compelled to carry or transmit speech mandated by the government. Even a newspaper with near-monopoly power cannot be compelled by a right-of-reply statute to carry responses by political candidates to editorials it has published. In other words, not only is private regulation of speech not state action, but in many cases, private regulation is itself protected by the First Amendment.

There is no reason to think that social-media companies today are in a different position than was the newspaper in Miami Herald v. Tornillo. These companies must determine what, how, and where content is presented on their platforms. While this right of editorial discretion protects the moderation decisions of social-media companies, its benefits accrue to society at large.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them. How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes. Market competition, not government power, has enabled internet users (including conservatives!) to have more avenues than ever to get their message out.

Many conservatives remain unpersuaded by the power of markets in this case. They see multiple platforms all engaging in very similar content-moderation policies when it comes to certain touchpoint issues, and thus allege widespread anti-conservative bias and collusion. Neither of those claims has much factual support, but more importantly, the similarity of content-moderation standards may simply reflect common responses to similar demand structures—not some nefarious and conspiratorial plot.

In other words, if social-media users demand less of the kinds of content commonly considered to be hate speech, or less misinformation on certain important issues, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that forcing them to carry all “legal” speech—which would include not just misinformation and hate speech, but pornographic material, as well—would better serve social-media users. There are always alternative means to debate contestable issues of the day, even if it may be more costly to access them.

Indeed, that content-moderation policies make it more difficult to communicate some messages is precisely the point of having them. There is a subset of protected speech to which many users do not wish to be subject. Moreover, there is no inherent right to have an audience on a social-media platform.

Conclusion

Much of the First Amendment’s economic value lies in how it defines roles in the market for speech. As a general matter, it is not the government’s place to determine what speech should be allowed in private spaces. Instead, the private ordering of speech emerges through the application of social mores and property rights. This benefits society, as it allows individuals to create voluntary relationships built on marginal decisions about what speech is acceptable when and where, rather than centralized decisions made by a governing few and that are difficult to change over time.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy, along with a two-justice concurrence, both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not sanction penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government not only would not be the most narrowly tailored way to deal with such speech, but it is bound to have chilling effects even on true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to regard sponsored content as constituting speech made by the platform, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And they do. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedowns by court order of speech after it is declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the “safe harbor” law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech.

In response to Twitter’s decision, along with an executive order released by the President attacking Section 230, Sen. Josh Hawley (R-Mo.) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act.” The bill would require online platforms to engage in “good faith” moderation according to clearly stated terms of service, in effect restricting Section 230’s protections to online platforms deemed to have done enough to moderate content “fairly.”

While this may seem a sensible standard, the approach, if enacted, would violate the First Amendment as an unconstitutional condition on a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.

There is established legal precedent that Congress may not grant benefits on conditions that violate Constitutionally-protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated. 

Aware of this precedent, the bill attempts to circumvent the obstacle by taking away Section 230 immunity for issues unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill attempts to condition immunity for platforms on having terms of service for content moderation, and making them subject to lawsuits if they do not act in “good faith” in policing them. 

It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard only appears to apply to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in its terms of service that it considers some forms of conservative speech to be “hate speech” it will not allow.

Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone would not be unconstitutional. 

But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.

These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open them up to being sued for any moderation, including moderation completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook, whether for online bullying or for spreading false information, the same scenario could play out. 

This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.

Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation on a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.

It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about. 

This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine”, which required that radio stations provide a “balanced discussion” and, in practice, allowed courts and federal agencies to police content until it was repealed under President Reagan. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News for its “Fair and Balanced” slogan, stating:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

And more recently, conservatives argued that businesses like Masterpiece Cakeshop should not be compelled to speak against their will. All of these cases demonstrate that once the state starts to stipulate what views private organisations can and cannot broadcast, conservatives will be the ones who suffer.

Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution, and would trample over the very rights to freedom of speech that the Constitution guarantees. Conservatives should reject it.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer, (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]

Across the globe, millions of people are rapidly coming to terms with the harsh realities of life under lockdown. As governments impose ever-greater social distancing measures, many of the daily comforts we took for granted are no longer available to us. 

And yet, we can all take solace in the knowledge that our current predicament would have been far less tolerable if the COVID-19 outbreak had hit us twenty years ago. Among others, we have Big Tech firms to thank for this silver lining. 

Contrary to the claims of critics, such as Senator Josh Hawley, Big Tech has produced game-changing innovations that dramatically improve our ability to fight COVID-19. 

The previous post in this series showed that innovations produced by Big Tech provide us with critical information, allow us to maintain some level of social interactions (despite living under lockdown), and have enabled companies, universities and schools to continue functioning (albeit at a severely reduced pace).

But apart from information, social interactions, and online working (and learning); what has Big Tech ever done for us?

One of the most underappreciated ways in which technology (mostly pioneered by Big Tech firms) is helping the world deal with COVID-19 has been a rapid shift towards contactless economic transactions. Not only are consumers turning towards digital goods to fill their spare time, but physical goods (most notably food) are increasingly being exchanged without any direct contact.

These ongoing changes would be impossible without the innovations and infrastructure that have emerged from tech and telecommunications companies over the last couple of decades. 

Of course, the overall picture is still bleak. The shift to contactless transactions has only slightly softened the tremendous blow suffered by the retail and restaurant industries – some predictions suggest their overall revenue could fall by at least 50% in the second quarter of 2020. Nevertheless, as explained below, this situation would likely be significantly worse without the many innovations produced by Big Tech companies. For that we should be thankful.

1. Food and other goods

For a start, the COVID-19 outbreak (and government measures to combat it) has caused many brick & mortar stores and restaurants to shut down. These closures would have been far harder to implement before the advent of online retail and food delivery platforms.

At the time of writing, e-commerce websites already appear to have witnessed a 20-30% increase in sales (other sources report a 52% increase compared to the same time last year). This increase will likely continue in the coming months.

The Amazon Retail platform has been at the forefront of this online shift.

  • Having witnessed a surge in online shopping, Amazon announced that it would be hiring 100,000 distribution workers to cope with the increased demand. Amazon’s staff have also been asked to work overtime in order to meet increased demand (in exchange, Amazon has doubled their pay for overtime hours).
  • To attract these new hires and ensure that existing ones continue working, Amazon simultaneously announced that it would be increasing wages in virus-hit countries (from $15 to $17 per hour, in the US).
  • Amazon also stopped accepting “non-essential” goods in its warehouses, in order to prioritize the sale of household essentials and medical goods that are in high demand.
  • Finally, in Italy, Amazon decided not to stop its operations, despite some employees testing positive for COVID-19. Controversial as this move may be, Amazon’s private interests are aligned with those of society – maintaining the supply of essential goods is now more important than ever. 

And it is not just Amazon that is seeking to fill the breach left temporarily by brick & mortar retail. Other retailers are also stepping up efforts to distribute their goods online.

  • The apps of traditional retail chains have witnessed record daily downloads (thus relying on the smartphone platforms pioneered by Google and Apple).
  • Walmart has become the go-to choice for online food purchases:

(Source: Bloomberg)

The shift to online shopping mimics what occurred in China during its own COVID-19 lockdown. 

  • According to an article published in HBR, e-commerce penetration reached 36.6% of retail sales in China (compared to 29.7% in 2019). The same article explains how Alibaba’s technology is enabling traditional retailers to better manage their supply chains, ultimately helping them to sell their goods online.
  • A study by Nielsen found that 67% of retailers planned to expand their online channels. 
  • One large retailer shut many of its physical stores and redeployed many of its employees to serve as online influencers on WeChat, thus attempting to boost online sales.
  • Spurred by compassion and/or a desire to boost its brand abroad, Alibaba and its founder, Jack Ma, have made large efforts to provide critical medical supplies (notably tests kits and surgical masks) to COVID-hit countries such as the US and Belgium.

And it is not just retail that is adapting to the outbreak. Many restaurants are trying to stay afloat by shifting from in-house dining to deliveries. These attempts have been made possible by the emergence of food delivery platforms, such as UberEats and Deliveroo. 

These platforms have taken several steps to facilitate food deliveries during the outbreak.

  • UberEats announced that it would be waiving delivery fees for independent restaurants.
  • Both UberEats and Deliveroo have put in place systems for deliveries to take place without direct physical contact. While not entirely risk-free, meal delivery can provide welcome relief to people experiencing stressful lockdown conditions.

Similarly, the shares of Blue Apron – an online meal-kit delivery service – have surged more than 600% since the start of the outbreak.

In short, COVID-19 has caused a drastic shift towards contactless retail and food delivery services. It is an open question how much of this shift would have been possible without the pioneering business model innovations brought about by Amazon and its online retail platform, as well as modern food delivery platforms, such as UberEats and Deliveroo. At the very least, it seems unlikely that it would have happened as fast.

The entertainment industry is another area where increasing digitization has made lockdowns more bearable. The reason is obvious: locked-down consumers still require some form of amusement. With physical supply chains under tremendous strain, and social gatherings no longer an option, digital media has thus become the default choice for many.

Data published by Verizon shows a sharp increase (in the week running from March 9 to March 16) in the consumption of digital entertainment, especially gaming:

This echoes other sources, which also report that the use of traditional streaming platforms has surged in areas hit by COVID-19.

  • Netflix subscriptions are said to be spiking in locked-down communities. During the first week of March, Netflix installations increased by 77% in Italy and 33% in Spain, compared to the February average. Netflix app downloads increased by 33% in Hong Kong and South Korea. The Amazon Prime app saw a similar increase.
  • YouTube has also witnessed a surge in usage. 
  • Live streaming (on platforms such as Periscope, Twitch, YouTube, Facebook, Instagram, etc) has also increased in popularity. It is notably being used for everything from concerts and comedy clubs to religious services, and even zoo visits.
  • Disney Plus has also been highly popular. According to one source, half of US homes with children under the age of 10 purchased a Disney Plus subscription. This trend is expected to continue during the COVID-19 outbreak. Disney even released Frozen II three months ahead of schedule in order to boost new subscriptions.
  • Hollywood studios have started releasing some of their lower-profile titles directly on streaming services.

Traffic has also increased significantly on popular gaming platforms.

This is just a tiny sample of the many ways in which digital entertainment is filling the void left by social gatherings. It is thus central to the lives of people under lockdown.

2. Cashless payments

But all of the services listed above rely on cashless payments – be it to limit the risk of contagion or because these transactions take place remotely. Fintech innovations have thus turned out to be one of the foundations that make social distancing policies viable. 

This is particularly evident in the food industry. 

  • Food delivery platforms, like UberEats and Deliveroo, already relied on mobile payments.
  • Costa Coffee (a UK equivalent of Starbucks) went cashless in an attempt to limit the spread of COVID-19.
  • Domino’s Pizza, among other franchises, announced that it would move to contactless deliveries.
  • President Donald Trump is said to have discussed plans to keep drive-thru restaurants open during the outbreak. This would almost certainly imply exclusively digital payments.
  • And although doubts remain concerning the extent to which the SARS-CoV-2 virus may, or may not, be transmitted via banknotes and coins, many other businesses have preemptively ceased to accept cash payments.

As Jodie Kelley – CEO of the Electronic Transactions Association – put it in a CNBC interview:

Contactless payments have come up as a new option for consumers who are much more conscious of what they touch. 

This increased demand for cashless payments has been a blessing for Fintech firms. 

  • Though it is too early to gauge the magnitude of this shift, early signs – notably from China – suggest that mobile payments have become more common during the outbreak.
  • In China, Alipay announced that it expected to radically expand its services to new sectors – restaurants, cinema bookings, real estate purchases – in an attempt to compete with WeChat.
  • PayPal has also witnessed an uptick in transactions, though this growth might ultimately be weighed down by declining economic activity.
  • In the past, Facebook had revealed plans to offer mobile payments across its platforms – Facebook, WhatsApp, Instagram & Libra. Those plans may not have been politically viable at the time. The COVID-19 outbreak could conceivably change this.

In short, the COVID-19 outbreak has increased our reliance on digital payments, as these can both take place remotely and, potentially, limit contamination via banknotes. None of this would have been possible twenty years ago when industry pioneers, such as PayPal, were in their infancy. 

3. High speed internet access

Similarly, it goes without saying that none of the above would be possible without the tremendous investments that have been made in broadband infrastructure, most notably by internet service providers. Though these companies have often faced strong criticism from the public, they provide the backbone upon which outbreak-stricken economies can function.

By causing so many activities to move online, the COVID-19 outbreak has put broadband networks to the test. So far, broadband infrastructure around the world has been up to the task. This is partly because the spike in usage has occurred during daytime hours (when networks’ capacity is less strained), but also because ISPs traditionally rely on a number of tools to limit peak-time usage.

The biggest increases in usage seem to have occurred in daytime hours. As data from OpenVault illustrates:

According to BT, one of the UK’s largest telecoms operators, daytime internet usage is up by 50%, but peaks are still well within record levels (and other UK operators have made similar claims):

Anecdotal data also suggests that, so far, fixed internet providers have not significantly struggled to handle this increased traffic (the same goes for Content Delivery Networks). Not only were these networks already designed to withstand high peaks in demand, but ISPs, such as Verizon, have increased their capacity to avoid potential issues.

For instance, internet speed tests performed using Ookla suggest that average download speeds have only marginally decreased, if at all, in locked-down regions, compared to previous levels:

However, the same data suggests that mobile networks have faced slightly larger decreases in performance, though these do not appear to be severe. For instance, contrary to contemporaneous reports, a mobile network outage that occurred in the UK is unlikely to have been caused by a COVID-related surge. 

The robustness exhibited by broadband networks is notably due to long-running efforts by ISPs (spurred by competition) to improve download speeds and latency. As one article put it:

For now, cable operators’ and telco providers’ networks are seemingly withstanding the increased demands, which is largely due to the upgrades that they’ve done over the past 10 or so years using technologies such as DOCSIS 3.1 or PON.

Pushed in part by Google Fiber’s launch back in 2012, the large cable operators and telcos, such as AT&T, Verizon, Comcast and Charter Communications, have spent years upgrading their networks to 1-Gig speeds. Prior to those upgrades, cable operators in particular struggled with faster upload speeds, and the slowdown of broadband services during peak usage times, such as after school and in the evenings, as neighborhood nodes became overwhelmed.

This is not without policy ramifications.

For a start, these developments might vindicate antitrust enforcers that allowed mergers that led to higher investments, sometimes at the expense of slight reductions in price competition. This is notably the case for so-called 4 to 3 mergers in the wireless telecommunications industry. As an in-depth literature review by ICLE scholars concludes:

Studies of investment also found that markets with three facilities-based operators had significantly higher levels of investment by individual firms.

Similarly, the COVID-19 outbreak has also cast further doubts over the appropriateness of net neutrality regulations. Indeed, an important criticism of such regulations is that they prevent ISPs from using the price mechanism to manage congestion.

It is these fears of congestion, likely unfounded (see above), that led the European Union to urge streaming companies to voluntarily reduce the quality of their products. To date, Netflix, YouTube, Amazon Prime, Apple, Facebook and Disney have complied with the EU’s request. 

This may seem like a trivial problem, but it was totally avoidable. As a result of net neutrality regulation, likely unfounded fears of congestion have forced European authorities and content providers into an awkward position, one that unnecessarily penalizes those consumers and ISPs who do not face congestion issues (conversely, it lets failing ISPs off the hook and disincentivizes further investments on their part). This is all the more unfortunate given that, as argued above, streaming services are essential to locked-down consumers. 

Critics may retort that small quality decreases hardly have any impact on consumers. But, if this is indeed the case, then content providers were using up unnecessary amounts of bandwidth before the COVID-19 outbreak (something that is less likely to occur without net neutrality obligations). And if not, then European consumers have indeed been deprived of something they valued. The shoe is thus on the other foot.

These normative considerations aside, the big point is that we can all be thankful to live in an era of high-speed internet.

4. Concluding remarks

Big Tech is rapidly emerging as one of the heroes of the COVID-19 crisis. Companies that were once on the receiving end of daily reproaches – by the press, enforcers, and scholars alike – are gaining renewed appreciation from the public. Times have changed since the early days of these companies – where consumers marvelled at the endless possibilities that their technologies offered. Today we are coming to realize how essential tech companies have become to our daily lives, and how they make society more resilient in the face of fat-tailed events, like pandemics.

The move to a contactless, digital, economy is a critical part of what makes contemporary societies better-equipped to deal with COVID-19. As this post has argued, online delivery, digital entertainment, contactless payments and high speed internet all play a critical role. 

To think that we receive some of these services for free…

Last year, Erik Brynjolfsson, Avinash Collis and Felix Eggers published a paper in PNAS, showing that consumers were willing to pay significant sums for online goods they currently receive free of charge. One can only imagine how much larger those sums would be if that same experiment were repeated today.

Even Big Tech’s critics are willing to recognize the huge debt we owe to these companies. As Steven Levy wrote, in an article titled “Has the Coronavirus Killed the Techlash?”:

Who knew the techlash was susceptible to a virus?

The pandemic does not make any of the complaints about the tech giants less valid. They are still drivers of surveillance capitalism who duck their fair share of taxes and abuse their power in the marketplace. We in the press must still cover them aggressively and skeptically. And we still need a reckoning that protects the privacy of citizens, levels the competitive playing field, and holds these giants to account. But the momentum for that reckoning doesn’t seem sustainable at a moment when, to prop up our diminished lives, we are desperately dependent on what they’ve built. And glad that they built it.

While it is still early to draw policy lessons from the outbreak, one thing seems clear: the COVID-19 pandemic provides yet further evidence that tech policymakers should be extremely careful not to kill the goose that lays the golden eggs, by promoting regulations that may thwart innovation (or the opposite).

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer, (Senior Fellow of Law & Economics, International Center for Law & Economics).]

Republican Senator Josh Hawley infamously argued that Big Tech is overrated. In his words:

My biggest critique of big tech is: what big innovation have they really given us? What is it now that in the last 15, 20 years that people who say they are the brightest minds in the country have given this country? What are their great innovations?

To Senator Hawley these questions seemed rhetorical. Big Tech’s innovations were trivial gadgets: “autoplay” and “snap streaks”, to quote him once more.

But, as any Monty Python connoisseur will tell you, rhetorical questions have a way of being … not so rhetorical. In one of Python’s most famous jokes, members of the “People’s Front of Judea” ask: “What have the Romans ever done for us?” To their own surprise, the answer turns out to be a great deal:

This post is the first in a series examining some of the many ways in which Big Tech is making Coronavirus-related lockdowns and social distancing more bearable, and how Big Tech is enabling our economies to continue functioning (albeit at a severely reduced pace) throughout the outbreak. 

Although Big Tech’s contributions are just a small part of a much wider battle, they suggest that the world is drastically better situated to deal with COVID-19 than it would have been twenty years ago – and this is in no small part thanks to Big Tech’s numerous innovations.

Of course, some will say that the world would be even better equipped to handle COVID-19 if Big Tech had only been subject to more (or less) regulation. Whether or not these critiques are correct, they are not the point of this post. For many, like Senator Hawley, it is apparently undeniable that tech does more harm than good. But, as this post suggests, that is surely not the case. And before we decide whether and how we want to regulate Big Tech in the future, we should be particularly mindful of which of its aspects seem particularly suited to dealing with the current crisis, and ensure that we don’t adopt regulations that thoughtlessly undermine them.

1. Priceless information 

One of the most important ways in which Big Tech firms have supported international efforts to combat COVID-19 has been their role as information intermediaries. 

As the title of a New York Times article put it:

When Facebook Is More Trustworthy Than the President: Social media companies are delivering reliable information in the coronavirus crisis. Why can’t they do that all the time?

The author is at least correct on the first part. Big Tech has become a cornucopia of reliable information about the virus:

  • Big Tech firms are partnering with the White House and other agencies to analyze massive COVID-19 datasets in order to help discover novel answers to questions about transmission, medical care, and other interventions. This partnership is possible thanks to the massive investments in AI infrastructure that the leading tech firms have made. 
  • Google Scholar has partnered with renowned medical journals (as well as public authorities) to guide citizens towards cutting-edge scholarship relating to COVID-19. This is a transformative resource in a world of lockdowns and overburdened healthcare providers.
  • Google has added a number of features to its main search engine – such as a “Coronavirus Knowledge Panel” and SOS alerts – in order to help users deal with the spread of the virus.
  • On Twitter, information and insights about COVID-19 compete in the market for ideas. Numerous news outlets have published lists of recommended people to follow (Fortune, Forbes). 

    Furthermore – to curb some of the unwanted effects of an unrestrained market for ideas – Twitter (and most other digital platforms) links to the websites of public authorities when users search for COVID-related hashtags.
  • This flow of information is a two-way street: Twitter, Facebook and Reddit, among others, enable citizens and experts to weigh in on the right policy approach to COVID-19. 

    Though the results are sometimes far from perfect, these exchanges may prove invaluable in critical times where usual methods of policy-making (such as hearings and conferences) are mostly off the table.
  • Perhaps most importantly, the Internet is a precious source of knowledge about how to deal with an emerging virus, as well as life under lockdown. We often take for granted how much of our lives benefit from extreme specialization. These exchanges are severely restricted under lockdown conditions. Luckily, with the internet and modern search engines (pioneered by Google), most of the world’s information is but a click away.

    For example, Facebook Groups have been employed by users of the social media platform in order to better coordinate necessary activity among community members — like giving blood — while still engaging in social distancing.

In short, search engines and social networks have been beacons of information regarding COVID-19. Their mostly bottom-up approach to knowledge generation (i.e. popular topics emerge organically) is essential in a world of extreme uncertainty. This has ultimately enabled these players to stay ahead of the curve in bringing valuable information to citizens around the world.

2. Social interactions

This is probably the most obvious way in which Big Tech is making life under lockdown more bearable for everyone. 

  • In Italy, WhatsApp messages and calls jumped by 20% following the outbreak of COVID-19. And Microsoft claims that the use of Skype jumped by 100%.
  • Younger users are turning to social networks, like TikTok, to deal with the harsh realities of the pandemic.
  • Strangers are using Facebook groups to support each other through difficult times.
  • And institutions, like the WHO, are piggybacking on this popularity to further raise awareness about COVID-19 via social media. 
  • In South Africa, health authorities even created a WhatsApp contact to answer users’ questions about the virus.
  • Most importantly, social media is a godsend for senior citizens and anyone else who may have to live in almost total isolation for the foreseeable future. For instance, nursing homes are putting communications apps, like Skype and WhatsApp, in the hands of their patients, to keep up their morale (here and here).

And with the economic effects of COVID-19 starting to gather speed, users will more than ever be grateful to receive these services free of charge. Sharing data – often very limited amounts – with a platform is an insignificant price to pay in times of economic hardship. 

3. Working & Learning

It will also be impossible to effectively fight COVID-19 if we cannot keep the economy afloat. Stock markets have already plunged by record amounts. Surely, these losses would be unfathomably worse if many of us were not lucky enough to be able to work from the safety of our own homes. And for those individuals who are unable to work from home, exposure is dramatically reduced thanks to the significant proportion of the population that can stay out of public.

Once again, we largely have Big Tech to thank for this. 

  • Downloads of Microsoft Teams and Zoom are surging on both Google and Apple’s app stores. This is hardly surprising. With much of the workforce staying at home, these video-conference applications have become essential. The increased load generated by people working online might even have caused Microsoft Teams to crash in Europe.
  • According to Microsoft, the number of Microsoft Teams meetings increased by 500 percent in China.
  • Sensing that the current crisis may last for a while, some firms have also started to conduct job interviews online; popular apps for doing so include Skype, Zoom and WhatsApp. 
  • Slack has also seen a surge in usage, as firms set themselves up to work remotely. It has started offering free training, to help firms move online.
  • Along similar lines, Google recently announced that its G Suite of office applications – which enables users to share and work on documents online – had recently passed 2 billion users.
  • Some tech firms (including Google, Microsoft and Zoom) have gone a step further and started giving away some of their enterprise productivity software, in order to help businesses move their workflows online.

And Big Tech is also helping universities, schools and parents to continue providing coursework and lectures to their students/children.

  • Zoom and Microsoft Teams have been popular choices for online learning. To facilitate the transition to online learning, Zoom has notably lifted time limits relating to the free version of its app (for schools in the most affected areas).
  • Even in the US, where the virus outbreak is currently smaller than in Europe, thousands of students are already being taught online.
  • Much of the online learning being conducted for primary school children is being done with affordable Chromebooks. And some of these Chromebooks are distributed to underserved schools through grant programs administered by Google.
  • Moreover, at the time of writing, most of the best-selling books on Amazon.com are pre-school learning books.

Finally, the advent of online storage services, such as Dropbox and Google Drive, has largely alleviated the need for physical copies of files. In turn, this enables employees to remotely access all the files they need to stay productive. While this may be convenient under normal circumstances, it becomes critical when retrieving a binder in the office is no longer an option.

4. So what has Big Tech ever done for us?

With millions of families around the world currently under forced lockdown, it is becoming increasingly evident that Big Tech’s innovations are anything but trivial. Innovations that seemed like mere conveniences only a couple of days ago are now becoming essential parts of our daily lives (or, at least, we are finally realizing how powerful they truly are). 

The fight against COVID-19 will be hard. We can at least be thankful that we have Big Tech by our side. Paraphrasing the Monty Python crew: 

Q: What has Big Tech ever done for us? 

A: Abundant, free, and easily accessible information. Precious social interactions. Online working and learning.

Q: But apart from information, social interactions, and online working (and learning); what has Big Tech ever done for us?

For the answer to this question, I invite you to stay tuned for the next post in this series.

By Berin Szoka, Geoffrey Manne & Ryan Radia

As has become customary with just about every new product announcement by Google these days, the company’s introduction on Tuesday of its new “Search, plus Your World” (SPYW) program, which aims to incorporate a user’s Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate have obscured what should be the two key prior questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content; and (2) How might antitrust restrain Google in conditioning participation in this program in the future?

The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it’s not even clear Facebook and Twitter really want to be included or what their price and conditions for doing so would be. So in short, it’s hard to see what there is to argue about yet.

Let’s consider both questions in turn.

Should Google Have Included Other Services Prior to SPYW’s Launch?

Google says it’s happy to add non-Google content to SPYW but, as Google fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:

Facebook and Twitter and other services, basically, their terms of service don’t allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service,… Of course, going forward, if others were willing to change, we’d look at designing things to see how it would work.

In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:

“To start with, we would have a conversation with them,” Schmidt said, about settling any differences.

I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google’s regular crawling, allowed by both Twitter and Facebook, was a form of “automated conversation” giving Google material it could use.

“Anything we do with companies like that, it’s always better to have a conversation,” Schmidt said.

MG Siegler calls this “doublespeak” and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn’t need permission to include public data in SPYW:

Both Twitter and Facebook have data that is available to the public. It’s data that Google crawls. It’s data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.

It’s not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the “People and Places” box, for example.

It’s certainly true that if you search Google for “site:twitter.com” or “site:facebook.com,” you’ll get billions of search results from publicly available Facebook and Twitter pages, and that Google already has some friend connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn’t available in real-time, and the private, social connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a “data firehose” so that Google could display tweets in real-time under the “personalized search” program for public social information that SPYW builds on top of. But Twitter ended the deal last May for reasons neither company has explained.

At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn’t could Google have included this data in SPYW, but rather need they have? If Google’s engineers and executives decided that the incorporation of this limited data would present an inconsistent user experience or otherwise diminish its uniquely new social search experience, it’s hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with issues surrounding the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it’s difficult to see how the exclusion of non-Google+ social media from SPYW could raise the costs of Google’s rivals, result in anticompetitive foreclosure, retard rivals’ incentives for innovation, or otherwise result in anticompetitive effects (as required to establish an antitrust claim).

Further, it’s easy to see why Google’s lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for “scraping” a different type of social content, user reviews, most notably by Yelp’s CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it’s not obvious where to draw the line between content Google has a duty to include without “making excuses” about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of “damned if you do, damned if you don’t.” It seems only natural for Google to be gun-shy about “scraping” other services’ public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a “conversation.”

And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user’s friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.

Now, if Twitter truly wants Google to feature tweets in Google’s personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter’s criticism of its SPYW launch last night with a short Google+ statement:

We are a bit surprised by Twitter’s comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content results from “personalized search” results].

Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar deal allowing Bing users to receive Twitter results. If Twitter really is playing hardball, Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform. Rather, it’s simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there’s nothing illegal about that.

Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net-win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users), and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine “to rule them all“? Or would Facebook prefer to continue developing “social search” in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web’s hottest property).

Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using various aggregators and syncing tools (Start Google+, for example) to unify social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, staunching any potential defectors from its network onto Google+ by keeping them engaged with both, with a kind of “Facebook primacy” ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users’ primary “home” as they effectively import their Facebook data into SPYW via their Google+ accounts (such as through Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forgo visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.

Amidst all the antitrust hand-wringing over SPYW and Google’s decision to “go it alone” for now, it’s worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet’s worth about the issue. It’s simply not clear that Google’s rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn’t be Google. If this all sounds speculative, it is—and that’s precisely the point. No one really knows. So, again, what’s to argue about on Day 3 of the new social search paradigm?

The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a “conversation” about greater cooperation. Google hasn’t made clear on what terms it would include content from other social media platforms. So it’s at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:

  1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
  2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
  3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.

Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the “essential facilities” doctrine by which a competitor has a duty to offer its facilities to competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is “at or near the outer boundary of § 2 liability.” Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the “essential facility” in question and the competitor seeking access. Although the assumption is neither warranted nor sufficient (circumstances change, of course, and merely “profitable” is not the same thing as “best available use of a resource”), the Court in Aspen Skiing seems to have been swayed by the view that the access in question was otherwise profitable for the company that was denying it. Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2009), it seems there is no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom’s free ebook, The Next Digital Decade).

So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would “reasonableness” be defined?

First, this issue parallels claims made more generally about Google’s supposed “search bias.” As Josh Wright has said about those claims, “[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.” Supposing (for the moment) that the second point could be established, it’s hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google’s “organic” search results—would actually “render their efforts to compete for distribution uneconomical,” which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook.

Moreover, by these same measures, it’s clear that Facebook and Twitter don’t need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it’s difficult to know from what even vaguely relevant market they could possibly be foreclosed by their absence from SPYW results. Does SPYW potentially help Google+, to Facebook’s detriment? Yes. Just as Facebook’s deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors’ expense.

After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft’s exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.

Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. “Reasonable” is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define the concept. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.