The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:
We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …
A strong, diverse, free press is critical for any successful democracy. …
Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.
This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.
The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.
Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”
In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):
In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannet Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.
The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Thom. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or digital news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.
Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.
Yun concludes by summarizing the case against this legislation (citations omitted):
Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.
Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.
Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.
There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.
Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.
This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as is the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.
In sum, though the JCPA flies a “public interest” flag, it is just another private interest bill promoted by well-organized rent seekers that would harm consumer welfare and undermine innovation.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
In this post, I scrutinize the questionable case against default settings articulated in the U.S. Justice Department’s lawsuit against Google. Default, I will argue, is no antitrust fault. The defaults at issue in the Google case differ drastically from those at issue in the Microsoft case, and in Part I, I argue the comparison is odious. In Part II, I argue that the implicit prohibition of default settings echoes, with respect to listings, the explicit prohibition of self-preferencing in search results. Both aspects – the implicit prohibition of defaults and the explicit prohibition of self-preferencing – are the two legs of a novel, integrated theory of sanctioning corporate favoritism. The emergence of such a theory goes against the very grain of capitalism. In Part III, I note that the attempt to instill some measure of corporate selflessness is at odds with competition on the merits and with the spirit of fundamental economic freedoms.
When Default is No-Fault
The recent complaint filed by the DOJ and 11 state attorneys general claims that Google has abused its dominant position in the search-engine market in several ways, notably by making Google the default search engine both in the Google Chrome web browser for Android OS and in Apple’s Safari web browser for iOS. Undoubtedly, default status confers a noticeable advantage in attracting users – that is precisely why it is deliberately sought and paid for. Nevertheless, a default setting confers no unassailable position: the advantage lasts only so long as the product remains competitive. Furthermore, the default setting can hardly be proven anticompetitive in the Google case. Indeed, the DOJ puts considerable effort in the complaint into making the Google case resemble the 20-year-old Microsoft case. Former Federal Trade Commission Chairman William Kovacic commented: “I suppose the Justice Department is telling the court, ‘You do not have to be scared of this case. You’ve done it before […] This is Microsoft part 2.’”
However, irrespective of the merits of the Microsoft case two decades ago, the Google default-setting case bears minimal resemblance to Microsoft’s default installation of Internet Explorer. First, as opposed to the Microsoft case, where default meant pre-installed software (i.e., Internet Explorer), the Google case does not involve pre-installation of the Google search engine (which is just a webpage) but a simple setting. This technical difference is significant: although “sticky”, a default setting can be outwitted with just one click. That is quite unlike pre-installed software, which can only be circumvented by uninstalling the software, then searching for and installing a new one. Moreover, because there is no certainty that consumers will actually use the Google search engine, default settings come with advertising revenue-sharing agreements between Google and device manufacturers, mobile phone carriers, competing browsers, and Apple. These mutually beneficial deals represent a significant cost with no technical exclusivity. In other words, the antitrust treatment of a tie-in between software and hardware in the Microsoft case cannot be convincingly extrapolated to the default setting of the “webware” at issue in the Google case.
Second, the Google case cannot legitimately extrapolate from the Microsoft case on another technical (and commercial) front: the Microsoft case was a classic tie-in case in which the tied product (Internet Explorer) was tied into the main product (Windows). As a traditional tie-in scenario, the tied product (Internet Explorer) was “consistently offered, promoted, and distributed […] as a stand-alone product separate from, and not as a component of, Windows […]”. In contrast, Google has never sold Google Chrome or Android OS. It offered both for free, necessarily conditioned on the Google search engine being the default setting. The very fact that Google Chrome and Android OS have never been “stand-alone” products, to use the Microsoft case’s language, together with the absence of any software installation, dramatically differentiates the features of the Google case from those of the Microsoft case. The Google case is not a traditional tie-in case: it is a case against a default setting where both products (the primary and related products) are given away for free, are not saleable, and are neither tangible nor intangible goods but simply digital services made popular by significant innovativeness and ease of use. The Microsoft “complaint challenge[d] only Microsoft’s concerted attempts to maintain its monopoly in operating systems and to achieve dominance in other markets, not by innovation and other competition on the merits, but by tie-ins.” Quite noticeably, the Google case alleges no tie-in with respect to Google Chrome or Android OS.
The complaint refers to tie-ins only in connection with Google’s apps being pre-installed on Android OS. Therefore, concerning Google’s dominance in the search-engine market, it cannot be said that the default setting of Google search in Android OS entails a tie-in. The Google search engine has no distribution channel (since it is only a website) other than downstream partnerships (i.e., vertical deals with Android device manufacturers). To sanction default settings agreed with downstream trading partners is tantamount to denying legitimate means of securing distribution channels for proprietary, zero-priced services. Taken further, this detrimental logic would mean that Apple may no longer offer its own apps on its own iPhones or, in offline markets, that a retailer may no longer offer its own (default) bags at the till since doing so excludes rivals’ bags. Products and services stripped of any adjacent products and markets (i.e., an iPhone or Android OS with no apps, or a shopkeeper with no bundled services) would dramatically increase consumers’ search costs and destroy innovators’ essential distribution channels for innovative business models, while providing few departures from the status quo so long as consumers continue to value default products.
Default should not be an antitrust fault: the Google case would make default settings a new line of antitrust injury absent any tie-in. In conclusion, as free webware, Google search’s default setting cannot be compared to the default installation in the Microsoft case, since minimal consumer stickiness entails (almost) no switching costs. As free software, Google’s default apps cannot be compared to the Microsoft case either, since pre-installation is the sine qua non of the highly valued services (Android OS) voluntarily chosen by device manufacturers. Default settings on downstream products can reasonably be considered an antitrust injury only when the dominant company is erroneously treated as a de facto essential facility – a treatment evidenced by the parallel prohibition of self-preferencing.
When Self-Preference is No Defense
Self-preferencing is to listings what the default setting is to operating systems. Both are ways to market one’s own products (i.e., alternatives to marketing toward end-consumers). While a default setting may come with both free products and financial payments (Android OS and advertising revenue sharing), self-preferencing may come with foregone advertising revenues in order to promote one’s own products. The two can be apprehended as two sides of the same coin: generating distribution channels for the ad-funded main product – Google’s search engine. Both are complex advertising channels, since both venues favor one’s own products in the contest for consumers’ attention. Absent both channels, the payments made under default agreements and the advertising revenues foregone in self-preferencing one’s own products would morph into marketing and advertising expenses for the Google search engine directed toward end-consumers.
The DOJ complaint charges that “Google’s monopoly in general search services also has given the company extraordinary power as the gateway to the internet, which [it] uses to promote its own web content and increase its profits.” This blame was at the core of the European Commission’s Google Shopping decision in 2017, which essentially held Google accountable for having, because of its ad-funded business model, promoted its own advertising products and demoted organic links in search results; the implication is that Google’s search results are no longer relevant, but are instead listed solely out of a motivation for advertising revenue.
But this argument is circular: should these search results become irrelevant, Google’s core business would become less attractive, thereby generating less advertising revenue. This self-inflicted inefficiency would deprive Google of valuable advertising streams and incentivize end-consumers to switch to search-engine rivals such as Bing, DuckDuckGo, Amazon (for product search), etc. Therefore, an ad-funded company such as Google needs to arbitrage reasonably between its advertising objectives and the efficiency of its core activities (here, zero-priced organic search services). To downplay (ad-funded) self-preferencing in order to foster (zero-priced) organic search quality would disregard the two-sidedness of the Google platform: it would harm advertisers and the viability of the ad-funded business model without providing the protection of consumers and innovation it purports to provide. The problematic and undesirable concept of “search neutrality” would mean algorithmic micro-management for the sake of an “objective” listing deemed acceptable only in the eyes of the regulator.
Furthermore, self-preferencing entails a sort of positive discrimination toward one’s own products. While discrimination has traditionally been a line of antitrust injury, self-preferencing is an “epithet” that sits outside antitrust’s remit, and for good reasons. Indeed, should self-interested (i.e., rationally minded) companies and individuals be legally compelled to self-demote their own products and services? And if only big (how big?) companies are legally compelled to self-demote their products and services, to what extent will currently exempted companies engaged in self-preferencing become liable to do so as well?
Indeed, many uncertainties, legal and economic, may spawn from the emerging prohibition of self-preferencing. More fundamentally, antitrust liability may clash with basic corporate-governance principles, under which self-interestedness both allows self-preferencing and commands such self-promotion. The limits of antitrust have been reached when two sets of legal regimes, both applicable to companies, prescribe contradictory commercial conduct. To what extent may Amazon no longer promote its own series on Amazon Video in the same manner Netflix does? To what extent can Microsoft no longer promote the Bing search engine in order to compete effectively with Google’s search engine? To what extent may Uber no longer promote UberEATS in order to compete effectively with delivery services? Not only is the business of business doing business; it is also a duty for which shareholders may hold managers to account.
The self is moral; there is a corporate morality of business self-interest. In other words, corporate selflessness runs counter to business ethics, since corporate self-interest is what yields the self’s rivalrous positioning within a competitive order. Absent corporate self-interest, self-sacrifice may generate value destruction for the sake of unjustified and ungrounded claims. The emerging prohibition of self-preferencing, like the implicit ban on setting one’s own products as defaults within one’s own proprietary products, materializes the defeat of the corporate self. Both trends coalesce to instill a legally embedded duty of self-sacrifice for the sake of competitors’ welfare, instead of the traditional consumer welfare and the dynamics of innovation, which are never unleashed absent appropriability. In conclusion, to expect firms, however big or small, to act irrespective of their identities (i.e., with corporate selflessness) would constitute an antitrust error and would be at odds with capitalism.
Toward an Integrated Theory of Disintegrating Favoritism
The Google lawsuit primarily blames Google for default settings secured via several deals. It also casts self-preferencing as anticompetitive conduct under the antitrust rules. These two charges are novel and dubious in their remits. They nevertheless represent a fundamental catalyst for the development of a new and problematic unified antitrust theory prohibiting favoritism: companies may no longer favor their own products and services, whether vertically or horizontally, irrespective of consumer benefits, irrespective of superior-efficiency arguments, and irrespective of enhanced dynamic capabilities. Indeed, via an unreasonably expanded vision of leveraging, antitrust enforcement is furtively banning companies from favoring their own products and services: greater consumer choice substitutes for consumer welfare, the protection of rivals’ opportunities to innovate and compete substitutes for the essence of competition and innovation, and limits on companies’ outreach and size substitute for attention to their capabilities and efficiencies. Leveraging becomes suspect, and corporate self-favoritism stands accused. The Google lawsuit materializes this impractical trend, which further enshrines the precautionary approach to antitrust enforcement.
Jessica Guynn, Google Justice Department antitrust lawsuit explained: this is what it means for you, USA Today, October 20, 2020.
The software (Internet Explorer) was tied to the hardware (the Windows PC).
U.S. v Google LLC, Case A:20, October 20, 2020, 3 (referring to default settings as “especially sticky” with respect to consumers’ willingness to change).
While the DOJ affirms that “being the preset default general search engine is particularly valuable because consumers rarely change the preset default”, it nevertheless provides no evidence of the breadth of such consumer stickiness. To be sure, a search engine’s default status does not necessarily lead to usage, as evidenced by the case of South Korea. In that country, despite Google’s preset default settings, the search engine Naver remains dominant in the national search market with over 70% market share. The rivalry exerted by Naver on Google demonstrates the limits of consumer stickiness to default settings. See Alesia Krush, Google vs. Naver: Why Can’t Google Dominate Search in Korea?, Link-Assistant.Com, available at: https://www.link-assistant.com/blog/google-vs-naver-why-cant-google-dominate-search-in-korea/. As the dominant search engine in Korea, Naver is itself subject to antitrust investigations over leveraging practices similar to Google’s in other countries; see Shin Ji-hye, FTC sets up special to probe Naver, Google, The Korea Herald, November 19, 2019, available at: http://www.koreaherald.com/view.php?ud=20191119000798; Kim Byung-wook, Complaint against Google to be filed with FTC, The Investor, December 14, 2020, available at: https://www.theinvestor.co.kr/view.php?ud=20201123000984 (reporting a complaint by Naver and other Korean IT companies against Google’s 30% commission policy on Google Play Store apps).
For instance, the complaint in that case acknowledged that “Microsoft designed Windows 98 so that removal of Internet Explorer by OEMs or end users is operationally more difficult than it was in Windows 95”, in U.S. v Microsoft Corp., Civil Action No 98-1232, May 18, 1998, para. 20.
The DOJ complaint itself quotes “one search competitor” who is reported to have noted consumer stickiness “despite the simplicity of changing a default setting to enable customer choice […]” (para. 47). In other words, the default search-engine setting is remarkably simple to bypass, yet consumers rarely do so, whether out of satisfaction with the Google search engine, because of search and opportunity costs, or both.
Such an outcome would frustrate traditional ways of offering computers and mobile devices, as acknowledged by the DOJ itself in the Google complaint: “new computers and new mobile devices generally come with a number of preinstalled apps and out-of-the-box setting. […] Each of these search access points can and almost always does have a preset default general search engine”, at para. 41. It also appears that preset default general search engines are common commercial practice: as the DOJ complaint itself notes when discussing Google’s rivals (Microsoft’s Bing and Amazon’s Fire OS), “Amazon preinstalled its own proprietary apps and agreed to make Microsoft’s Bing the preset default general search engine”, at para. 130. The complaint fails to identify any alternative search engine that is not a preset default, thus implicitly recognizing the practice as widespread.
To use Vesterdorf’s language; see Bo Vesterdorf, Theories of Self-Preferencing and Duty to Deal – Two Sides of the Same Coin, Competition Law & Policy Debate 1(1) 4 (2015). See also Nicolas Petit, Theories of Self-Preferencing under Article 102 TFEU: A Reply to Bo Vesterdorf, 5-7 (2015).
Case 39740 Google Search (Shopping). Here the foreclosure effects of self-preferencing are only speculated upon: “the Commission is not required to prove that the Conduct has the actual effect of decreasing traffic to competing comparison shopping services and increasing traffic to Google’s comparison-shopping service. Rather, it is sufficient for the Commission to demonstrate that the Conduct is capable of having, or likely to have, such effects.” (para. 601 of the Decision). See P. Ibáñez Colomo, Indispensability and Abuse of Dominance: From Commercial Solvents to Slovak Telekom and Google Shopping, 10 Journal of European Competition Law & Practice 532 (2019); Aurelien Portuese, When Demotion is Competition: Algorithmic Antitrust Illustrated, Concurrences, no. 2, May 2018, 25-37; Aurelien Portuese, Fine is Only One Click Away, Symposium on the Google Shopping Decision, Case Note, 3 Competition and Regulatory Law Review (2017).
For a general discussion of the law and economics of self-preferencing, see Michael A. Salinger, Self-Preferencing, Global Antitrust Institute Report, 329-368 (2020).
Pablo Ibanez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition (2020) (concluding that self-preferencing is “misleading as a legal category”).
See, for instance, Pedro Caro de Sousa, What Shall We Do About Self-Preferencing?, Competition Policy International, June 2020.
Milton Friedman, The Social Responsibility of Business is to Increase Its Profits, New York Times, September 13, 1970. This echoes Adam Smith’s famous statement that “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard for their own self-interest”, from the 1776 Wealth of Nations. In Ayn Rand’s philosophy, the only alternative to rational self-interest is to sacrifice one’s own interests either for fellowmen (altruism) or for supernatural forces (mysticism). See Ayn Rand, The Objectivist Ethics, in The Virtue of Selfishness, Signet (1964).
Aurelien Portuese, European Competition Enforcement and the Digital Economy: The Birthplace of Precautionary Antitrust, Global Antitrust Institute’s Report on the Digital Economy, 597-651.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Thomas W. Hazlett (Hugh H. Macaulay Endowed Professor of Economics, John E. Walker Department of Economics, Clemson University).]
(Ed. Note: the following is an excerpt from a piece published by the Chicago Tribune on Oct. 16, 2020. Click here to read the full piece)
No matter your Twitter feed, “vaccines have been one of the greatest public health tools to prevent disease,” as The New York Times explained in January…
Many are terrified that the Food and Drug Administration may hastily authorize injections into hundreds of millions. The FDA and drugmakers are trying to assuage such concerns with enhanced commitments to safety. Nonetheless, fears have been stoked by President Donald Trump’s infomercial-style endorsement of hydroxychloroquine as a COVID-19 remedy, his foolhardy disdain for face masks and campaign rally boasts of a preelection cure.
Yes, politics. But the opposing political push — the demand that new vaccines must be safe at all costs — is itself a dangerous meme, and the strange bedfellow of anti-vaxxer protesters.
Pulitzer Prize-winning journalist Laurie Garrett inadvertently quantifies the problem. In a Sept. 3 article in Foreign Policy, she cited the H1N1 (swine flu) episode in 2009 as “the last mad rush to vaccinate.” Warning that those shots “caused Guillain-Barré (GBS) paralysis in … 6.2 per 10 million patients who received the vaccine,” she argues that phase 3 trials for COVID-19 vaccines, typically involving just 30,000 people, provide little protection. “There’s no way … we can spot a safety hazard that’s in 1 out of a million, much less 1 out of 10 million, vaccine recipients.” The “safety side,” she told a TV interviewer, “looks insane.”
But, in fact, the “insanity” here is not found in the push for speed or in Garrett’s skepticism about Operation Warp Speed. It lies in a lack of balance between the two. An insufficiently vetted vaccine may cost innocent lives, but so will delaying a vaccine that, on net, saves them…
When promising therapies appear, reducing time to market is often worth the risk — as reflected in a raft of pre-COVID-19 policies, including the FDA’s “emergency use authorizations,” “fast track” drug approvals and “compassionate use” permissions for experimental drugs. In phase 3 trials, independent monitors observe results, and trials may be terminated when pre-specified benefits appear. Patients in the control group become eligible for the treatment instead of the placebo. Larger samples would enhance scientific knowledge, but as probabilities shift, regulators act on the reality that the ideal can become the enemy of the good.
Over at the Federalist Society’s blog, there has been an ongoing debate about what to do about Section 230. While there has long been variety in what we call conservatism in the United States, the most prominent strains have agreed on at least the following: constitutionally limited government, free markets, and prudence in policy-making. You would think all of these values would be important in the Section 230 debate. It seems, however, that some are willing to throw these principles away in pursuit of a temporary political victory over perceived “Big Tech censorship.”
Constitutionally Limited Government: Congress Shall Make No Law
What better example of objective free speech standards could we have than those First Amendment principles decided by justices appointed by an elected president and confirmed by elected members of the Senate, applying the ideals laid down by our Founders? I will take those over the preferences of brilliant computer engineers any day.
In other words, Parshall thinks Section 230 should be amended to give Big Tech the “subsidy” of immunity only if it commits to a First Amendment-like editorial regime. To defend the constitutionality of such “restrictions on Big Tech”, he points to the intermediate scrutiny standard of Turner, in which the Supreme Court upheld must-carry provisions imposed on cable operators. In particular, Parshall latches onto the “bottleneck monopoly” language from the case to argue that Big Tech is similarly situated to cable providers at the time of that decision.
Turner, however, turned more on the “special characteristics of the cable medium” that gave cable its bottleneck power than on that market power itself. As the Supreme Court stated:
When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home. Hence, simply by virtue of its ownership of the essential pathway for cable speech, a cable operator can prevent its subscribers from obtaining access to programming it chooses to exclude. A cable operator, unlike speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.
None of the Big Tech companies has a comparable ability to silence competing speakers with a flick of the switch. In fact, the relationship goes the other way on the Internet. Users can (and do) use multiple Big Tech companies’ services, as well as those of competitors that are not quite as big. Users are the ones who can switch with a click or a swipe. There is no basis for treating Big Tech companies any differently than other First Amendment speakers.
Thus, when Rachel Bovard of the Internet Accountability Project argues that the FCC should remove the ability of tech platforms to engage in viewpoint discrimination, she makes a serious error in arguing it is Section 230 that gives them the right to remove content.
Immediately upon noting that the NTIA petition seeks clarification on the relationship between (c)(1) and (c)(2), Bovard moves right to concern over the removal of content. “Unfortunately, embedded in that section [(c)(2)] is a catch-all phrase, ‘otherwise objectionable,’ that gives tech platforms discretion to censor anything that they deem ‘otherwise objectionable.’ Such broad language lends itself in practice to arbitrariness.”
In order for CDA 230 to “give tech platforms discretion to censor,” the platforms would have to lack that discretion absent CDA 230. Bovard totally misses the point of the First Amendment argument, stating:
Yet DC’s tech establishment frequently rejects this argument, choosing instead to focus on the First Amendment right of corporations to suppress whatever content they so choose, never acknowledging that these choices, when made at scale, have enormous ramifications. . . .
But this argument intentionally sidesteps the fact that Sec. 230 is not required by the First Amendment, and that its application to tech platforms privileges their First Amendment behavior in a unique way among other kinds of media corporations. Newspapers also have a First Amendment right to publish what they choose—but they are subject to defamation and libel laws for content they write, or merely publish. Media companies also make First Amendment decisions subject to a thicket of laws and regulations that do not similarly encumber tech platforms.
There is the merest kernel of truth in the lines quoted above. Newspapers are indeed subject to defamation and libel laws for what they publish. But, as should be obvious, liability for publication entails actually publishing something. And what some conservatives are concerned about is platforms’ ability to not publish something: to take down conservative content.
It might be simpler if the First Amendment treated published speech and unpublished speech the same way. But it doesn’t. One can be liable for what one speaks, writes, or publishes on behalf of others. Indeed, even with the full protection of the First Amendment, there is no question that newspapers can be held responsible for delicts caused by content they publish. But no newspaper has ever been held responsible for anything they didn’t publish.
Free Markets: Competition as the Bulwark Against Abuses, not Regulation
Conservatives have long believed in the importance of property rights, exchange, and the power of the free market to promote economic growth. Competition is seen as the protector of the consumer, not big government regulators. In the latter half of the twentieth century into the twenty-first century, conservatives have fought for capitalism over socialism, free markets over regulation, and competition over cronyism. But in the name of combating anti-conservative bias online, they are willing to throw these principles away.
The bedrock belief in the right of property owners to decide the terms on which they engage with others is fundamental to American conservatism. As stated by none other than Bovard (along with co-author Jim DeMint) in their book Conservative: Knowing What to Keep:
Capitalism is nothing more or less than the extension of individual freedom from the political and cultural realms to the economy. Just as government isn’t supposed to tell you how to pray, or what to think, or what sports teams to follow or books to read, it’s not supposed to tell you what to do with your own money and property.
Conservatives normally believe that it is the free choices of consumers and producers in the marketplace that maximize consumer welfare, rather than the choices of politicians and bureaucrats. Competition, in other words, is what protects us from abuses in the marketplace. Again, as Bovard and DeMint rightly put it:
Under the free enterprise system, money is not redistributed by a central government bureau. It goes wherever people see value. Those who create value are rewarded which then signals to the rest of the economy to up their game. It’s continuous democracy.
To get around this, both Parshall and Bovard make much of the “market dominance” of tech platforms. The essays take the position that tech platforms have nearly unassailable monopoly power which makes them unaccountable. Bovard claims that “mega-corporations have as much power as the government itself—and in some ways, more power, because theirs is unchecked and unaccountable.” Parshall even connects this to antitrust law, stating:
This brings us to another kind of innovation, one that’s hidden from the public view. It has to do with how Big Tech companies use both algorithms plus human review during content moderation. This review process has resulted in the targeting, suppression, or down-ranking of primarily conservative content. As such, this process, should it continue, should be considered a kind of suppressive “innovation” in a quasi-antitrust analysis.
How the process harms “consumer welfare” is obvious. A more competitive market could produce social media platforms designing more innovational content moderation systems that honor traditional free speech and First Amendment norms while still offering features and connectivity akin to the huge players.
Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more privacy to less, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences must come at the expense of another’s in moderation decisions.
Neither antitrust nor quasi-antitrust regimes are well-suited to dealing with the perceived harm of anti-conservative bias. However unfulfilling this is to some conservatives, competition and choice are better answers to perceived political bias than the heavy hand of government.
Prudence: Awareness of Unintended Consequences
Another bedrock principle of conservatism is to be aware of unintended consequences when making changes to long-standing laws and policies. In regulatory matters, cost-benefit analysis is employed to evaluate whether policies are improving societal outcomes. Using economic thinking to understand the likely responses to changes in regulation is fundamental to American conservatism. Or as Bovard and DeMint’s book title suggests, conservatism is about knowing what to keep.
Bovard has argued that since conservatism is a set of principles, not a dogmatic ideology, it can be in favor of fighting against the collectivism of Big Tech companies imposing their political vision upon the world. Conservatism, in this Kirkian sense, doesn’t require particular policy solutions. But this analysis misses what has worked about Section 230 and how the very tech platforms she decries have greatly benefited society. Prudence means understanding what has worked and changing it only in ways that will improve upon it.
The benefits of Section 230 immunity in promoting platforms for third-party speech are clear. It is not an overstatement to say that Section 230 contains “The Twenty-Six Words that Created the Internet.” It is important to note that Section 230 is not only available to Big Tech companies; it is available to all online platforms that host third-party speech. Any reform efforts at Section 230 must know what to keep.

In a sense, subsection (c)(1) of Section 230 does, indeed, provide greater protection for published content online than the First Amendment on its own would offer: it extends the First Amendment’s permissible scope of published content for which an online service cannot be held liable to include otherwise actionable third-party content.
But let’s be clear about the extent of this protection. It doesn’t protect anything a platform itself publishes, or even anything in which it has a significant hand in producing. Why don’t offline newspapers enjoy this “handout” (though the online versions clearly do for comments)? Because they don’t need it, and because — yes, it’s true — it comes at a cost. How much third-party content would newspapers publish without significant input from the paper itself if only they were freed from the risk of liability for such content? None? Not much? The New York Times didn’t build and sustain its reputation on the slapdash publication of unedited ramblings by random commentators. But what about classifieds? Sure. There would be more classified ads, presumably. More to the point, newspapers would exert far less oversight over the classified ads, saving themselves the expense of moderating this one, small corner of their output.
There is a cost to traditional newspapers from being denied the extended protections of Section 230. But the effect is less third-party content in the parts of the paper over which they didn’t wish to exercise the same level of editorial control. If Section 230 is a “subsidy,” as critics put it, then what it is subsidizing is the hosting of third-party speech.
The Internet would look vastly different if it were just the online reproduction of the offline world. If tech platforms were responsible for all third-party speech to the degree that newspapers are for op-eds, they would likely moderate it to the same degree, making sure there is nothing that could expose them to liability before publishing. This means there would be far less third-party speech on the Internet.
In fact, it could be argued that it is smaller platforms that would be most affected by the repeal of Section 230 immunity. Without it, it is likely that only the biggest tech platforms would have the necessary resources to dedicate to content moderation in order to avoid liability.
Proposed Section 230 reforms will likely have unintended consequences in reducing third-party speech altogether, including conservative speech. For instance, a few bills have proposed only allowing moderation for reasons defined by statute if the platform has an “objectively reasonable belief” that the speech fits under such categories. This would likely open up tech platforms to lawsuits over the meaning of “objectively reasonable belief” that could deter them from wanting to host third-party speech altogether. Similarly, lawsuits for “selective enforcement” of a tech platform’s terms of service could lead them to either host less speech or change their terms of service.
This could actually exacerbate the issue of political bias. Allegedly anti-conservative tech platforms could respond to a “good faith” requirement for enforcing their terms of service by becoming explicitly biased. If a tech platform’s terms of service state grounds that would exclude conservative speech, a requirement of “good faith” enforcement of those terms will do nothing to prevent the bias.
Conservatives would do well to return to their first principles in the Section 230 debate. The Constitution’s First Amendment, respect for free markets and property rights, and appreciation for unintended consequences in changing tech platform incentives all caution against the current proposals to condition Section 230 immunity on platforms giving up editorial discretion. Whether or not tech platforms engage in anti-conservative bias, there’s nothing conservative about abdicating these principles for the sake of political expediency.
This week the Senate will hold a hearing into potential anticompetitive conduct by Google in its display advertising business—the “stack” of products that it offers to advertisers seeking to place display ads on third-party websites. It is also widely reported that the Department of Justice is preparing a lawsuit against Google that will likely include allegations of anticompetitive behavior in this market, and is likely to be joined by a number of state attorneys general in that lawsuit. Meanwhile, several papers have been published detailing these allegations.
This aspect of digital advertising can be incredibly complex and difficult to understand. Here we explain how display advertising fits in the broader digital advertising market, describe how display advertising works, consider the main allegations against Google, and explain why Google’s critics are misguided to focus on antitrust as a solution to alleged problems in the market (even if those allegations turn out to be correct).
Display advertising in context
Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates that the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues is consistent with a growing and increasingly competitive market.
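Those three figures fit together arithmetically. The sketch below is only a back-of-the-envelope check: the $26 billion, $130 billion, and 40% inputs are the figures cited above, and the code simply compounds them over the nine-year span:

```python
# Back-of-the-envelope check of the growth figures cited above.
spend_2010 = 26e9    # US digital ad spending, 2010
spend_2019 = 130e9   # US digital ad spending, 2019
years = 9

# Average annual growth in spending: (130/26)**(1/9) - 1, roughly 20%
spend_growth = (spend_2019 / spend_2010) ** (1 / years) - 1

# A ~40% fall in the price index over the period is about -5.5% a year
price_growth = 0.60 ** (1 / years) - 1

# Quantity growth is spending growth net of the price change, roughly 27%
quantity_growth = (1 + spend_growth) / (1 + price_growth) - 1

print(f"spending {spend_growth:+.1%}/yr, prices {price_growth:+.1%}/yr, "
      f"quantity {quantity_growth:+.1%}/yr")
# -> spending +19.6%/yr, prices -5.5%/yr, quantity +26.6%/yr
```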
Display advertising on third-party websites is only a small subsection of the digital advertising market, comprising approximately 15-20% of digital advertising spending in the US. The rest of the digital advertising market is made up of ads on search results pages on sites like Google, Amazon and Kayak, on people’s Instagram and Facebook feeds, listings on sites like Zillow (for houses) or Craigslist, referral fees paid to price comparison websites for things like health insurance, audio and visual ads on services like Spotify and Hulu, and sponsored content from influencers and bloggers who will promote products to their fans.
And digital advertising itself is only one of many channels through which companies can market their products. About 53% of total advertising spending in the United States goes on digital channels, with 30% going on TV advertising and the rest on things like radio ads, billboards and other more traditional forms of advertising. A few people still even read physical newspapers and the ads they contain, although physical newspapers’ bigger money makers have traditionally been classified ads, which have been replaced by less costly and more effective internet classifieds, such as those offered by Craigslist, or targeted ads on Google Maps or Facebook.
Indeed, it should be noted that advertising itself is only part of the larger marketing market of which non-advertising marketing communication—e.g., events, sales promotion, direct marketing, telemarketing, product placement—is as big a part as is advertising (each is roughly $500bn globally); it just hasn’t been as thoroughly disrupted by the Internet yet. But it is a mistake to assume that digital advertising is not a part of this broader market. And of that $1tr global market, Internet advertising in total occupies only about 18%—and thus display advertising only about 3%.
Ad placement is only one part of the cost of digital advertising. An advertiser trying to persuade people to buy its product must also do market research and analytics to find out who its target market is and what they want. Moreover, there are the costs of designing and managing a marketing campaign and additional costs to analyze and evaluate the effectiveness of the campaign.
Nevertheless, one of the most straightforward ways to earn money from a website is to show ads to readers alongside the publisher’s content. To satisfy publishers’ demand for advertising revenues, many services have arisen to automate and simplify the placement of and payment for ad space on publishers’ websites. Google plays a large role in providing these services—what is referred to as “open display” advertising. And it is Google’s substantial role in this space that has sparked speculation and concern among antitrust watchdogs and enforcement authorities.
Before delving into the open display advertising market, a quick note about terms. In these discussions, “advertisers” are businesses that are trying to sell people stuff. Advertisers include large firms such as Best Buy and Disney and small businesses like the local plumber or financial adviser. “Publishers” are websites that carry those ads, and publish content that users want to read. Note that the term “publisher” refers to all websites regardless of the things they’re carrying: a blog about the best way to clean stains out of household appliances is a “publisher” just as much as the New York Times is.
Under this broad definition, Facebook, Instagram, and YouTube are also considered publishers. In their role as publishers, they have a common goal: to provide content that attracts users to their pages who will act on the advertising displayed. “Users” are you and me—the people who want to read publishers’ content, and to whom advertisers want to show ads. Finally, “intermediaries” are the digital businesses, like Google, that sit in between the advertisers and the publishers, allowing them to do business with each other without ever meeting or speaking.
The display advertising market
If you’re an advertiser, display advertising works like this: your company—one that sells shoes, let’s say—wants to reach a certain kind of person and tell her about the company’s shoes. These shoes are comfortable, stylish, and inexpensive. You use a tool like Google Ads (or, if it’s a big company and you want a more expansive campaign over which you have more control, Google Marketing Platform) to design and upload an ad, and tell Google about the people you want to reach—their age and location, say, and/or characterizations of their past browsing and searching habits (“interested in sports”).
Using that information, Google finds ad space on websites whose audiences match the people you want to target. This ad space is auctioned off to the highest bidder among the range of companies vying, along with your shoe company, to reach users matching the characteristics of the website’s audience. Thanks to tracking data, it doesn’t have to be only sports-related websites: as a user browses sports-related sites on the web, her browser picks up files (cookies) that tag her as someone potentially interested in sports apparel for targeting later.
So a user might look at a sports website and then later go to a recipe blog, and there receive the shoes ad on the basis of her earlier browsing. You, the shoe seller, hope that she will either click through and buy (or at least consider buying) the shoes when she sees those ads, but one of the benefits of display advertising over search advertising is that—as with TV ads or billboard ads—just seeing the ad will make her aware of the product and potentially more likely to buy it later. Advertisers thus sometimes pay on the basis of clicks, sometimes on the basis of views, and sometimes on the basis of conversion (when a consumer takes an action of some sort, such as making a purchase or filling out a form).
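To make those three payment bases concrete, here is a minimal sketch; every rate and count below is hypothetical, chosen only to show how the same campaign can be priced per view, per click, or per conversion:

```python
# Illustrative comparison of the three payment bases described above.
# All numbers here are hypothetical.
impressions = 100_000   # times the shoe ad is displayed
clicks = 500            # users who clicked through (0.5% click-through rate)
conversions = 25        # users who went on to buy the shoes

cpm = 2.50    # cost per 1,000 views (pay per view)
cpc = 0.40    # cost per click
cpa = 12.00   # cost per conversion/acquisition

print(f"pay per view:       ${impressions / 1_000 * cpm:.2f}")   # $250.00
print(f"pay per click:      ${clicks * cpc:.2f}")                # $200.00
print(f"pay per conversion: ${conversions * cpa:.2f}")           # $300.00
```

The choice among these bases mainly shifts risk: under pay-per-view the advertiser bears the risk that views never turn into sales, while under pay-per-conversion that risk moves toward the seller of the ad space.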
That’s the advertiser’s perspective. From the publisher’s perspective—the owner of that recipe blog, let’s say—you want to auction ad space off to advertisers like that shoe company. In that case, you go to an ad server—Google’s product is called AdSense—give it a little bit of information about your site, and add some HTML code to your website. The ad server gathers information about your content (e.g., by looking at keywords you use) and your readers (e.g., by looking at what websites they’ve visited in the past to make guesses about what they’ll be interested in) and places relevant ads next to and among your content. If users click, lucky you—you’ll get paid a few cents or dollars.
Apart from privacy concerns about the tracking of users, the really tricky and controversial part here concerns the way scarce advertising space is allocated. Most of the time, it’s done through auctions that happen in real time: each time a user loads a website, an auction is held in a fraction of a second to decide which advertiser gets to display an ad. The longer this process takes, the slower pages load and the more likely users are to get frustrated and go somewhere else.
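As a rough illustration of what happens in that fraction of a second, consider the toy model below. It is a simplified sketch only: a sealed-bid, second-price auction is one common design, and real exchanges add user matching, price floors, fraud checks, and much else. None of the names or numbers comes from Google’s actual systems:

```python
# Toy model of a real-time auction for a single ad impression.
# Illustrative only; not any exchange's actual protocol.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # what the advertiser will pay to show its ad once

def run_auction(bids, floor=0.0):
    """Second-price auction: the highest bidder wins and pays the larger
    of the second-highest bid and the publisher's price floor."""
    eligible = sorted([b for b in bids if b.amount >= floor],
                      key=lambda b: b.amount, reverse=True)
    if not eligible:
        return None  # nothing cleared the floor; the slot goes unfilled
    if len(eligible) == 1:
        return eligible[0].advertiser, floor
    return eligible[0].advertiser, max(eligible[1].amount, floor)

# Each time a user loads the page, bids arrive and a winner is picked.
bids = [Bid("shoe_seller", 0.80), Bid("insurer", 0.55), Bid("streamer", 0.30)]
print(run_auction(bids, floor=0.25))  # -> ('shoe_seller', 0.55)
```

The point of the sketch is only the sequencing: the auction must complete before the page finishes loading, which is why speed is a competitive dimension for these products.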
As well as the service hosting the auction, there are lots of little functions that different companies perform that make the auction and placement process smoother. Some fear that by offering a very popular product integrated end to end, Google’s “stack” of advertising products can bias auctions in favour of its own products. There’s also speculation that Google’s product is so tightly integrated and so effective at using data to match users and advertisers that it is not viable for smaller rivals to compete.
We’ll discuss this speculation and fear in more detail below. But it’s worth bearing in mind that this kind of real-time bidding for ad placement was not always the norm, and is not the only way that websites display ads to their users even today. Big advertisers and websites often deal with each other directly. As with, say, TV advertising, large advertisers often have a good idea about the people they want to reach. And big publishers (like popular news websites) often have a good idea about who their readers are. For example, big brands often want to push a message to a large number of people across different customer types as part of a broader ad campaign.
In some of these direct sales, the space is bought outright, in advance, and reserved for those advertisers. In most cases, though, direct sales are run through limited, intermediated auction services that are not open to the general market. Put together, these kinds of direct ad buys account for close to 70% of total US display advertising spending. The remainder—the stuff that’s left over after these kinds of sales have been done—is typically sold through the real-time, open display auctions described above.
Different adtech products compete on their ability to target customers effectively, to serve ads quickly (since any delay in the auction and ad placement process slows down page load times for users), and to do so inexpensively. All else equal (including the effectiveness of the ad placement), advertisers want to pay the lowest possible price to place an ad. Similarly, publishers want to receive the highest possible price to display an ad. As a result, both advertisers and publishers have a keen interest in reducing the intermediary’s “take” of the ad spending.
This is all a simplification of how the market works. There is not one single auction house for ad space—in practice, many advertisers and publishers end up having to use lots of different auctions to find the best price. As the market evolved to reach this state from the early days of direct ad buys, new functions that added efficiency to the market emerged.
In the early years of ad display auctions, individual processes in the stack were performed by numerous competing companies. Through a process of “vertical integration,” some companies, such as Google, brought these different processes under the same roof, with the expectation that integration would streamline the stack and make the selling and placement of ads more efficient and effective. This pursuit of efficiency through vertical integration has led to a more consolidated market in which Google is the largest player, offering simple, integrated ad buying products to advertisers and ad selling products to publishers.
Google is by no means the only integrated adtech service provider, however: Facebook, Amazon, Verizon, AT&T/Xandr, theTradeDesk, LumenAd, Taboola and others also provide end-to-end adtech services. But, in the market for open auction placement on third-party websites, Google is the biggest.
The cases against Google
The UK’s Competition and Markets Authority (CMA) carried out a formal study into the digital advertising market between 2019 and 2020, issuing its final report in July of this year. Although it also encompassed Google’s Search advertising business and Facebook’s display advertising business (both of which relate to ads on those companies’ “owned and operated” websites and apps), the CMA study involved the most detailed independent review of Google’s open display advertising business to date.
That study did not lead to any competition enforcement proceedings, but it did conclude that Google’s vertically integrated products create conflicts of interest that could lead it to behave in ways that do not benefit the advertisers and publishers that use its services. One example was Google’s withholding from publishers certain data that would make it easier for them to use rival ad selling products; another was its practice of setting price floors that allegedly led advertisers to pay more than they otherwise would.
Instead, the CMA recommended the creation of a “Digital Markets Unit” (DMU) that could regulate digital markets in general, along with a code of conduct for Google and Facebook (and perhaps other large tech platforms) intended to govern their dealings with smaller customers.
The CMA’s analysis is not without flaws, however. For instance, it makes big assumptions about advertisers’ dependency on display advertising, largely assuming that they would not switch to other forms of advertising if prices rose, and it is light on economics. But factually, it is the most comprehensively researched investigation into digital advertising yet published.
While the Scott Morton and Dinielli paper is extremely broad, it also suffers from a number of problems.
First, because it was released before the CMA’s final report, it is largely based on the interim report the CMA released months earlier, halfway through the market study, in December 2019. This means that several of its claims are out of date. For example, it makes much of the possibility, raised by the CMA in its interim report, that Google may take a larger cut of advertising spending than its competitors, and of claims made in another report that Google introduces “hidden” fees that increase the overall cut it takes from ad auctions.
But after further investigation, the CMA’s final report concludes that this is not the case. There, the CMA describes its analysis of all Google Ad Manager open auctions related to UK web traffic during the period of 8–14 March 2020 (billions of auctions in total), which, according to the CMA, allowed it to observe any possible “hidden” fees as well. The CMA concludes:
Our analysis found that, in transactions where both Google Ads and Ad Manager (AdX) are used, Google’s overall take rate is approximately 30% of advertisers’ spend. This is broadly in line with (or slightly lower than) our aggregate market-wide fee estimate outlined above. We also calculated the margin between the winning bid and the second highest bid in AdX for Google and non-Google DSPs, to test whether Google was systematically able to win with a lower margin over the second highest bid (which might have indicated that they were able to use their data advantage to extract additional hidden fees). We found that Google’s average winning margin was similar to that of non-Google DSPs. Overall, this evidence does not indicate that Google is currently extracting significant hidden fees. As noted below, however, it retains the ability and incentive to do so. (p. 275, emphasis added)
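To see concretely what the CMA was measuring, the sketch below computes a take rate and average winning margins from a toy set of auction records. The field names and figures are hypothetical; the CMA’s actual analysis ran over billions of logged auctions.

```python
# Hypothetical auction-log records: what the advertiser paid in, what the
# publisher received, and the top two bids in each auction.
auctions = [
    {"advertiser_spend": 10.00, "publisher_revenue": 7.10,
     "winning_bid": 7.90, "second_bid": 7.40, "winner_dsp": "google"},
    {"advertiser_spend": 8.00, "publisher_revenue": 5.50,
     "winning_bid": 6.10, "second_bid": 5.80, "winner_dsp": "other"},
]

def take_rate(rows):
    """Intermediaries' overall cut: the share of advertiser spend that
    never reaches the publisher (the CMA found roughly 30% overall)."""
    spend = sum(r["advertiser_spend"] for r in rows)
    payout = sum(r["publisher_revenue"] for r in rows)
    return (spend - payout) / spend

def avg_winning_margin(rows, dsp):
    """Average gap between the winning and second-highest bids for one
    DSP's wins. A systematically lower margin for one bidder could
    indicate it is using a data advantage to extract hidden fees."""
    margins = [r["winning_bid"] - r["second_bid"]
               for r in rows if r["winner_dsp"] == dsp]
    return sum(margins) / len(margins) if margins else float("nan")

print(f"overall take rate: {take_rate(auctions):.0%}")
print(f"Google margin:     {avg_winning_margin(auctions, 'google'):.2f}")
print(f"non-Google margin: {avg_winning_margin(auctions, 'other'):.2f}")
```

On the CMA’s data, it was the similarity of these winning margins across Google and non-Google DSPs that cut against the “hidden fees” hypothesis.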
Scott Morton and Dinielli also misquote and/or misunderstand important sections of the CMA interim report as relating to display advertising when, in fact, they relate to search. For example, Scott Morton and Dinielli write that the “CMA concluded that Google has nearly insurmountable advantages in access to location data, due to the location information [uniquely available to it from other sources].” (p. 15). The CMA never makes any claim of “insurmountable advantage,” however. Rather, to support the claim, Scott Morton and Dinielli cite to a portion of the CMA interim report recounting a suggestion made by Microsoft regarding the “critical” value of location data in providing relevant advertising.
But that portion of the report, as well as the suggestion made by Microsoft, is about search advertising. While location data may also be valuable for display advertising, it is not clear that the GPS-level data so valuable in serving mobile search ads (for a nearby cafe or restaurant, say) is particularly useful for display advertising, which may be targeted just as well with less granular, city- or county-level location data readily available from a number of sources. In any case, Scott Morton and Dinielli are simply wrong to use a suggestion offered by Microsoft about search advertising as evidence of a conclusion supposedly drawn by the CMA about display advertising.
Scott Morton and Dinielli also word their own judgments about Google’s conduct in confusing ways that could be misinterpreted as conclusions reached by the CMA:
The CMA reports that Google has implemented an anticompetitive sales strategy on the publisher ad server end of the intermediation chain. Specifically, after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. (p. 20)
In fact, the CMA does not conclude that Google’s lowering of its prices was an “anticompetitive sales strategy”—it does not use those words at all—and what Scott Morton and Dinielli are referring to is a claim by a rival ad server business, Smart, that Google’s price cuts after acquiring DoubleClick expanded Google’s market share. Apart from the misleading wording, it is unclear why a competition authority should consider it “anticompetitive” for prices to fall and stay low, particularly when, as Smart reported to the CMA, its own response was to enhance its offering.
The case that remains
Stripping away the elements of Scott Morton and Dinielli’s case that seem unsubstantiated by a more careful reading of the CMA reports, and with the benefit of the findings in the CMA’s final report, we are left with a case that argues that Google self-preferences to an unreasonable extent. On this view, Google’s display advertising product succeeds as it does only because of Google’s unique ability to draw advantages from other products that have little to do with display advertising. Because of this self-preferencing, the argument goes, innovative new entrants cannot compete on an equal footing, and the market loses out on incremental competition because of the advantages Google gets from being the world’s biggest search company, owning YouTube, running Google Maps and Google Cloud, and so on.
The most significant examples of this are Google’s use of data from other products—like location data from Maps or viewing history from YouTube—to target ads more effectively; its ability to enable advertisers placing search ads to easily place display ads through the same interface; its introduction of faster and more efficient auction processes that sidestep the existing tools developed by other third-party ad exchanges; and its design of its own tool (“open bidding”) for aggregating auction bids for advertising space to compete with (rather than incorporate) an alternative tool (“header bidding”) that is arguably faster, but costs more money to use.
These allegations require detailed consideration, and in a future paper we will attempt to assess them in detail. But in thinking about them now, it may be useful to consider the remedies that could be imposed to address them, assuming they do diminish rivals’ ability to compete with Google: that is, what interventions could we make to help the market work better for advertisers, publishers, and users?
We can think of remedies as falling into two broad buckets: remedies that stop Google from doing things that improve the quality of its own offerings, thus making it harder for others to keep up; and remedies that require it to help rivals improve their products in ways otherwise accessible only to Google (e.g., by making Google’s products interoperable with third-party services) without inherently diminishing the quality of Google’s own products.
The first camp of these, what we might call “status quo minus,” includes rules banning Google from using data from its other products or offering single order forms for advertisers, or, in the extreme, a structural remedy that “breaks up” Google by either forcing it to sell off its display ad business altogether or to sell off elements of it.
What is striking about these kinds of interventions is that all of them “work” by making Google worse for those that use it. Restrictions on Google’s ability to use data from other products, for example, will make its service more expensive and less effective for those who use it. Ads will be less well-targeted and therefore less effective. This will lead to lower bids from advertisers. Lower ad prices will be transmitted through the auction process to produce lower payments for publishers. Reduced publisher revenues will mean some content providers exit. Users will thus be confronted with less available content and ads that are less relevant to them and thus, presumably, more annoying. In other words: No one will be better off, and most likely everyone will be worse off.
The reason a “single order form” helps Google is that it is useful to advertisers, the same way it’s useful to be able to buy all your groceries at one store instead of lots of different ones. Similarly, vertical integration in the “ad stack” allows for a faster, cheaper, and simpler product for users on all sides of the market. A different kind of integration that has been criticized by others, where third-party intermediaries can bid more quickly if they host on Google Cloud, benefits publishers and users because it speeds up auction time, allowing websites to load faster. So does Google’s unified alternative to “header bidding,” giving a speed boost that is apparently valuable enough to publishers that they will pay for it.
So who would benefit from stopping Google from doing these things, or even forcing Google to sell its operations in this area? Not advertisers or publishers. Maybe Google’s rival ad intermediaries would; presumably, artificially hamstringing Google’s products would make it easier for them to compete with Google. But if so, it’s difficult to see how this would be an overall improvement. It is even harder to see how this would improve the competitive process—the very goal of antitrust. Rather, any increase in the competitiveness of rivals would result not from making their products better, but from making Google’s product worse. That is a weakening of competition, not its promotion.
On the other hand, interventions that aim to make Google’s products more interoperable at least do not fall prey to this problem. Such “status quo plus” interventions would aim to take the benefits of Google’s products and innovations and allow more companies to use them to improve their own competing products. Not surprisingly, such interventions would be more in line with the conclusions the CMA came to than the divestitures and operating restrictions proposed by Scott Morton and Dinielli, as well as (reportedly) state attorneys general considering a case against Google.
But mandated interoperability raises a host of different concerns: extensive and uncertain rulemaking, ongoing regulatory oversight, and, likely, price controls, all of which would limit Google’s ability to experiment with and improve its products. The history of such mandated duties to deal or compulsory licenses is a troubled one, at best. But even if, for the sake of argument, we concluded that these kinds of remedies were desirable, they are difficult to impose via an antitrust lawsuit of the kind that the Department of Justice is expected to launch. Most importantly, if the conclusion of Google’s critics is that Google’s main offense is offering a product that is just too good to compete with without regulating it like a utility, with all the costs to innovation that that would entail, maybe we ought to think twice about whether an antitrust intervention is really worth it at all.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Noah Phillips (Commissioner of the U.S. Federal Trade Commission).]
Never let a crisis go to waste, or so they say. In the past two weeks, some of the same people who sought to stop mergers and acquisitions during the bull market have taken the opportunity of the COVID-19 pandemic and the new bear market to call for a ban on M&A. On Friday, April 24th, Rep. David Cicilline proposed that a merger ban be included in the next COVID-19-related congressional legislative package. By Monday, Senator Elizabeth Warren and Rep. Alexandria Ocasio-Cortez, warning of “predatory” M&A and private equity “vultures,” had teamed up on a similar proposal.
The theory that the pandemic requires the government to shut down M&A goes something like this: the antitrust agencies are overwhelmed and cannot do the job of reviewing mergers under the Hart-Scott-Rodino (HSR) Act, which gives the U.S. antitrust agencies advance notice of certain transactions and 30 days to decide whether to seek more information about them. That state of affairs will, in turn, invite a rush of companies looking to merge with minimal oversight, exacerbating the problem by flooding the premerger notification office (PNO) with new filings. Another version holds, along similar lines, that the precipitous decline in the market will precipitate a merger “wave” in which “dominant corporations” and “private equity vultures” will gobble up defenseless small businesses. Net result: anticompetitive transactions go unnoticed and unchallenged. That’s the theory, at least as it has been explained to me. The facts are different.
First, while the restrictions related to COVID-19 require serious adjustments at the antitrust agencies, just as they do at workplaces across the country (we’re working from home, dealing with remote technology, and handling kids just like everyone else), merger review continues. Since we started teleworking, the FTC has, among other things, challenged Altria’s $12.8 billion investment in JUUL’s e-cigarette business, and resolved competitive concerns with both GE’s sale of its biopharmaceutical business to Danaher and Ossur’s acquisition of a competing prosthetic limbs manufacturer, College Park. With our colleagues at the Antitrust Division of the Department of Justice, we announced a new e-filing system for HSR filings and temporarily suspended granting early termination. We sought voluntary extensions from companies. But, in less than two weeks, we were able to resume early termination—back to “new normal,” at least. I anticipate there may be additional challenges, and the FTC will assess constraints in real time to deal with further disruptions. But we have not sacrificed the thoroughness of our investigations; and we will not.
Second, there is no evidence of a merger “wave”, or that the PNO is overwhelmed with HSR filings. To the contrary, according to Bloomberg, monthly M&A volume hit rock bottom in April – the lowest since 2004. As of last week, the PNO estimates a nearly 60% reduction in HSR reported transactions during the past month, compared to the historical average. Press reports indicate that M&A activity is down dramatically because of the crisis. Xerox recently announced it was suspending its hostile bid for Hewlett-Packard ($30 billion); private equity firm Sycamore Partners announced it is walking away from its takeover of Victoria’s Secret ($525 million); and Boeing announced it is backing out of its merger with Embraer ($4.2 billion) — just a few examples of companies, large corporations and private equity firms alike, stopping M&A on their own. (The market is funny like that.)
Slowed M&A during a global pandemic and economic crisis is exactly what you would expect. The financial uncertainty facing companies lowers shareholders’ and boards’ confidence about diving into a new acquisition or sale. Financing is harder to secure. Due diligence is postponed. Management meetings are cancelled. Agreeing on price is another big challenge: the volatility in stock prices makes valuation difficult, and lessens the value of equity used to pay for acquisitions. Cash is needed elsewhere, such as to pay workers and keep operations running. Lack of access to factories and other assets as a result of travel restrictions and stay-at-home orders similarly makes valuation harder. Management can’t even get in a room to negotiate and hammer out the deal because of social distancing (driving a hard bargain on Zoom may not be the same).
Experience bears out those expectations. Consider our last bear market, the financial crisis that took place over a decade ago. Publicly available FTC data show the number of HSR reported transactions dropped off a cliff. During fiscal year 2009, the height of the crisis, HSR reported transactions were down nearly 70% compared to just two years earlier, in fiscal year 2007. Not surprising.
Nor should it be surprising that the current crisis, with all its uncertainty and novelty, appears itself to be slowing down M&A.
So, the antitrust agencies are continuing merger review, and adjusting quickly to the new normal. M&A activity is down, dramatically, on its own. That makes the pandemic an odd excuse to stop M&A. Maybe the concern wasn’t really about the pandemic in the first place? The difference in perspective may depend on one’s general view of the value of M&A. If you think mergers are mostly (or all) bad, and you discount the importance of the market for corporate control, the cost to stopping them all is low. If you don’t, the cost is high.
As a general matter, decades of research and experience tell us that the vast majority of mergers are either pro-competitive or competitively neutral. But M&A, even dramatically reduced, also has an important role to play in a moment of economic adjustment. It helps allocate assets efficiently, for example by giving those with the wherewithal to operate resources (think companies, or plants) an opportunity that others may be unable to utilize. Consumers benefit if a merger leads to the delivery of products or services that one company could not efficiently provide on its own, and from the innovation and lower prices that better management and integration can provide. Workers benefit, too, as they remain employed by going concerns. It serves no one, competition included, to let companies die that might otherwise live.
M&A is not the only way in which market forces can help. The antitrust agencies have always recognized pro-competitive benefits to collaboration between competitors during times of crisis. In 2005, after hurricanes Katrina and Rita, we implemented an expedited five-day review of joint projects between competitors aimed at relief and construction. In 2017, after hurricanes Harvey and Irma, we advised that hospitals could combine resources to meet the health care needs of affected communities and companies could combine distribution networks to ensure goods and services were available. Most recently, in response to the current COVID-19 emergency, we announced an expedited review process for joint ventures. Collaboration can be concerning, so we’re reviewing; but it can also help.
Our nation is going through an unprecedented national crisis, with a horrible economic component that is putting tens of millions out of work and causing a great deal of suffering. Now is a time of great uncertainty, tragedy, and loss; but also of continued hope and solidarity. While merger review is not the top-of-mind issue for many—and it shouldn’t be—American consumers stand to gain from pro-competitive mergers, during and after the current crisis. Those benefits would be wiped out with a draconian ‘no mergers’ policy during the COVID-19 emergency. Might there be anticompetitive merger activity? Of course, which is why FTC staff are working hard to vet potentially anticompetitive mergers and prevent harm to consumers. Let’s let them keep doing their jobs.
 The views expressed in this blog post are my own and do not necessarily reflect the views of the Federal Trade Commission or any other commissioner. An abbreviated version of this essay was previously published in the New York Times’ DealBook newsletter. Noah Phillips, The case against banning mergers, N.Y. Times, Apr. 27, 2020, available at https://www.nytimes.com/2020/04/27/business/dealbook/small-business-ppp-loans.html.
 The “Pandemic Anti-Monopoly Act” proposes a merger moratorium on (1) firms with over $100 million in revenue or market capitalization of over $100 million; (2) PE firms and hedge funds (or entities that are majority-owned by them); (3) businesses that have an exclusive patent on products related to the crisis, such as personal protective equipment; and (4) all HSR reportable transactions.
 Hart-Scott-Rodino Antitrust Improvements Act of 1976, 15 U.S.C. § 18a. The antitrust agencies can challenge transactions after they happen, but they are easier to stop beforehand; and Congress designed HSR to give us an opportunity to do so.
 Whatever your view, the point is that the COVID-19 crisis doesn’t make sense as a justification for banning M&A. If ban proponents oppose M&A generally, they should come out and say that. And they should level with the public about just how much they propose to ban. The specifics of the proposals are beyond the scope of this essay, but it’s worth noting that the “large companies [gobbling] up . . . small businesses” of which Sen. Warren warns include any firm with $100 million in annual revenue and anyone making a transaction reportable under HSR. $100 million seems like a lot of money to many of us, but the Ohio State University National Center for the Middle Market defines a mid-sized company as having annual revenues between $10 million and $1 billion. Many if not most of the transactions that would be banned look nothing like the kind of acquisitions ban proponents are describing.
 As far back as the 1980s, the Horizontal Merger Guidelines reflected this idea, stating: “While challenging competitively harmful mergers, the Department [of Justice Antitrust Division] seeks to avoid unnecessary interference with the larger universe of mergers that are either competitively beneficial or neutral.” Horizontal Merger Guidelines (1982); see also Hovenkamp, Appraising Merger Efficiencies, 24 Geo. Mason L. Rev. 703, 704 (2017) (“we tolerate most mergers because of a background, highly generalized belief that most—or at least many—do produce cost savings or improvements in products, services, or distribution”); Andrade, Mitchell & Stafford, New Evidence and Perspectives on Mergers, 15 J. ECON. PERSPECTIVES 103, 117 (2001) (“We are inclined to defend the traditional view that mergers improve efficiency and that the gains to shareholders at merger announcement accurately reflect improved expectations of future cash flow performance.”).
 Jointly with our colleagues at the Antitrust Division of the Department of Justice, we issued a statement last week affirming our commitment to enforcing the antitrust laws against those who seek to exploit the pandemic to engage in anticompetitive conduct in labor markets.
 The legal test to make such a showing for an anti-competitive transaction is high. Known as the “failing firm defense”, it is available only to firms that can demonstrate their fundamental inability to compete effectively in the future. The Horizontal Merger Guidelines set forth three elements to establish the defense: (1) the allegedly failing firm would be unable to meet its financial obligations in the near future; (2) it would not be able to reorganize successfully under Chapter 11; and (3) it has made unsuccessful good-faith efforts to elicit reasonable alternative offers that would keep its tangible and intangible assets in the relevant market and pose a less severe danger to competition than the actual merger. Horizontal Merger Guidelines § 11; see also Citizen Publ’g v. United States, 394 U.S. 131, 137-38 (1969). The proponent of the failing firm defense bears the burden to prove each element, and failure to prove a single element is fatal. In re Otto Bock, FTC No. 171-0231, Docket No. 9378 Commission Opinion (Nov. 2019) at 43; see also Citizen Publ’g, 394 U.S. at 138-39.
Writing in the New York Times, Nellie Bowles describes her newfound embrace of screens:

Before the coronavirus, there was something I used to worry about. It was called screen time. Perhaps you remember it.
I thought about it. I wrote about it. A lot. I would try different digital detoxes as if they were fad diets, each working for a week or two before I’d be back on that smooth glowing glass.
Now I have thrown off the shackles of screen-time guilt. My television is on. My computer is open. My phone is unlocked, glittering. I want to be covered in screens. If I had a virtual reality headset nearby, I would strap it on.
Bowles isn’t alone. The Washington Post recently documented how social distancing has caused people to “rethink of one of the great villains of modern technology: screens.” Matthew Yglesias of Vox has been critical of tech in the past as well, but recently admitted that these tools are “making our lives much better.” Cal Newport might have called for Twitter to be shut down, but now thinks the service can be useful. These anecdotes speak to a larger trend. According to one national poll, some 88 percent of Americans now have a better appreciation for technology since this pandemic has forced them to rely upon it.
Most psychologists steer clear of the term addiction, which implies that a person engages in hazardous use, shows tolerance, and neglects social roles. Because social media, gaming, and cell phone use don’t meet this threshold, the profession instead describes those who experience negative impacts as engaging in problematic use of the tech, a label that applies only to a small minority. According to one estimate, for example, only half of a percent of gamers show patterns of problematic use.
Even though tech use doesn’t meet the criteria for addiction, the term finds purchase in policy discussions and media outlets because it implies a healthier norm. Computer games have prosocial benefits, yet it is common to hear that the activity is no match for going outside to play. The same kind of argument exists for social media and phone use: face-to-face communication is presumed preferable to tech-enabled communication.
But the coronavirus has inverted the normal conditions. Social distancing doesn’t allow us to connect in person or play outside with friends. Faced with no alternative, people have embraced technology. Videoconferencing is up, as is social media use. This new norm has brought with it a needed rethink of critiques of tech. Even before this moment, however, the research on tech effects had its problems.
To begin, even though the subject has been researched extensively, screen time and social media use haven’t been clearly shown to cause harm. Earlier this year, psychologists Candice Odgers and Michaeline Jensen conducted a massive literature review and summarized the research as “a mix of often conflicting small positive, negative and null associations.” The researchers also point out that studies finding a negative relationship between well-being and tech use tend to be correlational, not causal, and thus are “unlikely to be of clinical or practical significance” to parents or therapists.
Through no fault of their own, researchers tend to focus on a limited number of relationships when it comes to tech use. But professors Amy Orben and Andrew Przybylski were able to sidestep these problems by getting computers to test every theoretically defensible hypothesis. In a writeup appropriately titled “Beyond Cherry-Picking,” the duo explained why this method is important to policy makers:
Although statistical significance is often used as an indicator that findings are practically significant, the paper moves beyond this surrogate to put its findings in a real-world context. In one dataset, for example, the negative effect of wearing glasses on adolescent well-being is significantly higher than that of social media use. Yet policymakers are currently not contemplating pumping billions into interventions that aim to decrease the use of glasses.
Their academic paper throws cold water on the screen time and tech use debate. Since social media explains only 0.4% of the variation in well-being, much greater welfare gains can be made by concentrating on other policy issues. Regularly eating breakfast, getting enough sleep, and avoiding marijuana use, for example, play much larger roles in the well-being of adolescents; social media is only a tiny portion of what determines well-being.
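For readers curious what testing “every theoretically defensible hypothesis” looks like, the toy sketch below runs the same regression of well-being on technology use under every combination of a few analyst choices, using simulated data in which tech use matters little and sleep and breakfast matter a lot. It illustrates only the specification-curve idea; Orben and Przybylski’s actual analysis spans many thousands of specifications across large adolescent datasets.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 2_000

# Simulated adolescents: tech use barely relates to well-being, while
# sleep and breakfast matter much more (mirroring the paper's point).
sleep = rng.normal(7, 1, n)
breakfast = rng.binomial(1, 0.6, n)
tech_use = rng.normal(3, 1.5, n)
wellbeing = 0.5 * sleep + 0.8 * breakfast - 0.03 * tech_use + rng.normal(0, 1, n)

# Analyst degrees of freedom: which controls to include, and whether to
# cap heavy tech users -- two of many defensible choices.
control_sets = [[], ["sleep"], ["breakfast"], ["sleep", "breakfast"]]
cap_outliers = [False, True]
columns = {"sleep": sleep, "breakfast": breakfast}

effects = []
for controls, cap in itertools.product(control_sets, cap_outliers):
    x = np.clip(tech_use, None, 6) if cap else tech_use
    X = np.column_stack([np.ones(n), x] + [columns[c] for c in controls])
    beta, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
    effects.append(beta[1])  # coefficient on tech use in this specification

effects = np.array(effects)
print(f"{len(effects)} specifications; tech-use effect ranges "
      f"{effects.min():.3f} to {effects.max():.3f}, "
      f"median {np.median(effects):.3f}")
```

Reporting the entire distribution of estimates, rather than one hand-picked specification, is what guards against the cherry-picking the paper’s title refers to.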
Second, most social media research relies on self-reporting methods, which are systematically biased and often unreliable. Communication professor Michael Scharkow, for example, compared self-reports of Internet use with computer log files, which show everything a computer has done and when, and found that “survey data are only moderately correlated with log file data.” A quartet of psychology professors in the UK discovered that self-reported smartphone use and social media addiction scales face similar problems: they don’t correctly capture reality. Patrick Markey, Professor and Director of the IR Laboratory at Villanova University, summarized the work: “the fear of smartphones and social media was built on a castle made of sand.”
Expert bodies have been changing their tune as well. The American Academy of Pediatrics took a hardline stance for years, preaching digital abstinence. But the organization has since backpedaled: it now says that screens are fine in moderation and suggests that parents and children work together to create boundaries.
Once this pandemic is behind us, policymakers and experts should reconsider the screen time debate. We need to move from loaded terms like addiction and embrace a more realistic model of the world. The truth is that everyone’s relationship with technology is complicated. Instead of paternalistic legislation, leaders should place the onus on parents and individuals to figure out what is right for them.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Dirk Auer (Senior Fellow of Law & Economics, ICLE)]
Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).
Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.
The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:
And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.
That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.
* * *
Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.
The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient.
Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies:
Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.
Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).
Unsurprisingly, politicians were also quick to jump on the bandwagon. David Cicilline, the powerful chairman of the House Antitrust Subcommittee, opined that:
The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.
These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?
Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.
What is a “killer acquisition”…?
Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.
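The logic can be put in simple notation (ours, not the cited paper’s). Let $\pi^M$ denote the incumbent’s profit as a monopolist, and let $\pi^D_I$ and $\pi^D_E$ denote the incumbent’s and entrant’s respective profits if entry occurs. Then:

\[
\underbrace{\pi^M - \pi^D_I}_{\text{incumbent's maximum willingness to pay}}
\;>\;
\underbrace{\pi^D_E}_{\text{entrant's standalone value}}
\quad\Longleftrightarrow\quad
\pi^M \;>\; \pi^D_I + \pi^D_E .
\]

Because a monopolist’s profit generally exceeds the sum of duopolists’ profits, the incumbent can outbid any buyer who would keep the entrant’s product alive and still come out ahead by shutting it down. The conditions that follow describe when this stylized logic actually applies.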
For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:
“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.
Moreover, the authors add that:
Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur.
Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:
If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.
…And what isn’t a killer acquisition?
What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because the acquirer has superior know-how or a superior governance structure that enables it to realize greater returns and productivity than its target could. In the case of a so-called killer acquisition, this means shutting down a negative-ROI project and redeploying resources to other projects or other uses — including those that may have no direct relation to the discontinued project.
Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.
In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.
As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.
The realities of the ventilator market and their implications for the “killer acquisition” story
1. The mechanical ventilator market is highly competitive
As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive.
As one industry report puts it: “Medical ventilators market competition is intense.”
The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position.
Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, the authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share had been particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.
2. The value of the merger was too small
A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the deal’s modest value: $103 million.
Indeed, if it had been clear that Newport was about to revolutionize the ventilator market, Covidien would likely have had to pay significantly more than $103 million to acquire it.
As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
As one analysis of startup acquisitions puts it:

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.
That inference works, however, only if the target firm’s shareholders agree that the share price properly reflects merely “normal” expected profits, rather than indicating that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. A low acquisition price relative to the size of the market therefore tends to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovation.
We can apply this reasoning to Covidien’s acquisition of Newport:
Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out).
For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”
If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market).
The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price had seemed credible at the time, then Covidien — as well as Newport’s shareholders — would have known that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government but to purchasers around the world at an irresistibly attractive, and profitable, price. As the Times itself reports:
Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.
“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”
If achievable, Newport thus stood to earn a substantial share of the profits in a multi-billion dollar industry.
Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market and had not yet received FDA approval. Nevertheless, if the Times’ numbers had seemed credible at the time, Covidien would surely have had to offer significantly more than $103 million to induce Newport’s shareholders to part with their shares.
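To make “applying a probability” concrete, one can treat the purchase price as roughly the probability of success times the present value of the profits a successful Aura would have generated. The market size comes from the figures above; the share, margin, and discount rate are purely illustrative assumptions on our part:

\[
\$103\text{M} \;\approx\; p \times PV(\text{Aura profits}), \qquad
PV \;\approx\; \frac{\overbrace{10\%}^{\text{share}} \times \$2.7\text{B} \times \overbrace{20\%}^{\text{margin}}}{\underbrace{10\%}_{\text{discount rate}}} = \$540\text{M}
\quad\Longrightarrow\quad
p \;\approx\; \frac{103}{540} \;\approx\; 0.19 .
\]

Even on these generous assumptions, the implied probability of success is modest; and since the $103 million also bought Newport’s existing ventilator lines, the probability the market assigned to the Aura alone must have been lower still.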
Given the low valuation, however, as well as the fact that Newport produced other ventilators (and continues to do so to this day), there is no escaping the conclusion that everyone involved viewed Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success.
Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.
3. Lessons from Covidien’s ventilator product decisions
The killer acquisition claims are further weakened by at least four other important pieces of information:
Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators;
There was little overlap between Covidien’s and Newport’s ventilators — or, at the very least, they were highly differentiated;
Covidien appears to have discontinued production of its own portable ventilator in 2014; and
The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.
Covidien initially continued to develop Newport’s Aura ventilator
For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.
However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.
It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted).
Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.
Covidien continued to develop and sell Newport’s other ventilators
Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.
If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them?
At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.
There was little overlap between Covidien’s and Newport’s ventilators
Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators.
This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:
Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.
A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans to which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).
In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much more portable ventilators suitable for home use (notably the Aura, HT50, and HT70 lines).
Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:
[D]esigned to provide support to patients who do not require complex critical care ventilators.
A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.
This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.
The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:
This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.
Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.
In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.
Covidien appears to have discontinued production of its own portable ventilator in 2014
Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.
The product is listed in the company’s 2011, 2012 and 2013 annual reports:
Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….
Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.
(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).
Putting the Newport deal in context
Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices.
That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one.
When Covidien was itself purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the companies’ products, with Covidien focusing predominantly on in-hospital “diagnostic, surgical, and critical care” products and Medtronic on post-acute care.
Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces
So why was the Aura ventilator discontinued?
Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems.
The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where
mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.
The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360, which could be used in pediatric care (for newborns smaller than 5kg) but was not intended for home care use (or the extreme scenarios envisioned by the US government); and the more portable HT70, which could be used in home care environments, but not for newborns.
Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:
The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).
In other words, the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.
And the US Government RFP confirms that this was indeed an important requirement:
The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features:
• Flexibility to accommodate a wide patient population range from neonate to adult.
Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:
Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver — both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.
As Jason Crawford, an engineer and tech industry commentator, put it:
Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.
The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:
• Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
• Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
• Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here).
Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly.
In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition.
Ending the Aura project might have been an efficient outcome
As suggested above, moreover, it is entirely possible that Covidien was better able than Newport to recognize the Aura project’s poor prospects, and better organized to make the requisite decision to abandon it.
Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion — stood to gain from fulfilling a substantial US government contract could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.
While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage the target’s assets more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965):
Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.
Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.
Indeed, Florian Ederer, a co-author of the academic work that popularized the killer acquisition theory, appears to have reached much the same conclusion when discussing this very deal. As one report put it:
“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.
In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.
Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry.
And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.
“Blackboard economics” because critics assessed the merger against a theoretical construct rather than real-world market conditions, exemplifying Ronald Coase’s famous complaint that, too often, “[w]hat is studied is a system which lives in the minds of economists but not on earth.”
Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before voicing their recriminations.
The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence.
Finally, what the New York Times piece does offer is a chilling tale of government failure.
The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US.
The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit.
And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Geoffrey A. Manne, (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics).]
There has been much (admittedly important) discussion of the economic woes of mass quarantine to thwart the spread and “flatten the curve” of the virus and its health burdens — as well as some extremely interesting discussion of the long-term health woes of quarantine and the resulting economic downturn: see, e.g., previous work by Christopher Ruhm suggesting mortality rates may improve during economic downturns, and this thread on how that might play out differently in the current health crisis.
But there is perhaps insufficient attention being paid to the more immediate problem of medical resource scarcity to treat large, localized populations of acutely sick people — something that will remain a problem for some time in places like New York, no matter how successful we are at flattening the curve.
Yet the fact that we may have failed to prepare adequately for the current emergency does not mean that we can’t improve our ability to respond to it — and build up our ability to respond to subsequent emergencies, both in terms of future, localized outbreaks of COVID-19 and other medical emergencies more broadly.
In what follows I lay out the outlines of a proposal for an OPTN (Organ Procurement and Transplantation Network) analogue for allocating emergency medical resources. In order to make the idea more concrete (and because no doubt there is a limit to the types of medical resources for which such a program would be useful or necessary), let’s call it the VPAN — Ventilator Procurement and Allocation Network.
As quickly as possible in order to address the current crisis — and certainly with enough speed to address the next one — we should develop a program to collect relevant data and to deploy medical resources where they are most needed, using such data, wherever possible, to enable deployment before shortages become the enormous problem they are today.
Data and information are important tools for mitigating emergencies
Hal’s post, especially in combination with Julian’s, offers a really useful suggestion for using modern information technology to help mitigate one of the biggest problems of the current crisis: The ability to return to economic activity (and a semblance of normalcy) as quickly as possible.
What I like most about his idea (and, again, Julian’s) is its incremental approach: We don’t have to wait until it’s safe for everyone to come outside in order for some people to do so. And, properly collected, assessed, and deployed, information is a key part of making that possible for more and more people every day.
Here I want to build on Hal’s idea to suggest another — perhaps even more immediately crucial — use of data to alleviate the COVID-19 crisis: The allocation of scarce medical resources.
In the current crisis, the “what” of this data is apparent: it is the testing data described by Julian in his post, and implemented in digital form by Hal in his. Thus, whereas Hal’s proposal contemplates using this data solely to allow proprietors (public transportation, restaurants, etc.) to admit entry to users, my proposal contemplates something more expansive: the provision of Hal’s test-verification vendors’ data to a centralized database in order to use it to assess current medical resource needs and to predict future needs.
The apparent ventilator availability crisis
As I have learned at great length from a friend whose spouse is an ICU doctor on the front lines, the current ventilator scarcity in New York City is worrisome (from a personal email, edited slightly for clarity):
When doctors talk about overwhelming a medical system, and talk about making life/death decisions, often they are talking about ventilators. A ventilator costs somewhere between $25K to $50K. Not cheap, but not crazy expensive. Most of the time these go unused, so hospitals have not stocked up on them, even in first-rate medical systems. Certainly not in the US, where equipment has to get used or the hospital does not get reimbursed for the purchase.
With a bad case of this virus you can put somebody — the sickest of the sickest — on one of those for three days and many of them don’t die. That frames a brutal capacity issue in a local area. And that is what has happened in Italy. They did not have enough ventilators in specific cities where the cases spiked. The mortality rates were much higher solely due to lack of these machines. Doctors had to choose who got on the machine and who did not. When you read these stories about a choice of life and death, that could be one reason for it.
Now the brutal part: This is what NYC might face soon. Faster than expected, by the way. Maybe they will ship patients to hospitals in other parts of NY state, and in NJ and CT. Maybe they can send them to the V.A. hospitals. Those are the options for how they hope to avoid this particular capacity issue. Maybe they will flatten the curve just enough with all the social distancing. Hard to know just now. But right now the doctors are pretty scared, and they are planning for the worst.
A 2018 analysis from the Johns Hopkins University Center for Health Security estimated we have around 160,000 ventilators in the U.S. If the “worst-case scenario” were to come to pass in the U.S., “there might not be” enough ventilators, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, told CNN on March 15.
“If you don’t have enough ventilators, that means [obviously] that people who need it will not be able to get it,” Fauci said. He stressed that it was most important to mitigate the virus’ spread before it could overwhelm American health infrastructure.
Reports say that the American Hospital Association believes almost 1 million COVID-19 patients in the country will require a ventilator. Not every patient will require ventilation at the same time, but the numbers are still concerning. Dr. Daniel Horn, a physician at Massachusetts General Hospital in Boston, warned in a March 22 editorial in The New York Times that “There simply will not be enough of these machines, especially in major cities.”
Medical resource scarcity in the current crisis is a drastic problem. And without significant efforts to ameliorate it, it is likely to get worse before it gets better.
Using data to allocate scarce resources: The basic outlines of a proposed “Ventilator Procurement and Allocation Network”
But that doesn’t mean that the scarce resources we do have can’t be better allocated. As the PBS story quoted above notes, there are some 160,000 ventilators in the US. While that may not be enough in the aggregate, it’s considerably more than are currently needed in, say, New York City — and a great number of them are surely not being used at the moment, nor likely to be needed imminently.
The basic outline of the idea for redistributing these resources is fairly simple:
First, register all of the US’s existing ventilators in a centralized database.
Second (using a system like the one Hal describes), collect and update in real time the relevant test results, contact tracing, demographic, and other epidemiological data and input it into a database.
Third, analyze this data using one or more compartmental models (or more targeted, virus-specific models) — (NB: I am the furthest thing from an epidemiologist, so I make no claims about how best to do this; the link above, e.g., is merely meant to be illustrative and not a recommendation) — to predict the demand for ventilators at various geographic levels, ranging from specific hospitals to counties or states. In much the same way, allocation of organs in the OPTN is based on a set of “allocation calculators” (which in turn are intended to implement the “Final Rule” adopted by HHS to govern transplant organ allocation decisions).
Fourth, ask facilities in low-expected-demand areas to send their unused (or excess above the level required to address “normal” demand) ventilators to those in high-expected-demand areas, with the expectation that they will be consistently reallocated across all hospitals and emergency care facilities according to the agreed-upon criteria. Of course, the allocation “algorithm” would be more complicated than this (as is the HHS Final Rule for organ allocation). But in principle this would be the primary basis for allocation.
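To make the mechanics of these four steps concrete, here is a deliberately toy sketch in Python. It is not a serious epidemiological model, and every name, rate, and inventory figure in it is a hypothetical placeholder; it simply shows how a ventilator registry, a crude compartmental (SIR-style) projection, and a greedy reallocation rule could fit together.

```python
# Toy sketch of the registry / projection / reallocation pipeline described above.
# All parameters and data are hypothetical placeholders, not real estimates.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    population: int
    infected: int       # current known infections (from the testing database)
    ventilators: int    # registered units on hand (step 1)
    baseline_need: int  # units reserved for "normal" local demand

def peak_infected(pop, infected, beta=0.25, gamma=0.1, days=21):
    """Crude discrete-time SIR projection over a short planning horizon (step 3)."""
    s, i = pop - infected, infected
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

def reallocate(regions, vent_share=0.02):
    """Greedy transfer of surplus units toward projected shortfalls (step 4)."""
    surplus, deficit = [], []
    for r in regions:
        need = int(peak_infected(r.population, r.infected) * vent_share)
        spare = r.ventilators - max(need, r.baseline_need)
        (surplus if spare > 0 else deficit).append([r, spare])
    deficit.sort(key=lambda entry: entry[1])  # worst projected shortfall first
    transfers = []
    for needy, spare in deficit:
        gap = -spare
        for entry in surplus:
            donor, avail = entry
            if gap <= 0 or avail <= 0:
                continue
            moved = min(avail, gap)
            transfers.append((donor.name, needy.name, moved))
            entry[1] -= moved
            gap -= moved
    return transfers

regions = [
    Region("NYC", 8_400_000, 20_000, 3_000, 500),   # hypothetical hot spot
    Region("Rural-A", 300_000, 50, 400, 60),        # hypothetical low-demand area
]
for donor, recipient, units in reallocate(regions):
    print(f"{donor} -> {recipient}: {units} ventilators")
```

A real system would, as noted, substitute proper epidemiological models and the agreed-upon allocation criteria for the crude projection and greedy rule used here; the point is only that the data pipeline itself is computationally straightforward.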
Not surprisingly, some guidelines for the allocation of ventilators in such emergencies already exist — like New York’s Ventilator Allocation Guidelines for triaging ventilators during an influenza pandemic. But such guidelines address the protocols for each facility to use in determining how to allocate its own scarce resources; they do not contemplate the ability to alleviate shortages in the first place by redistributing ventilators across facilities (or cities, states, etc.).
I believe that such a system — like the OPTN — could largely work on a voluntary basis. Of course, I’m quick to point out that the OPTN is a function of a massive involuntary and distortionary constraint: the illegality of organ sales. But I suspect that a crisis like the one we’re currently facing is enough to engender much the same sort of shortage (as if such a constraint were in place with respect to the use of ventilators), and thus that a similar system would be similarly useful. If not, of course, it’s possible that the government could, in emergency situations, actually commandeer privately-owned ventilators in order to effectuate the system. I leave for another day the consideration of the merits and defects of such a regime.
Of course, it need not rely on voluntary participation. There could be any number of feasible means of inducing hospitals that have unused ventilators to put their surpluses into the allocation network, presumably involving some sort of cash or other compensation. Or perhaps, if and when such a system were expanded to include other medical resources, it might involve moving donor hospitals up the queue for some other scarce resources they need that don’t face a current crisis. Surely there must be equipment that a New York City hospital has in relative surplus that a small town hospital covets.
But the key point is this: It doesn’t make sense to produce and purchase enough ventilators so that every hospital in the country can simultaneously address extremely rare peak demands. Doing so would be extraordinarily — and almost always needlessly — expensive. And emergency preparedness is never about ensuring that there are no shortages in the worst-case scenario; it’s about making a minimax calculation (as odious as those are) — i.e., minimizing the maximal cost/risk, not mitigating risk entirely. (For a literature review of emergency logistics in the context of large-scale disasters, see, e.g., here.)
But nor does it make sense — as a policy matter — to allocate the new ventilators that will be produced in response to current demand solely on the basis of current demand. The epidemiological externalities of the current pandemic are substantial, and there is little reason to think that currently over-taxed emergency facilities — or even those preparing for their own expected demand — will make procurement decisions that reflect the optimal national (let alone global) allocation of such resources. A system like the one I outline here would effectively enable the conversion of private, constrained decisions to serve the broader demands required for optimal allocation of scarce resources in the face of epidemiological externalities.
Indeed — and importantly — such a program allows the government to supplement existing and future public and private procurement decisions to ensure an overall optimal level of supply (and, of course, government-owned ventilators — 10,000 of which already exist in the Strategic National Stockpile — would similarly be put into the registry and deployed using the same criteria). Meanwhile, it would allow private facilities to confront emergency scenarios like the current one with far more resources than it would ever make sense for any given facility to have on hand in normal times.
There are, as always, caveats. First, such a program relies on the continued, effective functioning of transportation networks. If any given emergency were to disrupt these — and surely some would — the program would not necessarily function as planned. Of course, some of this can be mitigated by caching emergency equipment in key locations, and, over the course of an emergency, regularly redistributing those caches to facilitate expected deployments as the relevant data comes in. But, to be sure, at the end of the day such a program depends on the ability to transport ventilators.
In addition, there will always be the risk that emergency needs swamp even the aggregate available resources simultaneously (as may yet occur during the current crisis). But at the limit there is nothing that can be done about such an eventuality: Short of having enough ventilators on hand so that every needy person in the country can use one essentially simultaneously, there will always be the possibility that some level of demand will outpace our resources. But even in such a situation — where allocation of resources is collectively guided by epidemiological (or, in the case of other emergencies, other relevant) criteria — the system will work to mitigate the likely overburdening of resources, and ensure that overall resource allocation is guided by medically relevant criteria, rather than merely the happenstance of geography, budget constraints, storage space, or the like.
Finally, no doubt a host of existing regulations would make such a program difficult or impossible. Obviously, these should be rescinded. One set of policy concerns is nevertheless worth noting: privacy. There is an inherent conflict between strong data privacy, in which decisions about the sharing of information belong to each individual, and the data needs of combating an epidemic, in which each person’s privately optimal level of data sharing may result in a socially sub-optimal level of shared data. To the extent that HIPAA or other privacy regulations would stand in the way of a program like this, it seems singularly important to relax them. Much of the relevant data cannot be efficiently collected on an opt-in basis (as is easily done, by contrast, for the OPTN). Certainly appropriate safeguards should be put in place (particularly with respect to the ability of government agencies/law enforcement to access the data). But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.
Appropriate standards for emergency preparedness policy generally
Importantly, such a plan would have broader applicability beyond ventilators and the current crisis. And this is a key aspect of addressing the problem: avoiding a myopic focus on the current emergency in lieu of a more clear-eyed emergency preparedness plan.
It’s important to be thinking not only about the current crisis but also about the next emergency. But it’s equally important not to let political point-scoring and a bias in favor of focusing on the seen over the unseen co-opt any such efforts. A proper assessment entails the following considerations, surely among others (hat tip to Ron Cass for bringing most of the following insights to my attention):
Arguably we are overweighting health and safety concerns with respect to COVID-19 compared to our assessments in other areas (such as ordinary flu (on which see this informative thread by Anup Malani), highway safety, heart & coronary artery diseases, etc.). That’s inevitable when one particular concern is currently so omnipresent and so disruptive. But it is important that we not let our preparations for future problems focus myopically on this cause, because the next crisis may be something entirely different.
Nor is it reasonable to expect that we would ever have been (or be in the future) fully prepared for a global pandemic. It may not be an “unknown unknown,” but it is impossible to prepare for all possible contingencies, and simply not sensible to prepare fully for such rare and difficult-to-predict events.
That said, we also shouldn’t be surprised that we’re seeing more frequent global pandemics (a function of broader globalization), and there’s little reason to think that we won’t continue to do so. It makes sense to be optimally prepared for such eventualities, and if this one has shown us anything, it’s that our ability to allocate medical resources that are made suddenly scarce by a widespread emergency is insufficient.
But rather than overreact to such crises — which is difficult, given that overreaction typically aligns with the private incentives of key decision makers, the media, and many in the “chattering class” — we should take a broader, more public-focused view of our response. Moreover, political and bureaucratic incentives not only produce overreactions to visible crises, they also undermine the appropriate preparation for such crises in the future.
Thus, we should create programs that identify and mobilize generically useful emergency equipment not likely to be made obsolete within a short period and likely to be needed whatever the source of the next emergency. In other words, we should continue to focus the bulk of our preparedness on things like quickly deployable ICU facilities, ventilators, and clean blood supplies — not, as we may be wrongly inclined to do given the salience of the current crisis, primarily on specially targeted drugs and test kits. Our predictive capacity for our future demand of more narrowly useful products is too poor to justify substantial investment.
Given the relative likelihood of another pandemic, generic preparedness certainly includes the ability to inhibit the overly fast spread of a disease that can clog critical health care facilities. This isn’t disease-specific: while the specific rate and contours of infection vary from disease to disease, it is relatively fast and widespread contagion that causes any such disease to overtax our medical resources. If we’re preparing for a future virus-related emergency, we’re thus necessarily preparing for a disease that spreads quickly and widely.
Because the next emergency isn’t necessarily going to be — and perhaps isn’t even likely to be — a pandemic, our preparedness should not be limited to pandemic preparedness. This means, as noted above, overcoming the political and other incentives to focus myopically on the current problem even when nominally preparing for the next one. But doing so is difficult, and requires considerable political will and leadership. It’s hard to conceive of our current federal leadership being up to the task, but it’s certainly not the case that our current problems are entirely the makings of this administration. All governments spend too much time and attention solving — and regulating — the most visible problems, whether doing so is socially optimal or not.
Thus, in addition to (1) providing for the efficient and effective use of data to allocate emergency medical resources (e.g., as described above), and (2) ensuring that our preparedness centers primarily on generically useful emergency equipment, our overall response should also (3) recognize and correct the way current regulatory regimes also overweight visible adverse health effects and inhibit competition and adaptation by industry and those utilizing health services, and (4) make sure that the economic and health consequences of emergency and regulatory programs (such as the current quarantine) are fully justified and optimized.
A proposal like the one I outline above would, I believe, be consistent with these considerations and enable more effective medical crisis response in general.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Eric Fruits, (Chief Economist, International Center for Law & Economics).]
The Wall Street Journal reports congressional leaders have agreed to impose limits on stock buybacks and dividend payments for companies receiving aid under the COVID-19 disaster relief package.
Rather than a flat-out ban, the draft legislation forbids any company taking federal emergency loans or loan guarantees from repurchasing its own stock or paying shareholder dividends. The ban lasts for the term of the loans, plus one year after the aid has ended.
In theory, under a strict set of conditions, there is no difference between dividends and buybacks. Both approaches distribute cash from the corporation to shareholders. In practice, there are big differences between dividends and share repurchases.
Dividends are publicly visible actions and require authorization by the board of directors. Shareholders have expectations of regular, stable dividends. Buybacks generally lack such transparency. Firms have flexibility in choosing the timing and the amount of repurchases, subject to the details of their repurchase programs.
Cash dividends have no effect on the number of shares outstanding. In contrast, share repurchases reduce the number of shares outstanding. By reducing the number of shares outstanding, buybacks increase earnings per share, all other things being equal.
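To see both points at once, here is a toy numerical sketch with purely hypothetical figures: under frictionless assumptions a $1 million payout leaves the shareholder equally well off whether it arrives as a dividend or a buyback, but only the buyback mechanically raises earnings per share by shrinking the share count.

```python
# Hypothetical firm: the same $1M payout as a dividend vs. a buyback.
firm_value = 100_000_000   # market value before the payout
shares     = 10_000_000
earnings   = 5_000_000
payout     = 1_000_000

price = firm_value / shares                     # $10.00 per share

# Dividend: share price drops by the per-share dividend; the holder keeps the cash.
dividend_per_share = payout / shares            # $0.10
wealth_dividend = (price - dividend_per_share) + dividend_per_share

# Buyback: the firm retires payout/price shares at the market price.
retired = payout / price                        # 100,000 shares
wealth_buyback = (firm_value - payout) / (shares - retired)

eps_before = earnings / shares                  # $0.50
eps_after  = earnings / (shares - retired)      # ~$0.505: EPS rises, wealth doesn't

print(wealth_dividend, wealth_buyback)          # 10.0 10.0
print(eps_before, eps_after)                    # 0.5 0.50505...
```

Of course, taxes, signaling, and transaction costs break this equivalence in practice, which is part of why the two payout tools are used so differently over the business cycle.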
Over the past 15 years, buybacks have outpaced dividend payouts. The figure above, from Seeking Alpha, shows that while dividends have grown relatively smoothly over time, the aggregate value of buybacks is volatile and varies with the business cycle. In general, firms increase their repurchases relative to dividends when the economy booms and reduce them when the economy slows or shrinks.
This observation is consistent with a theory that buybacks are associated with periods of greater-than-expected financial performance. On the other hand, dividends are associated with expectations of long-term profitability. Dividends can decrease, but only when profits are expected to be “permanently” lower.
During the Great Recession, the figure above shows, dividends declined by about 10% while the amount of share repurchases plummeted by approximately 85%. The flexibility afforded by buybacks thus provided stability in dividends.
There is some logic to dividend and buyback limits imposed by the COVID-19 disaster relief package. If a firm has enough cash on hand to pay dividends or repurchase shares, then it doesn’t need cash assistance from the federal government. Similarly, if a firm is so desperate for cash that it needs a federal loan or loan guarantee, then it doesn’t have enough cash to provide a payout to shareholders. Surely managers understand this and sophisticated shareholders should too.
Because of this understanding, the dividend and buyback limits may be a non-binding constraint. It’s not a “good look” for a corporation to accept millions of dollars in federal aid, only to turn around and hand out those taxpayer dollars to the company’s shareholders. That’s a sure way to get an unflattering profile in the New York Times and an invitation to attend an uncomfortable hearing at the U.S. Capitol. Even if a distressed firm could repurchase its shares, it’s unlikely that it would.
The logic behind the plus-one-year ban on dividends and buybacks is less clear. The relief package is meant to get the U.S. economy back to normal as fast as possible. That means if a firm repays its financial assistance early, the company’s shareholders should be rewarded with a cash payout rather than waiting a year for some arbitrary clock to run out.
The ban on dividends and buybacks may lead to an unintended consequence of increased merger and acquisition activity. Vox reports that an email to Goldman Sachs’ investment banking division says Goldman expects to see an increase in hostile takeovers and shareholder activism as the prices of public companies fall. Cash-rich firms that are subject to the ban and cannot get that cash to their existing shareholders may be especially attractive takeover targets.
Desperate times call for desperate measures, and these are desperate times. Buyback backlash has been brewing for some time, and the COVID-19 relief package presents a perfect opportunity to ban buybacks. With the pressures businesses are under right now, it’s unlikely there’ll be many buybacks over the next few months. The real concern should be the unintended consequences facing firms once the economy recovers.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Ben Sperry, (Associate Director, Legal Research, International Center for Law & Economics).]
The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions.
Nobel-prize winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:
We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.
Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.
Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific twitter thread):
Consumers ration how much they really need;
Producers respond to the rising prices by ramping up supply and distributors make more available; and
Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain.
Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring.
Rationing by consumers
During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages.
Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods.
If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.
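A toy linear supply-and-demand sketch (all coefficients hypothetical) makes this concrete: when demand spikes, a free price rises and calls forth more supply, while a price capped near its pre-crisis level leaves a large gap between quantity demanded and quantity supplied.

```python
# Hypothetical linear market for an essential good during a demand spike.
def demanded(p, shock=1.0):
    return shock * (1000 - 40 * p)   # quantity demanded falls with price

def supplied(p):
    return 20 * p                    # quantity supplied rises with price

def equilibrium(shock):
    # Solve shock * (1000 - 40p) = 20p for p.
    p = 1000 * shock / (20 + 40 * shock)
    return p, supplied(p)

p0, q0 = equilibrium(shock=1.0)      # pre-crisis: p ~ 16.7, q ~ 333
p1, q1 = equilibrium(shock=3.0)      # demand triples: p ~ 21.4, q ~ 429 (supply expands)

cap = 1.1 * p0                       # a typical "no more than +10%" gouging cap
shortage = demanded(cap, shock=3.0) - supplied(cap)
print(round(p1, 2), round(q1), round(shortage))  # ~433 units of unmet demand
```

Under the cap, supply barely expands beyond its pre-crisis level, and the remaining gap is rationed by queues and empty shelves rather than by price.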
According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages.
For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)
The role of “price gouging” laws and social norms
A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.
But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.
Under these laws, retailers typically may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution.
Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.
Normally, third-party sales sites have much more dynamic pricing than brick-and-mortar outlets, which tend simply to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party “price-gouging” on their sites.
But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first-party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain.
The desire to help the poor who cannot afford higher-priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor, because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.
Increased production and distribution
During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount.
Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product to guarantee that producers do not undersupply the product (leaving more people without than need the good), or oversupply the product (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.
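As a stylized illustration (with hypothetical numbers), consider a price-taking producer with rising marginal cost. The market price alone tells it how much additional output is worth producing; it needs no further information about why demand rose.

```python
# Price-taking producer with cost C(q) = (c/2) * q^2, so marginal cost MC(q) = c * q.
# Profit is maximized where price equals marginal cost, i.e., q* = price / c.
def optimal_output(price, c=0.5):
    return price / c

for price in (2.0, 4.0, 8.0):            # crisis demand pushes the price up...
    print(price, optimal_output(price))  # ...and optimal output rises with it
```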
The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products which are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.
Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.
For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.
These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis.
Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. A non-market price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.
Amazon is now prioritizing the shipment of high-demand goods like household staples and medical supplies in its fulfillment services.
Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use.
While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets.
If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals.
Moves to create new and enforce existing “price-gouging” laws are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove “price-gougers,” as well. These policies should be resisted. Not only will these moves not prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher.
Prices are an important source of information not only for consumers, but also for producers, distributors, and entrepreneurs. Short-circuiting this signal will only be to the detriment of society.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Corbin Barthold, (Senior Litigation Counsel, Washington Legal Foundation).]
The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.
“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.
Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.
But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”
This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to “reduce and relieve unemployment,” “improve standards of labor,” “avoid undue restriction of production,” “induce and maintain united action of labor and management,” “organiz[e] . . . co-operative action among trade groups,” and “otherwise rehabilitate industry.” The law empowered trade groups to create their own “codes of unfair competition,” a privilege they quite predictably used to form anticompetitive cartels.
At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.
The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.
When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.
When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”
It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”
In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.
True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.
COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.