Admirers of the late Supreme Court Justice Louis Brandeis and other antitrust populists often trace the history of American anti-monopoly sentiments from the Founding Era through the Progressive Era’s passage of laws to fight the scourge of 19th century monopolists. For example, Matt Stoller of the American Economic Liberties Project, both in his book Goliath and in other writings, frames the story of America essentially as a battle between monopolists and anti-monopolists.
According to this reading, it was in the late 20th century that powerful corporations and monied interests ultimately succeeded in winning the battle in favor of monopoly power against antitrust authorities, aided by the scholarship of the “ideological” Chicago school of economics and more moderate law & economics scholars like Herbert Hovenkamp of the University of Pennsylvania Law School.
It is a framing that leaves little room for disagreements about economic theory or evidence. One is either anti-monopoly or pro-monopoly, anti-corporate power or pro-corporate power.
What this story muddles is that the dominant anti-monopoly strain from English common law, which continued well into the late 19th century, was opposed specifically to government-granted monopoly. In contrast, today’s “anti-monopolists” focus myopically on alleged monopolies that often benefit consumers, while largely ignoring monopoly power granted by government. The real monopoly problem antitrust law fails to solve is its immunization of anticompetitive government policies. Recovering the older anti-monopoly tradition would better focus activists today.
Common Law Anti-Monopoly Tradition
Scholars like Timothy Sandefur of the Goldwater Institute have written about the right to earn a living that arose out of English common law and was inherited by the United States. This anti-monopoly stance was aimed at government-granted privileges, not at successful business ventures that gained significant size or scale.
For instance, 1602’s Darcy v. Allein, better known as the “Case of Monopolies,” dealt with a “patent” originally granted by Queen Elizabeth I in 1576 to Ralph Bowes, and later bought by Edward Darcy, to make and sell playing cards. Darcy did not invent playing cards; he merely held permission to be their sole purveyor. Thomas Allein, who attempted to sell playing cards he made himself, was sued for violating Darcy’s exclusive rights. The court ultimately held Darcy’s monopoly invalid and refused to convict Allein.
Edward Coke, who actually argued on behalf of the patent in Darcy v. Allein, later wrote that the case stood for the proposition that:
All trades, as well mechanical as others, which prevent idleness (the bane of the commonwealth) and exercise men and youth in labour, for the maintenance of themselves and their families, and for the increase of their substance, to serve the Queen when occasion shall require, are profitable for the commonwealth, and therefore the grant to the plaintiff to have the sole making of them is against the common law, and the benefit and liberty of the subject. (emphasis added)
In essence, Coke’s argument was more closely linked to a “right to work” than to market structures, business efficiency, or firm conduct.
The courts largely resisted royal monopolies in 17th century England, finding such grants to violate the common law. For instance, in The Case of the Tailors of Ipswich, the court cited Darcy and found:
…at the common law, no man could be prohibited from working in any lawful trade, for the law abhors idleness, the mother of all evil… especially in young men, who ought in their youth, (which is their seed time) to learn lawful sciences and trades, which are profitable to the commonwealth, and whereof they might reap the fruit in their old age, for idle in youth, poor in age; and therefore the common law abhors all monopolies, which prohibit any from working in any lawful trade. (emphasis added)
The principles enunciated in these cases were eventually codified in the Statute of Monopolies of 1624, which prohibited the Crown from granting monopolies in most circumstances. This was especially the case when the monopoly prevented otherwise lawful work.
This common-law tradition also disdained private contracts that created monopoly by restraining the right to work. For instance, the famous Dyer’s Case of 1414 held that a contract in which John Dyer promised not to practice his trade in the same town as the plaintiff was void as an unreasonable restraint of trade. The judge is said to have declared, in response to the plaintiff’s complaint, that he would have imprisoned anyone who claimed such a monopoly on his own authority.
Over time, the common law developed analysis that looked at the reasonableness of restraints on trade, such as the extent to which they were limited in geographic reach and duration, as well as the consideration given in return. This part of the anti-monopoly tradition would later constitute the thread pulled on by the populists and progressives who created the earliest American antitrust laws.
Early American Anti-Monopoly Tradition
American law largely inherited the English common law system. It also inherited the anti-monopoly tradition the common law embodied. The founding generation of American lawyers were trained on Edward Coke’s commentary in “The Institutes of the Laws of England,” wherein he strongly opposed government-granted monopolies.
This sentiment can be found in the 1641 Massachusetts Body of Liberties, which stated: “No monopolies shall be granted or allowed amongst us, but of such new Inventions that are profitable to the Countrie, and that for a short time.” In fact, the Boston Tea Party itself was in part a protest of the monopoly granted to the East India Company, which included a special refund from duties by Parliament that no other tea importers enjoyed.
This anti-monopoly tradition also can be seen in the debates at the Constitutional Convention. A proposal to give the federal government power to grant “charters of incorporation” was voted down on fears it could lead to monopolies. Thomas Jefferson, George Mason, and several Antifederalists expressed concerns about the new national government’s ability to grant monopolies, arguing that an anti-monopoly clause should be added to the Constitution. Six states wanted to include provisions that would ban monopolies and the granting of special privileges in the Constitution.
Coinciding with the Industrial Revolution, liberalization of corporate law made it easier for private persons to organize firms that were not simply grants of exclusive monopoly. But discontent with industrialization and other social changes contributed to the birth of a populist movement, and later to progressives like Brandeis, who focused on private combinations and corporate power rather than government-granted privileges. This is the strand of anti-monopoly sentiment that continues to dominate the rhetoric today.
What This Means for Today
Modern anti-monopoly advocates have largely forgotten the lessons of the long Anglo-American tradition that found government is often the source of monopoly power. Indeed, American law privileges government’s ability to grant favors to businesses through licensing, the tax code, subsidies, and even regulation. The state action doctrine from Parker v. Brown exempts state and municipal authorities from antitrust lawsuits even where their policies have anticompetitive effects. And the Noerr-Pennington doctrine protects the rights of industry groups to lobby the government to pass anticompetitive laws.
As a result, government is often used to harm competition, with no remedy outside of the political process that created the monopoly. Antitrust law is used instead to target businesses built by serving consumers well in the marketplace.
Recovering this older anti-monopoly tradition would help focus the anti-monopoly movement on a serious problem modern antitrust misses. While the consumer-welfare standard that modern antitrust advocates often decry has helped to focus the law on actual harms to consumers, antitrust more broadly continues to encourage rent-seeking by immunizing state action and lobbying behavior.
One of the key recommendations of the House Judiciary Committee’s antitrust report, which seems to have bipartisan support (see Rep. Buck’s report), is shifting the evidentiary burden of proof to defendants with “monopoly power.” These recommended changes are aimed at helping antitrust enforcers and private plaintiffs “win” more. The result may well be more convictions, more jury verdicts, more consent decrees, and more settlements, but there is a cost.
A presumption of illegality for certain classes of defendants, unless they can prove otherwise, is inconsistent with the American traditions of presuming innocence and of allowing persons to dispose of their property as they wish. Forcing antitrust defendants to rebut what is effectively a presumption of guilt will place an enormous burden upon them. But the costs will be felt far beyond antitrust defendants: consumers will forgo the benefits of mergers that are deterred and business conduct that is prevented.
The Presumption of Liberty in American Law
The Presumption of Innocence
There is nothing wrong with presumptions in law as a general matter. For instance, one of the most important presumptions in American law is that criminal defendants are presumed innocent until proven guilty. Prosecutors bear the burden of proof, and must prove guilt beyond a reasonable doubt. Even in the civil context, plaintiffs, whether public or private, have the burden of proving a violation of the law, by the preponderance of the evidence. In either case, the defendant is not required to prove they didn’t violate the law.
Fundamentally, the presumption of innocence is about liberty. As William Blackstone put it in his Commentaries on the Laws of England centuries ago: “the law holds that it is better that ten guilty persons escape than that one innocent suffer.”
In economic terms, society must balance the need to deter bad conduct, however defined, against the risk of deterring good conduct. In a world of uncertainty, this includes the possibility that decision-makers will get it wrong. If a mere allegation of wrongdoing placed the burden upon the defendant to prove his or her innocence, much good conduct would be deterred by the fear of false allegations. In this sense, the presumption of innocence is important: it protects the innocent from false allegations of wrongdoing, even if that means the guilty sometimes escape judgment.
Presumptions in Property, Contract, and Corporate Law
Similarly, presumptions in other areas of law protect liberty and guard against deterring good conduct in the name of preventing bad conduct. For instance, the presumption governing how people dispose of their property is that, unless a law says otherwise, they may do as they wish. In other words, there is no presumption against a person using their property as they see fit. The presumption is liberty, unless a valid law proscribes the behavior. The exceptions to this rule typically involve uses of property that could harm someone else.
In contracts, the right of persons to come to a mutual agreement is the general rule, with rare exceptions. The presumption is in favor of enforcing voluntary agreements. Default rules in the absence of complete contracting supplement these agreements, but even the default rules can be contracted around in most cases.
Bringing the two together, corporate law—essentially the nexus of contract law and property law—allows persons to come together to dispose of property and make contracts, supplying default rules that can be contracted around. The presumption, again, is that people are free to do as they choose with their own property. The default is never that people can’t create firms to buy, sell, or make agreements.
A corollary of the above is that people may start businesses and deal with others on whatever basis they choose, unless a generally applicable law says otherwise. Indeed, they may even buy other businesses: mergers and acquisitions are generally allowed by law.
Presumptions in Antitrust Law
Antitrust is a generally applicable body of law that proscribes how people can use their property. But even there, the presumption is not that every merger or act by a large company is harmful.
On the contrary, antitrust laws allow groups of people to dispose of property as they wish unless it can be shown that a firm has “market power” that is likely to be exercised to the detriment of competition or consumers. Plaintiffs, whether public or private, bear the burden of proving all the elements of the antitrust violation alleged.
In particular, antitrust law has incorporated the error-cost framework, which considers the cost of getting decisions wrong. Much as the presumption of innocence accepts that some guilty persons go unpunished in order to protect the innocent, the error-cost framework notes there is a tradeoff between allowing some anticompetitive conduct to go unpunished and protecting procompetitive conduct. American antitrust law seeks to avoid condemning procompetitive conduct more than it seeks to avoid letting the guilty escape condemnation.
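A back-of-the-envelope calculation makes the tradeoff concrete. Everything below is a hypothetical assumption chosen for illustration; the probabilities and costs are not estimates from the antitrust literature:

```python
# Illustrative sketch of the error-cost framework. All numbers are
# hypothetical assumptions, not empirical estimates.

def expected_error_cost(p_false_positive, p_false_negative,
                        cost_false_positive, cost_false_negative):
    """Expected social cost of a legal rule's mistakes."""
    return (p_false_positive * cost_false_positive
            + p_false_negative * cost_false_negative)

# A lenient rule lets more anticompetitive conduct slip through (more
# false negatives); an aggressive rule condemns more procompetitive
# conduct (more false positives). Suppose false condemnations are
# costlier, because they chill the conduct the laws aim to protect.
lenient = expected_error_cost(0.05, 0.20, 100, 50)     # 5 + 10 = 15
aggressive = expected_error_cost(0.25, 0.05, 100, 50)  # 25 + 2.5 = 27.5

# Under these assumptions, the lenient rule has the lower expected
# error cost despite letting more violations go unpunished.
assert lenient < aggressive
```

Nothing about the conclusion is general; flip the relative costs and the aggressive rule wins. The point is only that this comparison, not an instinct to condemn, should drive the choice of rule.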
For instance, to prove that a merger or acquisition would violate the antitrust laws, a plaintiff must show the transaction will substantially lessen competition. This involves defining the relevant market, showing that the defendant has power over that market, and demonstrating that the transaction would lessen competition within it. While market concentration is an important part of the analysis, antitrust law must consider the effect on consumer welfare as a whole. The law doesn’t condemn mergers or acquisitions by large companies simply because they are large.
Similarly, to prove a monopolization claim, a plaintiff must establish the defendant has “monopoly power” in the relevant market. But monopoly power isn’t enough. As stated by the Supreme Court in Trinko:
The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.
The plaintiff must also prove the defendant has engaged in the “willful acquisition or maintenance of [market] power, as distinguished from growth or development as a consequence of a superior product, business acumen, or historical accident.” Antitrust law is careful to avoid mistaken inferences and false condemnations, which are especially costly because they “chill the very conduct antitrust laws are designed to protect.”
The presumption isn’t against mergers or business conduct even when those businesses are large. Antitrust law only condemns mergers or business conduct when it is likely to harm consumers.
How Changing Antitrust Presumptions Will Harm Society
In light of all this, the House Judiciary Committee’s Investigation of Competition in Digital Markets proposes some radical departures from the law’s normal presumption in favor of people disposing of their property as they choose. Unfortunately, the minority report issued by Representative Buck agrees with the recommendations to shift burdens onto antitrust defendants in certain cases.
One of the recommendations from the Subcommittee is that Congress:
codify bright-line rules for merger enforcement, including structural presumptions. Under a structural presumption, mergers resulting in a single firm controlling an outsized market share, or resulting in a significant increase in concentration, would be presumptively prohibited under Section 7 of the Clayton Act. This structural presumption would place the burden of proof upon the merging parties to show that the merger would not reduce competition. A showing that the merger would result in efficiencies should not be sufficient to overcome the presumption that it is anticompetitive. It is the view of Subcommittee staff that the 30% threshold established by the Supreme Court in Philadelphia National Bank is appropriate, although a lower standard for monopsony or buyer power claims may deserve consideration by the Subcommittee. By shifting the burden of proof to the merging parties in cases involving concentrated markets and high market shares, codifying the structural presumption would help promote the efficient allocation of agency resources and increase the likelihood that anticompetitive mergers are blocked. (emphasis added)
Under this proposal, in cases where concentration meets an arbitrary benchmark based upon the market definition, the presumption will be that the merger is illegal. Defendants would bear the burden of proving the merger won’t reduce competition, without even being permitted to point to efficiencies that could benefit consumers.
Changing the burden of proof to be against criminal defendants would lead to more convictions of guilty people, but it would also lead to a lot more false convictions of innocent defendants. Similarly, changing the burden of proof to be against antitrust defendants would certainly lead to more condemnations of anticompetitive mergers, but it would also lead to the deterrence of a significant portion of procompetitive mergers.
So yes, if adopted, plaintiffs would likely win more as a result of these proposed changes, including in cases where mergers are anticompetitive. But this does not necessarily mean it would be to the benefit of larger society.
Antitrust has evolved over time to recognize that concentration alone is not predictive of competitive harm in merger analysis. Both the horizontal merger guidelines and the vertical merger guidelines issued by the FTC and DOJ emphasize fact-specific inquiries into competitive effects, not just reliance on concentration statistics. This reflects a long-standing bipartisan consensus. The HJC majority report would overturn that consensus by returning to structural presumptions that antitrust law has largely rejected.
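To make “concentration statistics” concrete, here is a minimal sketch. The HHI formula (the sum of squared percentage market shares) is the standard measure used in the agencies’ merger guidelines; the firms, the revenues, and the application of the report’s proposed 30% trigger at the end are hypothetical illustrations:

```python
# Sketch of the concentration statistics at issue. The firm names and
# revenues are made up; the 30% trigger is the report's proposal.

def market_shares(revenues):
    """Convert firm revenues into percentage market shares."""
    total = sum(revenues.values())
    return {firm: 100 * r / total for firm, r in revenues.items()}

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s ** 2 for s in shares.values())

revenues = {"A": 40, "B": 30, "C": 20, "D": 10}  # hypothetical market
shares = market_shares(revenues)
print(round(hhi(shares)))  # 3000: "highly concentrated" by guideline thresholds

# Under the proposed structural presumption, a merger leaving any firm
# at or above a 30% share would be presumptively illegal, regardless of
# its actual effects on consumers.
merged_share = shares["B"] + shares["C"]  # 50.0
presumed_illegal = merged_share >= 30
print(presumed_illegal)  # True
```

Note what the statistic cannot capture: nothing in the share arithmetic says whether the merged firm would raise prices, cut them, or face new entry, which is why the guidelines pair concentration screens with fact-specific effects analysis.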
The HJC majority report also calls for changes in presumptions for monopolization claims. For instance, the report calls on Congress to consider creating a statutory presumption of dominance for a seller with a market share of 30% or more, and for a buyer with a market share of 25% or more. The report then suggests overturning a number of monopolization precedents that, in the majority’s view, unduly restricted claims of tying, predatory pricing, refusals to deal, leveraging, and self-preferencing. In particular, it calls on Congress to “[c]larify that ‘false positives’ (or erroneous enforcement) are not more costly than ‘false negatives’ (erroneous non-enforcement), and that, when relating to conduct or mergers involving dominant firms, ‘false negatives’ are costlier.”
This again turns on their heads the ordinary presumptions of innocence and of allowing people to dispose of their property as they see fit. If adopted, defendants in monopolization cases would largely have to prove their innocence whenever their market shares exceed a given threshold.
Moreover, the report calls for Congress to consider making conduct illegal even if it “can be justified as an improvement for consumers.” It is highly likely that the proposed changes will harm consumer welfare in many cases, as the focus shifts from economic efficiency to concentration.
The HJC report’s recommendations on changing antitrust presumptions should be rejected. The harms will be felt not only by antitrust defendants, who will be much more likely to lose regardless of whether they have violated the law, but by consumers whose welfare is no longer the focus. The result is inconsistent with the American tradition that presumes innocence and the ability of people to dispose of their property as they see fit.
During last week’s antitrust hearing, Representative Jamie Raskin (D-Md.) provided a sound bite that served as a salvo: “In the 19th century we had the robber barons, in the 21st century we get the cyber barons.” But with sound bites, much like bumper stickers, there’s no room for nuance or scrutiny.
The news media has extensively covered the “questioning” of the CEOs of Facebook, Google, Apple, and Amazon (collectively “Big Tech”). Of course, most of this questioning was actually political posturing with little regard for the actual answers or antitrust law. But just like with the so-called robber barons, the story of Big Tech is much more interesting and complex.
The myth of the robber barons: Market entrepreneurs vs. political entrepreneurs
In his Myth of the Robber Barons, Burton Folsom, Jr. makes the case that much of the received wisdom about the great 19th-century businessmen is wrong. He distinguishes between market entrepreneurs, who generated wealth by selling newer, better, or less expensive products on the free market without government subsidies, and political entrepreneurs, who became rich primarily by influencing the government to subsidize their businesses or to enact legislation or regulation that harmed their competitors.
Folsom narrates the stories of market entrepreneurs, like Thomas Gibbons & Cornelius Vanderbilt (steamships), James Hill (railroads), the Scranton brothers (iron rails), Andrew Carnegie & Charles Schwab (steel), and John D. Rockefeller (oil), who created immense value for consumers by drastically reducing the prices of the goods and services their companies provided. Yes, these men got rich. But the value society received was arguably even greater. Wealth was created because market exchange is a positive-sum game.
On the other hand, the political entrepreneurs, like Robert Fulton & Edward Collins (steamships) and Leland Stanford & Henry Villard (railroads), drained societal resources by using taxpayer money to create inefficient monopolies. Because their favored position insulated them from market discipline, cutting costs and prices mattered less to them than to the market entrepreneurs. Their wealth came at the expense of the rest of society, because political exchange is a zero-sum game.
Big Tech makes society better off
Today’s titans of industry, i.e. Big Tech, have created enormous value for society. This is almost impossible to deny, though some try. From zero-priced search on Google, to the convenience and price of products on Amazon, to the nominally free social network(s) of Facebook, to the plethora of options in Apple’s App Store, consumers have greatly benefited from Big Tech. Consumers flock to use Google, Facebook, Amazon, and Apple for a reason: they believe they are getting a great deal.
By and large, the techlash comes from “intellectuals” who think they know better than consumers acting in the marketplace what is good for them. And as Alec Stapp has noted, Americans in opinion polls consistently put a great deal of trust in Big Tech, at least compared to government institutions.
One of the basic building blocks of economics is that both parties benefit from a voluntary exchange ex ante, or else they would not be willing to engage in it. The fact that consumers use Big Tech to the extent they do is overwhelming evidence of these companies’ value. Obfuscations like “market power” mislead more than they inform. In the absence of governmental barriers to entry, consumers voluntarily choosing Big Tech does not mean these firms hold power over them; it means they provide great service.
Big Tech companies are run by entrepreneurs who must ultimately answer to consumers. In a market economy, profits are a signal that entrepreneurs have successfully brought value to society. But they are also a signal to potential competitors. If Big Tech companies don’t continue to serve the interests of their consumers, they risk losing them to competitors.
Big Tech’s CEOs seem to get this. For instance, Jeff Bezos’ written testimony emphasized the importance of continual innovation at Amazon as a reason for its success:
Since our founding, we have strived to maintain a “Day One” mentality at the company. By that I mean approaching everything we do with the energy and entrepreneurial spirit of Day One. Even though Amazon is a large company, I have always believed that if we commit ourselves to maintaining a Day One mentality as a critical part of our DNA, we can have both the scope and capabilities of a large company and the spirit and heart of a small one.
In my view, obsessive customer focus is by far the best way to achieve and maintain Day One vitality. Why? Because customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and a constant desire to delight customers drives us to constantly invent on their behalf. As a result, by focusing obsessively on customers, we are internally driven to improve our services, add benefits and features, invent new products, lower prices, and speed up shipping times—before we have to. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it. And I could give you many such examples. Not every business takes this customer-first approach, but we do, and it’s our greatest strength.
The economics of multi-sided platforms: How Big Tech does it
Economically speaking, Big Tech companies are (mostly) multi-sided platforms. Multi-sided platforms differ from regular firms in that they must serve two or more distinct types of consumers to generate demand from any of them.
Economist David Evans, who has done as much as any to help us understand multi-sided platforms, has identified three different types:
Market-Makers enable members of distinct groups to transact with each other. Each member of a group values the service more highly if there are more members of the other group, thereby increasing the likelihood of a match and reducing the time it takes to find an acceptable match. (Amazon and Apple’s App Store)
Audience-Makers match advertisers to audiences. Advertisers value a service more if there are more members of an audience who will react positively to their messages; audiences value a service more if there is more useful “content” provided by audience-makers. (Google, especially through YouTube, and Facebook, especially through Instagram)
Demand-Coordinators make goods and services that generate indirect network effects across two or more groups. These platforms do not strictly sell “transactions” like a market maker or “messages” like an audience-maker; they are a residual category much like irregular verbs – numerous, heterogeneous, and important. Software platforms such as Windows and the Palm OS, payment systems such as credit cards, and mobile telephones are demand coordinators. (Android, iOS)
In order to bring value, Big Tech has to consider consumers on all sides of the platforms they operate. Sometimes this means consumers on one side of the platform subsidize those on the other.
For instance, Google doesn’t charge its users to use its search engine, YouTube, or Gmail; instead, companies pay Google to advertise to those users. Similarly, Facebook doesn’t charge the users of its social network; advertisers on the other side of the platform subsidize them.
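The cross-subsidy logic can be sketched with a toy model. The demand curves and numbers below are hypothetical assumptions chosen for illustration, not a model of any actual platform:

```python
# Toy two-sided platform: users on one side, advertisers on the other.
# Advertiser demand grows with the audience (an indirect network effect),
# so every user the platform attracts is worth revenue on the ad side.

def users_at(price):
    """Hypothetical user demand: very price-sensitive."""
    return max(0, 1000 - 800 * price)

def advertisers_at(ad_price, users):
    """Hypothetical advertiser demand: grows with the audience."""
    return max(0, users // 2 - 20 * ad_price)

def profit(user_price, ad_price):
    users = users_at(user_price)
    ads = advertisers_at(ad_price, users)
    return user_price * users + ad_price * ads

free = profit(0.0, 3.0)  # free for users, subsidized by advertisers
paid = profit(1.0, 3.0)  # charge users directly

# Charging users shrinks the audience, destroying more ad revenue than
# the user fees bring in: under these assumptions, the zero price on
# one side is the profit-maximizing structure.
assert free > paid
```

This is why a zero price on one side of a platform is not evidence of predation or of a product with no value; it can simply be the structure that maximizes the value of the platform as a whole.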
As their competitors and critics love to point out, there are complications: some platforms also compete in the markets they create. For instance, Apple does place its own apps in its App Store, and Amazon does engage in some first-party sales on its platform. But generally speaking, both Apple and Amazon act as matchmakers for exchanges between users and third parties.
The difficulty for multi-sided platforms is that they must balance the interests of each side of the platform in a way that maximizes the platform’s overall value.
Google and Facebook need to balance the interests of users and advertisers. In each case, this means a free service for users, subsidized by advertisers. Advertisers, in turn, gain value from ads tailored to users’ search histories, browsing histories, and likes and shares. Apple and Amazon need to create platforms that are valuable to both buyers and sellers, and must decide how much first-party competition to engage in before they lose the benefits of third-party sales.
There are no easy answers to running a search engine, a video service, a social network, an app store, or an online marketplace. Everything from moderation practices, to pricing on each side of the platform, to the degree of competition from the platform operators themselves must be balanced correctly, or these platforms will lose participants on one side or the other to competitors.
Representative Raskin’s “cyber barons” were dragged through the mud by Congress. But much like the 19th-century robber barons who were in truth market entrepreneurs, the Big Tech companies of today are wrongfully maligned.
No one is forcing consumers to use these platforms. The incredible benefits they have brought to society through market processes show they are not robbing anyone. Instead, they are constantly innovating and attempting to strike a balance among consumers on each side of their platforms.
The myth of the cyber barons need not live on any longer than last week’s farcical antitrust hearing.
This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.
A boy throws a brick through a bakeshop window. He flees and is never identified. The townspeople gather around the broken glass. “Well,” one of them says to the furious baker, “at least this will generate some business for the windowmaker!”
A reasonable statement? Not really. Although it is indeed a good day for the windowmaker, the money for the new window comes from the baker. Perhaps the baker was planning to use that money to buy a new suit. Now, instead of owning a window and a suit, he owns only a window. The windowmaker’s gain, meanwhile, is simply the tailor’s loss.
This parable of the broken window was conceived by Frédéric Bastiat, a nineteenth-century French economist. He wanted to alert the reader to the importance of opportunity costs—in his words, “that which is not seen.” Time and money spent on one activity cannot be spent on another.
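Bastiat’s accounting can be written out as a small ledger. The amounts are arbitrary placeholders:

```python
# The broken window as a ledger. All amounts are arbitrary placeholders.
WINDOW = SUIT = 100

# Seen: the windowmaker earns the price of a new window.
windowmaker_gain = WINDOW

# Not seen: the baker pays for the window with money he had earmarked
# for a suit, and the tailor loses that sale.
baker_loss = WINDOW   # the same money, now spent on replacement glass
tailor_loss = SUIT    # the sale that never happens

# The windowmaker's gain is simply the tailor's loss, transferred.
assert windowmaker_gain == tailor_loss

# Town-wide, the brick creates no new wealth: every dollar the
# windowmaker earns came out of the baker's pocket, and the town
# ends up one suit poorer than it would otherwise have been.
net_new_wealth = windowmaker_gain - baker_loss
assert net_new_wealth == 0
```

The ledger only restates the parable: the visible transaction balances against an invisible one, which is the whole force of “that which is not seen.”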
Today Bastiat might tell the parable of the harassed technology company. A tech firm creates a revolutionary new product or service and grows very large. Rivals, lawyers, activists, and politicians call for an antitrust probe. Eventually they get their way. Millions of documents are produced, dozens of depositions are taken, and several hearings are held. In the end no concrete action is taken. “Well,” the critics say, “at least other companies could grow while the firm was sidetracked by the investigation!”
Consider the antitrust case against Microsoft twenty years ago. The case ultimately settled, and Microsoft agreed merely to modify minor aspects of how it sold its products. “It’s worth wondering,” writes Brian McCullough, a generally astute historian of the internet, “how much the flowering of the dot-com era was enabled by the fact that the most dominant, rapacious player in the industry was distracted while the new era was taking shape.” “It’s easy to see,” McCullough says, “that the antitrust trial hobbled Microsoft strategically, and maybe even creatively.”
Should we really be glad that an antitrust dispute “distracted” and “hobbled” Microsoft? What would a focused and unfettered Microsoft have achieved? Maybe nothing; incumbents often grow complacent. Then again, Microsoft might have developed a great search engine or social-media platform. Or it might have invented something that, thanks to the lawsuit, remains absent to this day. What Microsoft would have created in the early 2000s, had it not had to fight the government, is that which is not seen.
But doesn’t obstructing the most successful companies create “room” for new competitors? David Cicilline, the chairman of the House’s antitrust subcommittee, argues that “just pursuing the [Microsoft] enforcement action itself” made “space for an enormous amount of additional innovation and competition.” He contends that the large tech firms seek to buy promising startups before they become full-grown threats, and that such purchases must be blocked.
It’s easy stuff to say. It’s not at all clear that it’s true or that it makes sense. Hindsight bias is rampant. In 2012, for example, Facebook bought Instagram for $1 billion, a purchase that is now cited as a quintessential “killer acquisition.” At the time of the sale, however, Instagram had 27 million users and $0 in revenue. Today it has around a billion users, it is estimated to generate $7 billion in revenue each quarter, and it is worth perhaps $100 billion. It is presumptuous to declare that Instagram, which had only 13 employees in 2012, could have achieved this success on its own.
If distraction is an end in itself, last week’s Big Tech hearing before Cicilline and his subcommittee was a smashing success. Presumably Jeff Bezos, Tim Cook, Sundar Pichai, and Mark Zuckerberg would like to spend the balance of their time developing the next big innovations and staying ahead of smart, capable, ruthless competitors, starting with each other and including foreign firms such as ByteDance and Huawei. Last week they had to put their aspirations aside to prepare for and attend five hours of political theater.
The most common form of exchange at the hearing ran as follows. A representative asks a slanted question. The witness begins to articulate a response. The representative cuts the witness off. The representative gives a prepared speech about how the witness’s answer proved her point.
Many of the antitrust subcommittee’s queries had nothing to do with antitrust. One representative fixated on Amazon’s ties with the Southern Poverty Law Center. Another seemed to want Facebook to interrogate job applicants about their political beliefs. A third asked Zuckerberg to answer for the conduct of Twitter. One representative demanded that social-media posts about unproven Covid-19 treatments be left up, another that they be taken down. Most of the questions that were at least vaguely on topic, meanwhile, were exceedingly weak. The representatives often mistook emails showing that tech CEOs play to win, that they seek to outcompete challengers and rivals, for evidence of anticompetitive harm to consumers. And the panel was often treated like a customer-service hotline. This app developer ran into a difficulty; what say you, Mr. Cook? That third-party seller has a gripe; why won’t you listen to her, Mr. Bezos?
In his opening remarks, Bezos cited a survey that ranked Amazon one of the country’s most trusted institutions. No surprise there. In many places one could have ordered a grocery delivery from Amazon as the hearing started and had the goods put away before it ended. Was Bezos taking a muted dig at Congress? He had every right to—it is one of America’s least trusted institutions. Pichai, for his part, noted that many users would be willing to pay thousands of dollars a year for Google’s free products. Is Congress providing people that kind of value?
The advance of technology will never be an unalloyed blessing. There are legitimate concerns, for instance, about how social-media platforms affect public discourse. “Human beings evolved to gossip, preen, manipulate, and ostracize,” psychologist Jonathan Haidt and technologist Tobias Rose-Stockwell observe. Social media exploits these tendencies, they contend, by rewarding those who trade in the glib put-down, the smug pronouncement, the theatrical smear. Speakers become “cruel and shallow”; “nuance and truth” become “casualties in [a] competition to gain the approval of [an] audience.”
Three things are true at once. First, Haidt and Rose-Stockwell have a point. Second, their point goes only so far. Social media does not force people to behave badly. Assuming otherwise lets individual humans off too easy. Indeed, it deprives them of agency. If you think it is within your power to display grace, love, and transcendence, you owe it to others to think it is within their power as well.
Third, if you really want to see adults act like children, watch a high-profile congressional hearing. A hearing for Attorney General William Barr, held the day before the Big Tech hearing and attended by many of the same representatives, was a classic of the format.
The tech hearing was not as shambolic as the Barr hearing. And the representatives act like sanctimonious halfwits in part to concoct the sick burns that attract clicks on the very platforms built, facilitated, and delivered by the tech companies. For these and other obvious reasons, no one should feel sorry for the four men who spent a Wednesday afternoon serving as props for demagogues. But that doesn’t mean the charade was a productive use of time. There is always that which is not seen.
Yet another tragedy was caught on camera this week: a group of police officers killed an unarmed African-American man named George Floyd. While the officers were fired from the police department, there is still much uncertainty about whether and how they will be held legally accountable.
A well-functioning legal system should protect the constitutional rights of American citizens to be free of unreasonable force from police officers, while also allowing police officers the ability to do their jobs safely and well. In theory, civil rights lawsuits are supposed to strike that balance.
In a civil rights lawsuit, the goal is to make the victim of a rights violation (or the victim’s family) whole through monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective, it is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, decide whether constitutional rights were violated and the extent of damages. A functioning system of settlements would follow as a common law develops determining what counts as a reasonable or unreasonable use of force. This doesn’t mean plaintiffs would always win: officers may be found to have acted reasonably under the circumstances once all the evidence is presented to a jury.
However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity. Qualified immunity started as a mechanism to protect officers from suit when they acted in “good faith.” Over time, though, the doctrine has evolved away from a subjective test based upon the actor’s good faith to an objective test based upon notice in judicial precedent. As a result, courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it. In the words of the Supreme Court, qualified immunity protects “all but the plainly incompetent or those who knowingly violate the law.”
This standard has predictably led to a situation where officer misconduct that judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases in which federal courts found an officer’s conduct illegal yet nonetheless protected by qualified immunity.
Immunity of this nature has profound consequences for the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct cannot get past a motion to dismiss on qualified immunity grounds. On top of that, governments regularly indemnify officers even when there is a settlement or a judgment. The result is to encourage police officers to take insufficient care when choosing what level of force to use.
Economics 101 makes a clear prediction: when officers are not held accountable for unreasonable uses of force, you get more unreasonable uses of force. Unfortunately, the news continues to illustrate the accuracy of this prediction.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]
In an earlier TOTM post, we argued that, as the economy emerges from the COVID-19 crisis, perhaps the best policy would be to allow properly motivated firms and households to balance for themselves the benefits, costs, and risks of transitioning back to “business as usual.”
Sometimes, however, well-meaning government policies disrupt that balance and distort those motivations.
Our post contrasted firms who determined they could remain open by undertaking mitigation efforts with those who determined they could not safely remain open. One of these latter firms was Portland-based ChefStable, which operates more than 20 restaurants and bars. Kurt Huffman, the owner of ChefStable, shut down all the company’s properties one day before the Oregon governor issued her “Stay home, stay safe” order.
An unintended consequence
In a recent Wall Street Journal op-ed, Mr. Huffman reports that his business was able to shift to carryout and delivery, which proved more successful than anticipated. So successful, in fact, that he needed to bring back some of the laid-off employees. That’s when he ran into one of the stimulus package’s unintended—but not unanticipated—consequences: federal payments layered on top of existing state-level unemployment benefits:
We started making the calls last week, just as our furloughed employees began receiving weekly Federal Pandemic Unemployment Compensation checks of $600 under the Cares Act. When we asked our employees to come back, almost all said, “No thanks.” If they return to work, they’ll have to take a pay cut.
But as of this week, that same employee receives $1,016 a week, or $376 more than he made as a full time employee. Why on earth would he want to come back to work?
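The arithmetic behind Mr. Huffman’s complaint can be sketched in a few lines of Python. Note the $640 weekly wage and $416 state benefit are not stated directly in the quote; they are inferred from the quoted figures ($1,016 total minus the $600 federal premium, and “$376 more” than the full-time wage):

```python
# Back-of-the-envelope check of the incentives Mr. Huffman describes.
# Assumed figures (inferred, not quoted directly): a $640/week full-time
# wage and a $416/week state unemployment benefit.

FEDERAL_PREMIUM = 600  # weekly Federal Pandemic Unemployment Compensation

def weekly_income(wage, state_benefit, working):
    """Weekly income while working vs. while laid off on unemployment."""
    return wage if working else state_benefit + FEDERAL_PREMIUM

wage = 640           # implied full-time weekly wage
state_benefit = 416  # inferred weekly state unemployment benefit

on_the_job = weekly_income(wage, state_benefit, working=True)
laid_off = weekly_income(wage, state_benefit, working=False)

print(laid_off)               # 1016: weekly income while laid off
print(laid_off - on_the_job)  # 376: the "pay cut" from returning to work
```

Under these assumed figures, returning to work really does cost the employee $376 a week, which is the distorted incentive the op-ed describes.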
Mr. Huffman’s not alone. NPR reports on a Kentucky coffee shop owner who faces the same difficulty keeping her employees at work:
“The very people we hired have now asked us to be laid off,” Marietta wrote in a blog post. “Not because they did not like their jobs or because they did not want to work, but because it would cost them literally hundreds of dollars per week to be employed.”
With the federal government now offering $600 a week on top of the state’s unemployment benefits, she recognized her former employees could make more money staying home than they did on the job.
Or, a fully intended consequence
The NPR piece indicates the Trump administration opted for the relatively straightforward (if not simplistic) unemployment payments as a way to get the money to unemployed workers as quickly as possible.
On the other hand, maybe the unemployment premium was not an unintended consequence. Perhaps, there was some intention.
If the purpose of the stay-at-home orders is to “flatten the curve” and slow the spread of the coronavirus, then it can be argued the purpose of the stimulus spending is to mitigate some of the economic costs.
If this is the case, it can also be argued that the unemployment premium paid by the federal government was designed to encourage people to stay at home and delay returning to work. In fact, it may be more effective than a bunch of loophole-laden employment regulations that would require an army of enforcers.
Mr. Huffman seems confident his employees will be ready to return to work in August, when the premium runs out. John Cochrane, however, is not so confident, writing on his blog, “Hint to Mr. Huffman: I would not bet too much that this deadline is not extended.”
With the administration’s state-by-state phased re-opening of the economy, the unemployment premium payments could be tweaked so only residents in states in Phase 1 or 2 would be eligible to receive the premium payments.
Of course, this tweak would unleash its own unintended consequences. In particular, it would encourage some states to slow-walk the re-opening of their economies as a way to extract more federal money for their residents. My wild guess: the slow-walking states will be the same states that have been most affected by the state and local tax deductibility provisions in the Tax Cuts and Jobs Act.
As with all government policies, the unemployment provisions in the COVID-19 stimulus raise the age-old question: If a policy generates unintended consequences that are not unanticipated, can those consequences really be unintended?
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Miranda Perry Fleischer (Professor of Law and Co-Director of Tax Programs, University of San Diego School of Law) and Matt Zwolinski (Professor of Philosophy, University of San Diego; founder and director, USD Center for Ethics, Economics, and Public Policy; founder and contributor, Bleeding Heart Libertarians Blog).]
This week, Americans began receiving cold, hard cash from the government. Meant to cushion the economic fallout of Covid-19, the CARES Act provides households with relief payments of up to $1200 per adult and $500 per child. As we have written elsewhere, direct cash transfers are the simplest, least paternalistic, and most efficient way to protect Americans’ economic health – pandemic or not. The idea of simply giving people money has deep historical and wide ideological roots, culminating in Andrew Yang’s popularization of a universal basic income (“UBI”) during his now-suspended presidential campaign. The CARES Act relief provisions embody some of the potential benefits of a UBI, but nevertheless fail in key ways to deliver its true promise.
Provide Cash, No-Strings-Attached
Most promisingly, the relief payments are no-strings-attached. Recipients can use them as they – not the government – think best, be it for rent, food, or a laptop for a child to learn remotely. This freedom is a welcome departure from most current aid programs, which are often in-kind or restricted transfers. Kansas prohibits welfare recipients from using benefits at movie theaters and swimming pools. SNAP recipients cannot purchase “hot food” such as a ready-to-eat roasted chicken; California has a 17-page pamphlet identifying which foods users of Women, Infants and Children (“WIC”) benefits can buy (for example, white eggs but not brown).
These restrictions arise from a distrust of beneficiaries. Yet numerous studies show that recipients of cash transfers do not waste benefits on alcohol, drugs or gambling. Instead, beneficiaries in developing countries purchase livestock, metal roofs, or healthier food. In wealthier countries, cash transfers are associated with improvements in infant health, better nutrition, higher test scores, more schooling, and lower rates of arrest for young adults – all of which suggest beneficiaries do not waste cash.
Avoid Asset Tests
A second positive of the relief payments is that they eschew asset tests, unlike many welfare programs. For example, a family can lose hundreds of dollars of SNAP benefits if their countable assets exceed $2,250. Such limits act as an implicit wealth tax and discourage lower-income individuals from saving. Indeed, some recipients report engaging in transactions like buying furniture on lay-away (which does not count) to avoid the asset limits. Lower-income individuals, for whom a car repair bill or traffic ticket can lead to financial ruin, should be encouraged to – not penalized for – saving for a rainy day.
Don’t Worry So Much about the Labor Market
A third pro is that the direct relief payments are not tied to a showing of desert. They do not require one to work, be looking for work, or show that one is either unable to work or engaged in a substitute such as child care or school. Again, this contrasts with most current welfare programs. SNAP requires able-bodied childless adults to work or participate in training or education 80 hours a month. Supplemental Security Income requires non-elderly recipients to prove that they are blind or disabled. Nor do the relief payments require recipients to pass a drug test, or prove they have no criminal record.
As with spending restrictions, these requirements display distrust of beneficiaries. The fear is that “money for nothing” will encourage low-income individuals to leave their jobs en masse. But this fear, too, is largely overblown. Although past experiments with unconditional transfers show that total work hours drop, the bulk of this drop is from teenagers staying in school longer, new mothers delaying entrance into the workforce, and primary earners reducing their hours from say, 60 to 50 hours a week. We could also imagine UBI recipients spending time volunteering, engaging in the arts, or taking care of friends and relatives. None of these are necessarily bad things.
Don’t Limit Aid to the “Deserving”
On these three counts, the CARES Act embraces the promise of a UBI. But the CARES Act departs from key aspects of a well-designed, true UBI. Most importantly, the size of the relief payments – one-time transfers of $1200 per adult – pales in comparison to the Act’s enhanced unemployment benefits of $600/week. This mismatch underscores how deeply ingrained our country’s obsession with helping only the “deserving” poor is, and how narrowly “desert” is defined. The Act’s most generous aid is limited to individuals with pre-existing connections to the formal labor market who leave under very specific conditions. Someone who cannot work because they are caring for a family member sick with COVID-19 qualifies, but not an adult child who left a job months ago to care for an aging parent with Alzheimer’s. A parent who cannot work because her child’s school was cancelled due to the pandemic qualifies, but not a parent who hasn’t worked the past couple of years due to the lack of affordable child care. And because unemployment benefits not only turn on being previously employed but also rise with one’s past wages, this mismatch means that our safety net helps the slightly poor much more than the very poorest among us.
Don’t Impose Bureaucratic Hurdles
The botched roll-out of the enhanced unemployment benefits illustrates another downside to targeting aid only to the “deserving”: It is far more complicated than giving aid to all who need it. Guidance for self-employed workers (newly eligible for such benefits) is still forthcoming. Individuals with more than one employer before the crisis struggle to input multiple jobs in the system, even though their benefits increase as their past wages do. Even college graduates have trouble completing the clunky forms; a friend who teaches yoga had to choose between “aqua fitness instructor” and “physical education” when listing her job.
These frustrations are just another example of the government’s ineptitude at determining who is and is not work-capable – even in good times. Often, the very people who can navigate the system well enough to convince the government they are unable to work are actually the most work-capable. Those least capable of work, unable to navigate the system, receive nothing. And as millions of Americans spend countless hours on the phone and navigating crashing websites, they are learning what has been painfully obvious to many lower-income individuals for years – the government often puts insurmountable barriers in the way of even the “deserving poor.” These barriers – numerous office visits, lengthy forms, drug tests – are sometimes so time-consuming that beneficiaries must choose between obtaining benefits to which they are legally entitled and applying for jobs or working extra hours. Lesson one from the CARES Act is that universal payments, paid to all, avoid these pitfalls.
Don’t Means Test Up Front
The CARES Act contains three other flaws that a well-designed UBI would also fix. First, the structure of the cash transfers highlights the drawbacks of upfront means testing. In an attempt to limit aid to Americans in financial distress, the $1200 relief payments phase out at five cents on the dollar once income exceeds a certain threshold: $75,000 for childless, single individuals and $150,000 for married couples. The catch is that for most Americans, their 2019 or 2018 incomes will determine whether their relief payments phase out – and therefore how much aid they receive now, in 2020. In a world where 22 million Americans have filed for unemployment in the past month, looking to one- or two-year-old data to determine need is meaningless. Many Americans whose pre-pandemic incomes exceeded the threshold are now struggling to make mortgage payments and put food on the table, but will receive little or no direct cash aid under the CARES Act until April of 2021.
This absurdity magnifies a problem inherent in ex ante means tests. Often, one’s past financial status does not tell us much about an individual’s current needs. This is particularly true when incomes fluctuate from period to period, as is the case with many lower-income workers. Imagine a fast food worker and SNAP beneficiary whose schedule changes month to month, if not week to week. If she is lucky enough to work a lot in November, she may see her December SNAP benefits reduced. But what if her boss gives her fewer shifts in December? Both her paycheck and her SNAP benefits will be lower in December, leaving her struggling.
The solution is to send cash to all Americans and recapture the transfer through the income tax system. Mathematically, an ex post tax is exactly the same as an ex ante phase-out. Consider the CARES Act. A childless single individual with an income of $85,000 is $10,000 over the threshold, reducing her benefit by $500 and netting her $700. Giving her a check for $1200 and taxing her an additional 5% on income above $75,000 also nets her $700. As a practical matter, however, an ex post tax is more accurate because hindsight is 20/20. Lesson two from the CARES Act is that universal payments offset by taxes are superior to ex ante means testing.
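The equivalence claimed here can be sketched in Python using the Act’s phase-out parameters for a childless single filer. The 5% “surtax” is our illustrative framing of the ex post alternative, not a provision of the Act:

```python
# Sketch: the CARES Act's ex ante phase-out vs. an equivalent ex post
# clawback tax, for a childless single filer. The 5% surtax framing is
# an illustrative construction, not a provision of the Act.

BENEFIT = 1200
THRESHOLD = 75_000
RATE = 0.05  # benefit falls five cents per dollar of income above the threshold

def ex_ante_payment(income):
    """Relief check after the upfront phase-out."""
    return max(0, BENEFIT - RATE * max(0, income - THRESHOLD))

def ex_post_net(income):
    """Full $1,200 check now, clawed back later via a 5% tax above the threshold."""
    clawback = min(BENEFIT, RATE * max(0, income - THRESHOLD))
    return BENEFIT - clawback

# The two schemes net out identically at every income level.
for income in (60_000, 85_000, 99_000, 120_000):
    assert ex_ante_payment(income) == ex_post_net(income)

print(ex_ante_payment(85_000))  # 700.0 -- the worked example in the text
```

The practical difference is only *when* income is measured: the ex post version can use actual 2020 income rather than a one- or two-year-old tax return.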
Provide Regular Payments
Third, the CARES Act provides one lump-sum payment, leaving struggling Americans wondering whether Congress will act again. This is a missed opportunity: studies show that families receiving SNAP benefits face challenges planning for even a month at a time. Lesson three is that guaranteed monthly or bi-weekly payments – as a true UBI would provide – would help households plan and provide some peace of mind amidst this uncertainty.
Provide Equal Payments to Children and Adults
Finally, the CARES Act provides a smaller benefit to children than to adults. This is nonsensical. A single parent with two children faces greater hardship than a married couple with one child, as she has the same number of mouths to feed with fewer earners. Further, social science evidence suggests that augmenting family income has positive long-run consequences for children. Lesson four from the CARES Act is that the empirical case for a UBI is strongest for families with children.
It’s Better to Be Overly, not Underly, Generous
The Act’s direct cash payments are a step in the right direction. But they demonstrate that not all cash assistance plans are created equal. Uniform and periodic payments to all – regardless of age and one’s relationship to the workforce – is the best way to protect Americans’ economic health, pandemic or not. This is not the time to be stingy or moralistic in our assistance. Better to err on the side of being overly generous now, especially when we can correct that error later through the tax system. Errors that result in withholding aid from those who need it, alas, might not be so easy to correct.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Thomas W. Hazlett (Hugh H. Macaulay Endowed Professor of Economics, John E. Walker Department of Economics, Clemson University).]
The brutal toll of the coronavirus pandemic has delivered dramatic public policies. The United States has closed institutions, banned crowds, postponed non-emergency medical procedures and instituted social distancing. All to “flatten the curve” of illness. The measures are expensive, but there is no obvious way to better save lives.
There is evidence that, even without the antivirals or vaccines we hope come soon, we are limiting the spread of COVID-19. Daily death totals for the world appear to be leveling; the most severely impacted countries, Italy and Spain, are seeing declines; the top U.S. hotspot, New York, appears to be peaking (and net new coronavirus hospital admissions fell substantially yesterday). I hope that, looking back, these inferences look reasonable.
But of course I do. Is that rational introspection, or confirmation bias? To try to know, we should look about to see how others are addressing this challenge, and how well they are doing. There are experiments being run, in real time on actual economies, and diversity of results is one of the few blessings conveyed by our coronavirus demon.
Differing approaches to mitigating externalities around the world
It strikes many as entirely off-topic to discuss the efficiency of our measures, as though only the most expensive, draconian remedies work. There is a tendency to stress how little room for choice exists. Exhortation seems to be the strategy. No doubt, we are confronted by a classic “public good” challenge, where individuals may impose costs on others – not intentionally, but perhaps through actions that are short-sighted. If a neighbor fails to take “due care,” they needlessly endanger others. To overcome such free riding, we “rally ’round the flag” to condemn anti-social behavior. That is a community survival trait.
And entirely compatible with the pursuit of efficient rules. Shuttering the marketplace and freezing personal mobility impose harsh hardships; they are, unsurprisingly, resisted. It is stunning how rapidly our Conventional Wisdom has changed, but as recently as January 29, N.Y. Times tech columnist Farhad Manjoo warned us to slow down, to “Beware the Pandemic Panic.” He echoed the World Health Organization’s view that the threat was modest and that we ought to focus on “not the illness itself but the amped-up, ill-considered way our frightened world might respond to it.” (See Jonathan Tobin’s nice overview of the errors made, left and right, in the run-up to the lock-down. It notes Manjoo’s reversal in the Times on Feb. 26.)
When the disease seemed less, we were reluctant to impose costs; as the threat loomed larger, we rushed to make up for lost time. We now pay the price for acting late, but without perfect foresight – our perennial state – that insight does not much help us today or prep us for tomorrow. Keen observation of more efficient ways, and robust public discussion, will.
Sweden has adopted the hygiene and separation practices familiar to Americans. But the government has stopped short of the mandates imposed elsewhere. While college courses have moved to the Internet, Sweden has not closed schools for students 16 and under. Bars and restaurants remain open, with gatherings of up to 50 approved (the US President has asked that crowds be kept to 10 or fewer). Life seems almost normal to many – Americans might pay a ton for that. Still, substantial macroeconomic costs remain. One estimate predicts a 4% decline in 2020 GDP, beating expectations for Europe but similar to U.S. forecasts (see Goldman Sachs’ March 26 report, projecting 2020 GDP growth of -3.8% for the U.S. and -9% for European markets). Alas, the Swedish fatality rate, population adjusted, is higher than that of its Scandinavian peers and (as of April 7) about one-half higher than the U.S. rate. See Table.
COVID-19 Fatality Rates per Million Population, Selected Countries (4.7.20)
The Czech Republic – with a much lower COVID-19 mortality rate – innovated. The Czechs imposed the standard hygiene and social distancing practices, but added a twist: every person, when in public, is obligated to wear a face mask. It need not be medical grade. This sidestep not only spares supplies for crucial medical professionals, who work in close proximity to patients infected with coronavirus, it has unleashed a popular movement to sew home-made masks. That has jump-started social norms to reduce infections by wearing protective gear. And its simple logic is compelling: you protect me, I protect you.
Of course, the masks do not block one hundred percent of potential transmissions – perhaps no more than two-thirds, under favorable conditions, according to a 2013 study in the journal Disaster Medicine and Public Health Preparedness, “Testing Homemade Masks for Efficacy: Would They Protect in an Influenza Pandemic?” The findings, showing filtering effectiveness for masks made from different materials, are given in the Table below. They suggest that (a) no mask is perfectly effective in blocking all tiny particles, including infectious biological matter; (b) surgical masks are relatively effective; (c) homemade masks are less effective, but much better than nothing – and should be used in conjunction with other practices (distancing, hygiene, etc.). Where surgical masks are too expensive or unavailable, cotton face masks (sewn with multiple layers) or vacuum bags (if you can snag them) are useful substitutes. Their role is to suppress rates of disease spread, bending the curve and managing the pandemic.
The decision to encourage and then require masks (with an order effective midnight March 18) led to an enthusiastic campaign to make stylish, personalized gear – soon posted on Insta. It channeled the desire of citizens to both battle coronavirus and yet to continue living their lives. Mask wearing then further served as a reminder to observe additional rules of separation, while discouraging people from touching their face. A video on the virus went viral. It’s beautifully logical and upbeat, as global emergency crisis responses go. Judge for yourself.
No doubt more research should be performed; an entire industry of PhD theses from epidemiology to sociology to public health may homestead this topic in the post-Coronavirus world. But we also must pay attention to our experimental results in real time. The Demonstration Effect is, and should be, powerful. Countries such as Slovakia and Belgium saw the Czech Republic’s approach, relative openness (low-cost mitigation), and superior survival rates, and quickly adopted similar policies.
The U.S. rationale for discouraging mask use
U.S. policy makers initially shielded themselves from the face mask question by issuing the “institutional no.” The American public was instructed by the Centers for Disease Control and Prevention (CDC) to refrain from wearing masks save in the instance where they were infected. There were three reasons. First, that wearing masks would actually harm healthy people not infected with COVID-19. Second, that the masks were ineffective in shielding against small aerosol particles, particularly since non-professionals would not wear them properly. Third, that the limited supply of high-quality, medical grade face masks should be reserved for doctors, nurses, and other health care workers who, by the nature of their tasks, could not observe “social distancing” or otherwise avoid infected COVID-19 patients.
The third rationale had an advantage over the first two as not being false. But by the logic used to prioritize medical professional mask protections, buttressed by a modicum of public education, the rest of us would be likely to benefit, as well. The CDC was arguing magnitude and rankings (OK), and then configuring the effectiveness arguments to justify the rankings (not OK). It was a blunder, squandering precious time and undercutting agency credibility. Moreover, the administrative edict pretended to be scientific when it was crafting (bad) economics. The Czechs and many Asian countries discovered (as disaster preparedness research had already found) that ad hoc masks work reasonably cheaply, quickly and well, and that the population can be protected to a non-trivial degree by producing their own. No need to steal N-95 respirators from frontline warriors; we’ll just make more (lower quality) protection devices.
Tip your cap to the Czech Republic. The story busted out. On March 30, The Guardian wrote: “Czechs get to work making masks after government decree: Czech Republic and Slovakia are only countries in Europe to make coronavirus mask-wearing mandatory.” By April 2, Dr. Ronald DePinho, a former president of M.D. Anderson, was editorializing: “Every American should wear a face mask to defeat Covid-19.” His empirical take was informed by a widely tweeted graphic showing fatality rates across countries – in general, the mask-wearing societies of Asia (Japan, South Korea, Singapore, Taiwan) were seen to be doing relatively well in limiting the COVID-19 carnage.
Face Masks As Pandemic Defense (4.2.20) Source: STAT
Human experiments are often considered cruel. But when they are run, let us learn from them.
On April 3, President Trump announced that the CDC now recommends that the general population wear non-medical masks—meaning fabric that covers one’s nose and mouth, like bandanas or cut T-shirts—when they must leave their homes to go to places like the grocery store. The measure is voluntary. The mayors of Los Angeles and New York City have already made similar recommendations. In other parts of the country, it’s not voluntary: for example, officials in Laredo, Texas have said they can fine people up to $1,000 when residents do not wear a face covering in public.
Kudos to the agency. Mistakes will be made, and it’s a great idea to fix them. But it is also instructive to see where the policy was on March 4, when TIME ran a story on how the CDC was having to combat widespread public demand for masks. There had been a retail run on masks, wiping out inventories at stores, Amazon and everywhere else; many healthy people were ignoring the request not to mask up in public; celebrities like Gwyneth Paltrow and Bella Hadid were posting their pix online. And here’s the chilling part, and it’s sadly symptomatic: the magazine fully took the agency’s side on the science and had no trouble finding additional expert authority to suppress the urge to investigate. Instead, the issue was settled by decree and then embellished as factual necessity:
“It seems kind of intuitively obvious that if you put something—whether it’s a scarf or a mask—in front of your nose and mouth, that will filter out some of these viruses that are floating around out there,” says Dr. William Schaffner, professor of medicine in the division of infectious diseases at Vanderbilt University. The only problem: that’s not likely to be effective against respiratory illnesses like the flu and COVID-19. If it were, “the CDC would have recommended it years ago,” he says. “It doesn’t, because it makes science-based recommendations.”
About that, TIME wrote: “The science, according to the CDC, says that surgical masks won’t stop the wearer from inhaling small airborne particles, which can cause infection. Nor do these masks form a snug seal around the face.” The harm was not simply a run on supplies that would deprive health workers of necessary protective gear.
“Seriously people- STOP BUYING MASKS!” tweeted Dr. Jerome Adams, the U.S. Surgeon General, on Feb. 29. “They are NOT effective in preventing general public…” Adams said that wearing a mask can even increase your risk of getting the virus.
This extended into the psychological realm:
Lynn Bufka, a clinical psychologist and senior director for practice, research and policy at the American Psychological Association, suspects that people are clinging to masks for the same reason they knock on wood or avoid walking under ladders. “Even if experts are saying it’s really not going to make a difference, a little [part of] people’s brains is thinking, well, it’s not going to hurt. Maybe it’ll cut my risk just a little bit, so it’s worth it to wear a mask,” she says. In that sense, wearing a mask is a “superstitious behavior”…
Earth to Experts: superstitions run in multiple directions. See: the current view of the CDC as a correction of their previous one. And note the new TIME, quoting quite a different expert view on April 6.
“Now with the realization that there are individuals who are asymptomatic, and those asymptomatic individuals can spread infection, it’s hard to make the recommendation that only ill individuals wear masks in the community setting for protection, because it’s not clear who is ill and who is not,” says Allison Aiello, a professor of epidemiology at the University of North Carolina at Chapel Hill’s Gillings School of Global Public Health, who has researched the efficacy of masks.
Another conventional view held that COVID-19 spread required person-to-person contact – touching or close-in exchange (via coughing, breathing). But now it appears that the virus hangs around in the air, and that dosing (how much you inhale) matters greatly. A well person who encounters a passing microbe might catch a mild case of COVID-19, whereas sitting next to an infected person for five hours on a bus or airplane may trigger a severe infection. In this environment, the logic for masks swells.
Scientific inquiry continues. The World Health Organization posted (March 27) that there was insufficient evidence to say whether COVID-19 travels airborne for any distance. What is the action take-away? Nature (April 2) puts the state of the debate like this:
[E]xperts that work on airborne respiratory illnesses and aerosols say that gathering unequivocal evidence for airborne transmission could take years and cost lives. We shouldn’t “let perfect be the enemy of convincing”, says Michael Osterholm, an infectious-disease epidemiologist at the University of Minnesota in Minneapolis. “In the mind of scientists working on this, there’s absolutely no doubt that the virus spreads in the air,” says aerosol scientist Lidia Morawska at the Queensland University of Technology in Brisbane, Australia. “This is a no-brainer.”
Nature notes that those working in the area recommended masks as a policy response.
Challenge the orthodoxy of the expert class, encourage intellectual diversity
Challenging orthodoxy is key to science; how else are errors uncovered or innovations discovered? On the frontiers there cannot be utter consensus. If there is, the thinkers have yet to probe nearly far enough. Safi Bahcall, in his remarkable Loonshots: How to Nurture the Crazy Ideas That Win Wars, Cure Diseases, and Transform Industries (2019), quotes Nobel Laureate in Medicine Sir James Black: “it’s not a good drug unless it’s been killed at least three times” (45). The history of progress is pocked with failure, dispute, and persistence. Only then does a great breakthrough survive the Three Deaths.
Professor Zeynep Tufekci, of Information Science at the University of North Carolina, came to see her research as suggesting that lives could be saved by the mass-market adoption of simple, non-medical masks in the United States. She broke the ice on the N.Y. Times op-ed page with her March 17 gem: “Why Telling People They Don’t Need Masks Backfired: To help manage the shortage, the authorities sent a message that made them untrustworthy.”
She put pieces of the puzzle together and made rational comparisons:
[P]laces like Hong Kong and Taiwan that jumped to action early with social distancing and universal mask wearing have the pandemic under much greater control, despite having significant travel from mainland China. Hong Kong health officials credit universal mask wearing as part of the solution and recommend universal mask wearing. In fact, Taiwan responded to the coronavirus by immediately ramping up mask production.
I’d wager Zeynep deserves a promotion, if not a Medal of Freedom, because the fear is that this sort of commentary in the public forum will spark the opposite reaction. She believed, based on her scholarly study, that mass mask adoption might save lives, even at the cost of her own career, academically speaking. In a nifty interview with tech explainer Ben Thompson, published April 2 on Stratechery, Zeynep confides how her thinking progressed.
I watched somewhat flabbergasted over the next few months as the recommendation not to wear masks got harder and harder. Instead of getting softer as the epidemic became a pandemic and saying, well, we should see, we should reevaluate, I started seeing all these messages, like people wouldn’t know how to wear masks and they would infect themselves more and also there is a big shortage of masks, and that all came together in a very frustrating moment for me. The idea that people wouldn’t figure out how to wear a surgical mask or N95s, which are those medical grade masks that we’re now reserving only for hospitals and medical workers, is kind of ridiculous. People don’t wash their hands correctly either, right? So when the pandemic hit, we have songs to get people to wash them for the right amount and we teach them how, people can obviously learn how to wear masks correctly. And as you know, people in Hong Kong can do it, in Taiwan can do it.
But I wanted somebody else from the medical fields to write this. I wanted an epidemiologist, I wanted a virologist to come out and say, look, all these health authorities in Hong Kong and Taiwan, in South Korea, in Japan where it’s kind of customary, there are all these places with lower spread… You don’t even know if you’re sick, so the recommendation of wear this if you’re sick made no sense.
So here’s how I came to write it, even though it wasn’t my place to write this, and I really kind of dragged my foot a little bit, because… I’m not an epidemiologist. I don’t have a degree in virology, I’m not the person: I wrote it because none of the doctors could write it…. I said we have to talk about this, we have to change this conversation… So I wrote the piece pretty much making the case against what was then the CDC and the World Health Organization guidelines, and I braced for the biggest backlash of my life… and I thought, I’m going to get in so much trouble over this, I’m going to be canceled, I’m going to have the huge backlash… I thought this might be the end of my writing career as I knew it… but I just have to say this, I have to say my truth.
I hope Zeynep remains asymptomatic. No – actually, I hope she is a star. If she survives and flourishes, maybe diversity of thought, and alert empirical analysis, comparing realistic options during real-time social stress, can make a splash. If so, I hope it becomes airborne.
The term is attributed to Amazon CEO Jeff Bezos in Brad Stone, The Everything Store: Jeff Bezos and the Age of Amazon (2013). It refers to the tendency of any organization, particularly large and complicated ones, to reflexively dismiss new ideas and their sources. It is a twist on the classic NIH (Not Invented Here) problem.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Daniel Takash (regulatory policy fellow at the Niskanen Center. He is the manager of Niskanen’s Captured Economy Project, https://capturedeconomy.com, and you can follow him @danieltakash or @capturedecon).]
The pharmaceutical industry should be one of the most well-regarded industries in America. It helps bring drugs to market that improve, and often save, people’s lives. Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular – trailing behind fossil fuels, lawyers, and even the federal government. The opioid crisis dominated the headlines for the past few years, but the high price of drugs is a top-of-mind issue that generates significant animosity toward the pharmaceutical industry. The effects of high drug prices are felt not just at every trip to the pharmacy, but also by those who are priced out of life-saving treatments. Many Americans simply can’t afford what their doctors prescribe. The pharmaceutical industry helps save lives, but it’s also been credibly accused of anticompetitive behavior – not just by generic manufacturers, but even by other brand manufacturers.
These extraordinary times are an opportunity to right the ship. AbbVie, roundly criticized for building a patent thicket around Humira, has donated its patent rights to a promising COVID-19 treatment. This is to be celebrated – yet pharma’s bad reputation is defined by its worst behaviors and the frequent apologetics for overusing the patent system. Hopefully corporate social responsibility will prevail, and such abuses will cease in the future.
The most effective long-term treatment for COVID-19 will be a vaccine. We also need drugs to treat those afflicted with COVID-19 to improve recovery and lower mortality rates for those that get sick before a vaccine is developed and widely available. This requires rapid drug development through effective public-private partnerships to bring these treatments to market.
Without a doubt, these solutions will come from the pharmaceutical industry. Increased funding for the National Institutes of Health, nonprofit research institutions, and private pharmaceutical researchers is likely needed to help accelerate the development of these treatments. But we must be careful to ensure that whatever necessary upfront public support is given to these entities results in a fair trade-off for Americans. The U.S. taxpayer is one of the largest investors in early- to mid-stage drug research, and we need to make sure that we are a good investor.
Basic research into the costs of drug development, especially when taxpayer subsidies are involved, is a necessary start. This is a feature of the We PAID Act, introduced by Senators Rick Scott (R-FL) and Chris Van Hollen (D-MD), which requires the Department of Health and Human Services to enter into a contract with the National Academy of Medicine to determine the reasonable price of drugs developed with taxpayer support. This reasonable price would include a suitable reward for the private companies that did the important work of finishing drug development and gaining FDA approval. This is important, as setting a price too low would reduce investment in indispensable research and development. But this must be balanced against the risk of using patents to charge prices above and beyond those necessary to finance research, development, and commercialization.
A little sunshine can go a long way. We should trust that pharmaceutical companies will develop a vaccine and treatments for coronavirus, but we must also verify these are affordable and accessible through public scrutiny. Take the drug manufacturer Gilead Sciences’ about-face on its application for orphan drug status for the possible COVID-19 treatment remdesivir. Remdesivir, developed in part with public funds and already covered by three Gilead patents, technically satisfied the definition of “orphan drug,” as COVID-19 (at the time of the application) afflicted fewer than 200,000 patients. In a pandemic that could infect tens of millions of Americans, this designation is obviously absurd, and public outcry led Gilead to ask the FDA to rescind the application. Gilead claimed it sought the designation to speed up FDA review, and that might be true. Regardless, public attention meant that the FDA will give remdesivir expedited review without Gilead needing a designation that looks unfair to the American people.
The success of this isolated effort is absolutely worth celebrating. But we need more research to better comprehend the pharmaceutical industry’s needs, and this is just what the study provisions of We PAID would provide.
A thorough analysis provided under We PAID is the best way for us to fully understand just how much support the pharmaceutical industry needs, and just how successful it has been thus far. The NIH, one of the major sources of publicly funded research, invests about $41.7 billion annually in medical research. We need to better understand how these efforts link up, and how the torch is passed from public to private efforts.
Patents are essential to the functioning of the pharmaceutical industry by incentivizing drug development through temporary periods of exclusivity. But it is equally essential, in light of the considerable investment already made by taxpayers in drug research and development, to make sure we understand the effects of these incentives and calibrate them to balance the interests of patients and pharmaceutical companies. Most drugs require research funding from both public and private sources as well as patent protection. And the U.S. is one of the biggest investors of drug research worldwide (even compared to drug companies), yet Americans pay the highest prices in the world. Are these prices justified, and can we improve patent policy to bring these costs down without harming innovation?
Beyond a thorough analysis of drug pricing, what makes We PAID one of the most promising solutions to the problem of excessively high drug prices are the accountability mechanisms included. The bill, if made law, would establish a Drug Access and Affordability Committee. The Committee would use the methodology from the joint HHS and NAM study to determine a reasonable price for affected drugs (around 20 percent of drugs currently on the market, if the bill were law today). Any companies that price drugs granted exclusivity by a patent above the reasonable price would lose their exclusivity.
This may seem like a price control at first blush, but it isn’t – for two reasons. First, this only applies to drugs developed with taxpayer dollars, which any COVID-19 treatments or cures almost certainly would be, considering the $785 million spent by the NIH on coronavirus research since 2002. It’s an accountability mechanism that would ensure the government is getting its money’s worth. This tool is akin to ensuring that a government contractor is not charging more than would be reasonable, lest it lose its contract.
Second, it is even less stringent than pulling a contract with a private firm overcharging the government for the services provided. Why? Losing a patent does not mean losing the ability to make a drug, or any other patented invention for that matter. This basic fact is often lost in the patent debate, but it cannot be stressed enough.
If patents functioned as licenses, then every patent expiration would mean another product going off the market. In reality, patent expiration simply means that any other firm may now use the patented design and compete. Even if a firm violated the price regulations included in the bill and lost its patent, it could continue manufacturing the drug. And so could any other firm, bringing down prices for all consumers by opening up market competition.
The We PAID Act could be a dramatic change for the drug industry, and because of that many in Congress may want to first debate the particulars of the bill. This is fine, assuming this promising legislation isn’t watered down beyond recognition. But any objections to the Drug Access and Affordability Committee and reasonable pricing regulations aren’t an excuse not to, at a bare minimum, pass the study included in the bill as part of future coronavirus packages, if not sooner. It is an inexpensive way to get good information in a single, reputable source that would allow us to shape good policy.
Good information is needed for good policy. When the government lays the groundwork for future innovations by financing research and development, it can be compared to a venture capitalist providing the financing necessary for an innovative product or service. But just like in the private sector, the government should know what it’s getting for its (read: taxpayers’) money and make recipients of such funding accountable to investors.
The COVID-19 outbreak will be the most pressing issue for the foreseeable future, but determining how pharmaceuticals developed with public research are priced is necessary in good times and bad. The final prices for these important drugs might be fair, but the public will never know without a trusted source examining this information. Trust, but verify. The pharmaceutical industry’s efforts in fighting the COVID-19 pandemic might be the first step to improving Americans’ relationship with the industry. But we need good information to make that happen. Americans need to know when they are being treated fairly, and that policymakers are able to protect them when they are treated unfairly. The government needs to become a better-informed investor, and that won’t happen without something like the We PAID Act.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Ben Sperry (Associate Director, Legal Research, International Center for Law & Economics).]
The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions.
Nobel Prize-winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:
We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.
Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.
Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific Twitter thread):
Consumers ration how much they really need;
Producers respond to the rising prices by ramping up supply and distributors make more available; and
Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain.
Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring.
Rationing by consumers
During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages.
Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods.
If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.
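The rationing logic above can be made concrete with a toy constant-elasticity demand function. The numbers and elasticities here are illustrative assumptions, not data from the post: when the price doubles, the price-elastic (casual) buyer cuts back sharply while the price-inelastic buyer, who needs the good most, barely does.

```python
# Stylized constant-elasticity demand: Q = Q0 * (P/P0) ** (-elasticity).
# All parameter values are invented for illustration.
def quantity_demanded(price, base_price=1.0, base_qty=10.0, elasticity=1.0):
    return base_qty * (price / base_price) ** (-elasticity)

price_doubled = 2.0
casual = quantity_demanded(price_doubled, elasticity=2.0)  # elastic buyer
needy = quantity_demanded(price_doubled, elasticity=0.2)   # inelastic buyer
print(casual, needy)  # elastic buyer cuts to 2.5 units; inelastic keeps ~8.7
```

The higher price thus frees up supply precisely for those with the most urgent need, which is the point the paragraph above makes in prose.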
According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages.
For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)
The role of “price gouging” laws and social norms
A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.
But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.
Under these laws, typically retailers may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution.
Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.
Normally, third-party sales sites have much more dynamic pricing than brick-and-mortar outlets, which just tend to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer on Amazon for prices much higher than retail before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party “price gouging” on their sites.
But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first-party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain.
The desire to help the poor who cannot afford higher priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.
Increased production and distribution
During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount.
Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product to guarantee that producers do not undersupply the product (leaving more people without than need the good), or oversupply the product (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.
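That feedback loop can be sketched as a simple price-adjustment (tatonnement) process. The linear demand and supply functions below are invented for illustration, not an empirical model: the price rises while demand exceeds supply, and production follows the price signal until the shortage closes.

```python
# Stylized market: buyers ration as price rises, producers ramp up.
# Functional forms and constants are illustrative assumptions.
def demand(price):
    return max(0.0, 100.0 - 10.0 * price)

def supply(price):
    return 5.0 * price

price, step = 2.0, 0.01
for _ in range(10_000):  # simple tatonnement: price chases excess demand
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:
        break
    price += step * excess

# Converges near the market-clearing price where demand equals supply.
print(round(price, 3))
```

At the underpriced starting point there is a shortage (demand of 80 units against supply of 10); the adjustment process eliminates it without any central planner knowing the demand curve, which is Hayek’s point quoted earlier.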
The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products which are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.
Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.
For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.
These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis.
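The feedback loop described above can be made concrete with a toy model. The sketch below uses hypothetical linear demand and supply curves and a tatonnement-style adjustment rule (price rises in proportion to the shortage); the specific numbers and functions are illustrative assumptions, not data from any real market.

```python
# A minimal sketch (hypothetical numbers) of the price-signal feedback loop:
# excess demand pushes the price up, the higher price draws out more supply
# and trims quantity demanded, and the shortage shrinks toward zero.

def demand(price):
    # Hypothetical linear demand: buyers want less as price rises.
    return max(0.0, 100.0 - 10.0 * price)

def supply(price):
    # Hypothetical linear supply: producers offer more as price rises.
    return 20.0 * price

price = 2.0  # crisis hits: at the old price, demand far exceeds supply
for step in range(50):
    shortage = demand(price) - supply(price)
    if abs(shortage) < 1e-6:
        break
    # Tatonnement-style adjustment: price moves in proportion to the shortage.
    price += 0.01 * shortage

# With demand 100 - 10p and supply 20p, the market clears at p = 100/30.
print(round(price, 2), round(supply(price), 1))  # -> 3.33 66.7
```

A price cap works in this model exactly as the post describes: freeze `price` at 2.0 and the shortage of 40 units simply persists, because neither side of the market receives the signal to adjust.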
Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.
Amazon is now prioritizing the shipment of high-demand goods like household staples and medical supplies in its fulfillment services.
Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use.
While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets.
If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals.
Moves to create new "price-gouging" laws and to enforce existing ones are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove "price-gougers," as well. These policies should be resisted. Not only will these moves fail to prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher.
Prices are an important source of information not only for consumers, but for producers, distributors, and entrepreneurs. Short-circuiting this signal will only be to the detriment of society.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Corbin Barthold, (Senior Litigation Counsel, Washington Legal Foundation).]
The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.
“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.
Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.
But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”
This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to "reduce and relieve unemployment," "improve standards of labor," "avoid undue restriction of production," "induce and maintain united action of labor and management," "organiz[e] . . . co-operative action among trade groups," and "otherwise rehabilitate industry." The law empowered trade groups to create their own "codes of fair competition," a privilege they quite predictably used to form anticompetitive cartels.
At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.
The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.
When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.
When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”
It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”
In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.
True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.
COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.
Today the European Commission launched its latest salvo against Google, issuing a decision in its three-year antitrust investigation into the company’s agreements for distribution of the Android mobile operating system. The massive fine levied by the Commission will dominate the headlines, but the underlying legal theory and proposed remedies are just as notable — and just as problematic.
The nirvana fallacy
It is sometimes said that the most important question in all of economics is “compared to what?” UCLA economist Harold Demsetz — one of the most important regulatory economists of the past century — coined the term “nirvana fallacy” to critique would-be regulators’ tendency to compare messy, real-world economic circumstances to idealized alternatives, and to justify policies on the basis of the discrepancy between them. Wishful thinking, in other words.
The Commission’s Android decision falls prey to the nirvana fallacy. It conjures a world in which Google offers its Android operating system on unrealistic terms, prohibits it from doing otherwise, and neglects the actual consequences of such a demand.
The idea at the core of the Commission’s decision is that by making its own services (especially Google Search and Google Play Store) easier to access than competing services on Android devices, Google has effectively foreclosed rivals from effective competition. In order to correct that claimed defect, the Commission demands that Google refrain from engaging in practices that favor its own products in its Android licensing agreements:
At a minimum, Google has to stop and to not re-engage in any of the three types of practices. The decision also requires Google to refrain from any measure that has the same or an equivalent object or effect as these practices.
The basic theory is straightforward enough, but its application here reflects a troubling departure from the underlying economics and a romanticized embrace of industrial policy that is unsupported by the realities of the market.
In a recent interview, European Commission competition chief, Margrethe Vestager, offered a revealing insight into her thinking about her oversight of digital platforms, and perhaps the economy in general: “My concern is more about whether we get the right choices,” she said. Asked about Facebook, for example, she specified exactly what she thinks the “right” choice looks like: “I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.”
Some consumers may well be sympathetic with her preference (and even share her specific vision of what Facebook should offer them). But what if competition doesn't result in our — or, more to the point, Margrethe Vestager's — preferred outcomes? Should competition policy nevertheless enact the idiosyncratic consumer preferences of a particular regulator? What if offering consumers the "right" choices comes at the expense of other things they value, like innovation, product quality, or price? And, if so, can antitrust enforcers actually engineer a better world built around these preferences?
Android’s alleged foreclosure… that doesn’t really foreclose anything
The Commission's primary concern is with the terms of Google's deal: In exchange for royalty-free access to Android and a set of core, Android-specific applications and services (like Google Search and Google Maps), Google imposes a few contractual conditions.
Google allows manufacturers to use the Android platform — in which the company has invested (and continues to invest) billions of dollars — for free. It does not require device makers to include any of its core, Google-branded features. But if a manufacturer does decide to use any of them, it must include all of them, and make Google Search the device default. In another (much smaller) set of agreements, Google also offers device makers a small share of its revenue from Search if they agree to pre-install only Google Search on their devices (although users remain free to download and install any competing services they wish).
Essentially, that’s it. Google doesn’t allow device makers to pick and choose between parts of the ecosystem of Google products, free-riding on Google’s brand and investments. But manufacturers are free to use the Android platform and to develop their own competing brand built upon Google’s technology.
Other apps may be installed in addition to Google’s core apps. Google Search need not be the exclusive search service, but it must be offered out of the box as the default. Google Play and Chrome must be made available to users, but other app stores and browsers may be pre-installed and even offered as the default. And device makers who choose to do so may share in Search revenue by pre-installing Google Search exclusively — but users can and do install a different search service.
Alternatives to all of Google's services (including Search) abound on the Android platform. It's trivial both to install them and to set them as the default. Meanwhile, device makers regularly choose to offer these apps alongside Google's services, and some, like Samsung, have developed entire customized app suites of their own. Still others, like Amazon, pre-install no Google apps and use Android without any of these constraints (indeed, Amazon's Google-free tablets are regularly ranked among the best-rated and most popular in Europe).
By contrast, Apple bundles its operating system with its devices, bypasses third-party device makers entirely, and offers consumers access to its operating system only if they pay (lavishly) for one of the very limited number of devices the company offers. It is perhaps not surprising — although it is enlightening — that Apple earns more revenue in an average quarter from iPhone sales than Google is reported to have earned in total from Android since it began offering it in 2008.
Reality — and the limits it imposes on efforts to manufacture nirvana
The logic behind Google’s approach to Android is obvious: It is the extension of Google’s “advertisers pay” platform strategy to mobile. Rather than charging device makers (and thus consumers) directly for its services, Google earns its revenue by charging advertisers for targeted access to users via Search. Remove Search from mobile devices and you remove the mechanism by which Google gets paid.
It’s true that most device makers opt to offer Google’s suite of services to European users, and that most users opt to keep Google Search as the default on their devices — that is, indeed, the hoped-for effect, and necessary to ensure that Google earns a return on its investment.
That users often choose to keep using Google services instead of installing alternatives, and that device makers typically choose to engineer their products around the Google ecosystem, isn’t primarily the result of a Google-imposed mandate; it’s the result of consumer preferences for Google’s offerings in lieu of readily available alternatives.
The EU decision against Google appears to imagine a world in which Google will continue to develop Android and allow device makers to use the platform and Google’s services for free, even if the likelihood of recouping its investment is diminished.
The Commission also assessed in detail Google’s arguments that the tying of the Google Search app and Chrome browser were necessary, in particular to allow Google to monetise its investment in Android, and concluded that these arguments were not well founded. Google achieves billions of dollars in annual revenues with the Google Play Store alone, it collects a lot of data that is valuable to Google’s search and advertising business from Android devices, and it would still have benefitted from a significant stream of revenue from search advertising without the restrictions.
But that world in which Google won’t alter its investment decisions based on a government-mandated reduction in its allowable return on investment doesn’t exist; it’s a fanciful Nirvana.
Google’s real alternatives to the status quo are charging for the use of Android, closing the Android platform and distributing it (like Apple) only on a fully integrated basis, or discontinuing Android.
In reality, and compared to these actual alternatives, Google’s restrictions are trivial. Remember, Google doesn’t insist that Google Search be exclusive, only that it benefit from a “leg up” by being pre-installed as the default. And on this thin reed Google finances the development and maintenance of the (free) Android operating system and all of the other (free) apps from which Google otherwise earns little or no revenue.
It's hard to see how consumers, device makers, or app developers would be made better off without Google's restrictions, at least not in the real world, in which the alternative is one of the three manifestly less desirable options mentioned above.
Missing the real competition for the trees
What’s more, while ostensibly aimed at increasing competition, the Commission’s proposed remedy — like the conduct it addresses — doesn’t relate to Google’s most significant competitors at all.
Facebook, Instagram, Firefox, Amazon, Spotify, Yelp, and Yahoo, among many others, are some of the most popular apps on Android phones, including in Europe. They aren’t foreclosed by Google’s Android distribution terms, and it’s even hard to imagine that they would be more popular if only Android phones didn’t come with, say, Google Search pre-installed.
It’s a strange anticompetitive story that has Google allegedly foreclosing insignificant competitors while apparently ignoring its most substantial threats.
The primary challenges Google now faces are from Facebook drawing away the most valuable advertising and Amazon drawing away the most valuable product searches (and increasingly advertising, as well). The fact that Google’s challenged conduct has never shifted in order to target these competitors as their threat emerged, and has had no apparent effect on these competitive dynamics, says all one needs to know about the merits of the Commission’s decision and the value of its proposed remedy.
In reality, as Demsetz suggested, Nirvana cannot be designed by politicians, especially in complex, modern technology markets. Consumers’ best hope for something close — continued innovation, low prices, and voluminous choice — lies in the evolution of markets spurred by consumer demand, not regulators’ efforts to engineer them.