Archives For Law & Economics

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Geoffrey A. Manne is the president and founder of the International Center for Law and Economics.]

I’m delighted to add my comments to the chorus of voices honoring Ajit Pai’s remarkable tenure at the Federal Communications Commission. I’ve known Ajit longer than most. We were classmates in law school … let’s just say “many” years ago. Among the other symposium contributors I know of only one—fellow classmate, Tom Nachbar—who can make a similar claim. I wish I could say this gives me special insight into his motivations, his actions, and the significance of his accomplishments, but really it means only that I have endured his dad jokes and interminable pop-culture references longer than most. 

But I can say this: Ajit has always stood out as a genuinely humble, unfailingly gregarious, relentlessly curious, and remarkably intelligent human being, and he deployed these characteristics to great success at the FCC.   

Ajit’s tenure at the FCC was marked by an abiding appreciation for the importance of competition, both as a guiding principle for new regulations and as a touchstone to determine when to challenge existing ones. As others have noted (and as we have written elsewhere), that approach was reflected significantly in the commission’s Restoring Internet Freedom Order, which made competition—and competition enforcement by the antitrust agencies—the centerpiece of the agency’s approach to net neutrality. But I would argue that perhaps Chairman Pai’s greatest contribution to bringing competition to the forefront of the FCC’s mandate came in his work on media modernization.

Fairly early in his tenure at the commission, Ajit raised concerns with the FCC’s failure to modernize its media-ownership rules. In response to the FCC’s belated effort to initiate the required 2010 and 2014 Quadrennial Reviews of those rules, then-Commissioner Pai noted that the commission had abdicated its responsibility under the statute to promote competition. Not only was the FCC proposing to maintain a host of outdated existing rules, but it was also moving to impose further constraints (through new limitations on the use of Joint Sales Agreements (JSAs)). As Ajit noted, such an approach was antithetical to competition:

In smaller markets, the choice is not between two stations entering into a JSA and those same two stations flourishing while operating completely independently. Rather, the choice is between two stations entering into a JSA and at least one of those stations’ viability being threatened. If stations in these smaller markets are to survive and provide many of the same services as television stations in larger markets, they must cut costs. And JSAs are a vital mechanism for doing that.

The efficiencies created by JSAs are not a luxury in today’s digital age. They are necessary, as local broadcasters face fierce competition for viewers and advertisers.

Under then-Chairman Tom Wheeler, the commission voted to adopt the Quadrennial Review in 2016, issuing rules that largely maintained the status quo and, at best, paid tepid lip service to the massive changes in the competitive landscape. As Ajit wrote in dissent:

The changes to the media marketplace since the FCC adopted the Newspaper-Broadcast Cross-Ownership Rule in 1975 have been revolutionary…. Yet, instead of repealing the Newspaper-Broadcast Cross-Ownership Rule to account for the massive changes in how Americans receive news and information, we cling to it.

And over the near-decade since the FCC last finished a “quadrennial” review, the video marketplace has transformed dramatically…. Yet, instead of loosening the Local Television Ownership Rule to account for the increasing competition to broadcast television stations, we actually tighten that regulation.

And instead of updating the Local Radio Ownership Rule, the Radio-Television Cross-Ownership Rule, and the Dual Network Rule, we merely rubber-stamp them.

The more the media marketplace changes, the more the FCC’s media regulations stay the same.

As Ajit also accurately noted at the time:

Soon, I expect outside parties to deliver us to the denouement: a decisive round of judicial review. I hope that the court that reviews this sad and total abdication of the administrative function finds, once and for all, that our media ownership rules can no longer stay stuck in the 1970s consistent with the Administrative Procedure Act, the Communications Act, and common sense. The regulations discussed above are as timely as “rabbit ears,” and it’s about time they go the way of those relics of the broadcast world. I am hopeful that the intervention of the judicial branch will bring us into the digital age.

And, indeed, just this week the case was argued before the Supreme Court.

In the interim, however, Ajit became Chairman of the FCC. And in his first year in that capacity, he took up a reconsideration of the 2016 Order. This 2017 Order on Reconsideration is the one that finally came before the Supreme Court. 

Consistent with his unwavering commitment to promote media competition—and no longer a minority commissioner shouting into the wind—Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers:

Today we end the 2010/2014 Quadrennial Review proceeding. In doing so, the Commission not only acknowledges the dynamic nature of the media marketplace, but takes concrete steps to update its broadcast ownership rules to reflect reality…. In this Order on Reconsideration, we refuse to ignore the changed landscape and the mandates of Section 202(h), and we deliver on the Commission’s promise to adopt broadcast ownership rules that reflect the present, not the past. Because of our actions today to relax and eliminate outdated rules, broadcasters and local newspapers will at last be given a greater opportunity to compete and thrive in the vibrant and fast-changing media marketplace. And in the end, it is consumers that will benefit, as broadcast stations and newspapers—those media outlets most committed to serving their local communities—will be better able to invest in local news and public interest programming and improve their overall service to those communities.

Ajit’s approach was certainly deregulatory. But more importantly, it was realistic, well-reasoned, and responsive to changing economic circumstances. Unlike most of his predecessors, Ajit was unwilling to accede to the torpor of repeated judicial remands (on dubious legal grounds, as we noted in our amicus brief urging the Court to grant certiorari in the case), which had permitted wildly outdated rules to persist in the face of massive and obvious economic change.

Like Ajit, I am not one to advocate regulatory action lightly, especially in the (all-too-rare) face of judicial review that suggests an agency has exceeded its discretion. But in this case, the need for dramatic rule change—here, to deregulate—was undeniable. The only abuse of discretion was on the part of the court, not the agency. As we put it in our amicus brief:

[T]he panel vacated these vital reforms based on mere speculation that they would hinder minority and female ownership, rather than grounding its action on any record evidence of such an effect. In fact, the 2017 Reconsideration Order makes clear that the FCC found no evidence in the record supporting the court’s speculative concern.

…In rejecting the FCC’s stated reasons for repealing or modifying the rules, absent any evidence in the record to the contrary, the panel substituted its own speculative concerns for the judgment of the FCC, notwithstanding the FCC’s decades of experience regulating the broadcast and newspaper industries. By so doing, the panel exceeded the bounds of its judicial review powers under the APA.

Key to Ajit’s conclusion that competition in local media markets could be furthered by permitting more concentration was his awareness that the relevant market for analysis couldn’t be limited to traditional media outlets like broadcasters and newspapers; it must include the likes of cable networks, streaming video providers, and social-media platforms, as well. As Ajit put it in a recent speech:

The problem is a fundamental refusal to grapple with today’s marketplace: what the service market is, who the competitors are, and the like. When assessing competition, some in Washington are so obsessed with the numerator, so to speak—the size of a particular company, for instance—that they’ve completely ignored the explosion of the denominator—the full range of alternatives in media today, many of which didn’t exist a few years ago.

When determining a particular company’s market share, a candid assessment of the denominator should include far more than just broadcast networks or cable channels. From any perspective (economic, legal, or policy), it should include any kinds of media consumption that consumers consider to be substitutes. That could be TV. It could be radio. It could be cable. It could be streaming. It could be social media. It could be gaming. It could be still something else. The touchstone of that denominator should be “what content do people choose today?”, not “what content did people choose in 1975 or 1992, and how can we artificially constrict our inquiry today to match that?”

For some reason, this simple and seemingly undeniable conception of the market escapes virtually all critics of Ajit’s media-modernization agenda. Indeed, even Justice Stephen Breyer in this week’s oral argument seemed baffled by the notion that more concentration could entail more competition:

JUSTICE BREYER: I’m thinking of it solely as a — the anti-merger part, in — in anti-merger law, merger law generally, I think, has a theory, and the theory is, beyond a certain point and other things being equal, you have fewer companies in a market, the harder it is to enter, and it’s particularly harder for smaller firms. And, here, smaller firms are heavily correlated or more likely to be correlated with women and minorities. All right?

The opposite view, which is what the FCC has now chosen, is — is they want to move or allow to be moved towards more concentration. So what’s the theory that that wouldn’t hurt the minorities and women or smaller businesses? What’s the theory the opposite way, in other words? I’m not asking for data. I’m asking for a theory.

Of course, as Justice Breyer should surely know—and as I know Ajit Pai knows—counting the number of firms in a market is a horrible way to determine its competitiveness. In this case, the competition from internet media platforms, particularly for advertising dollars, is immense. A regulatory regime that prohibits traditional local-media outlets from forging efficient joint ventures or from obtaining the scale necessary to compete with those platforms does not further competition. Even if such a rule might temporarily result in more media outlets, eventually it would result in no media outlets, other than the large online platforms. The basic theory behind the Reconsideration Order—to answer Justice Breyer—is that outdated government regulation imposes artificial constraints on the ability of local media to adopt the organizational structures necessary to compete. Removing those constraints may not prove a magic bullet that saves local broadcasters and newspapers, but allowing the rules to remain absolutely ensures their demise. 

Ajit’s commitment to furthering competition in telecommunications markets remained steadfast throughout his tenure at the FCC. From opposing restrictive revisions to the agency’s spectrum screen to dissenting from the effort to impose a poorly conceived and retrograde regulatory regime on set-top boxes, to challenging the agency’s abuse of its merger review authority to impose ultra vires regulations, to, of course, rolling back his predecessor’s unsupportable Title II approach to net neutrality—and on virtually every issue in between—Ajit sought at every turn to create a regulatory backdrop conducive to competition.

Tom Wheeler, Pai’s predecessor at the FCC, claimed that his personal mantra was “competition, competition, competition.” His greatest legacy, in that regard, was in turning over the agency to Ajit.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Justin “Gus” Hurwitz is associate professor of law, the Menard Director of the Nebraska Governance and Technology Center, and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska College of Law. He is also director of law & economics programs at the International Center for Law & Economics.]

I was having a conversation recently with a fellow denizen of rural America, discussing how to create opportunities for academics studying the digital divide to get on-the-ground experience with the realities of rural telecommunications. He recounted a story from a telecom policy event in Washington, D.C., from not long ago. The story featured a couple of well-known participants in federal telecom policy as they were talking about how to close the rural digital divide. The punchline of the story was loud speculation from someone in attendance that neither of these bloviating telecom experts had likely ever set foot in a rural town.

And thus it is with most of those who debate and make telecom policy. The technical and business challenges of connecting rural America are different. Rural America needs different things out of its infrastructure than urban America. And the attitudes of both users and those providing service are different here than they are in urban America.

Federal Communications Commission Chairman Ajit Pai—as I get to refer to him in writing for perhaps the last time—gets this. As is well-known, he is a native Kansan. He likely spent more time as chairman driving rural roads than his predecessor spent hobnobbing at political fundraisers. I had the opportunity on one of these trips to visit a Nebraska farm with him. He was constantly running a bit behind schedule on this trip. I can attest that this is because he would wander off with a farmer to look at a combine or talk about how they were using drones to survey their fields. And for those cynics out there—I know there are some who don’t believe in the chairman’s interest in rural America—I can tell you that it meant a lot to those on the ground who had the chance to share their experiences.

Rural Digital Divide Policy on the Ground

Closing the rural digital divide is a defining public-policy challenge of telecommunications. It’s right there in the first sentence of the Communications Act, which established the FCC:

For the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States…a rapid, efficient, Nation-wide, and world-wide wire and radio communication service[.]

Depending on how one defines broadband internet, somewhere between 18 and 35 million Americans lack broadband internet access. No matter how you define it, however, most of those lacking access are in rural America.

It’s not hard to see why. Looking at North Dakota, South Dakota, and Nebraska—three of the five most expensive states in which to connect each household in both the 2015 and 2018 Connect America Fund models—the cost to connect a household to the internet in these states was twice that of connecting a household in the rest of the United States. Given the low density of households in these areas, often less than one household per square mile, there are far fewer economies of scale to allow carriers to amortize these costs across multiple households. Add to that the fact that much of rural America is both less wealthy than more urban areas and often doesn’t value the benefits of high-speed internet as highly. Taken together, the cost of providing service in these areas is much higher, and the demand for it much lower, than in more urban areas.

On the flip side are the carriers and communities working to provide access. The reality in these states is that connecting those who live here is an all-hands-on-deck exercise. I came to Nebraska with the understanding that cable companies offer internet service via cable and telephone companies offer internet service via DSL or fiber. You can imagine my surprise the first time I spoke to a carrier who was using a mix of cable, DSL, fiber, microwave, and Wi-Fi to offer service to a few hundred customers. And you can also imagine my surprise when he started offering advice to another carrier—ostensibly a competitor—about how to get more performance out of some older equipment. Just last week, I was talking to a mid-size carrier about how they are using fixed wireless to offer service to customers outside of their service area as a stopgap until fiber gets out to the customer’s house.

Pai’s Progress Closing the Rural Digital Divide

This brings us to Chairman Pai’s work to close the rural digital divide. Literally on his first day on the job, he announced that his top priority was closing the digital divide. And he backed this up both with the commission’s agenda and his own time and attention.

On Chairman Pai’s watch, the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity. The recently completed RDOF auction promises to connect 10 million rural Americans to the internet; the 5G Fund will ensure that all but the most difficult-to-connect areas of the country will be covered by 5G mobile wireless. These are top-line items on Chairman Pai’s résumé. But it is important to recognize how much of a break they were from the commission’s previous approach to universal service and the digital divide. These funding mechanisms are best characterized by their technology-neutral, reverse-auction-based approach to supporting service deployment.

This is starkly different from prior generations of funding, which focused on subsidizing specific carriers to provide specific levels of service using specific technologies. As I said above, the reality on the ground in rural America is that closing the digital divide is an all-hands-on-deck exercise. It doesn’t matter who is offering service or what technology they are using. Offering 10 Mbps service today over a rusty barbed-wire fence or a fixed wireless antenna hanging off the branch of a tree is better than offering no service or promising fiber that’s going to take two years to get into the ground. And every dollar saved by connecting one house with a lower-cost technology is a dollar that can be used to connect another house that may otherwise have gone unconnected.

The combination of the reverse-auction and technology-neutral approaches has made it possible for the commission to secure commitments to connect a record number of houses with high-speed internet over an incredibly short period of time.

Then there are the chairman’s accomplishments on the spectrum and wireless-internet fronts. Here, he faced resistance both from within the government and from industry. In some of the more absurd episodes of government infighting, he tangled with protectionist interests within the government to free up CBRS and other mid-band spectrum and to authorize new satellite applications. His support of fixed and satellite wireless has the potential to legitimately shake up the telecom industry. I honestly have no idea whether this is going to prove to be a good or bad bet in the long term—whether fixed wireless is going to be able to offer the quality and speed of service its proponents promise, or whether it instead will be a short-run misallocation of capital that will require clawbacks and re-awards of funding in another few years—but the embrace of the technology demonstrated decisive leadership and thawed an ossified, overly limited understanding of what technologies could be used to offer service. Again, as said above, closing the rural digital divide is an all-hands-on-deck problem; we do ourselves no favors by excluding possible solutions from our attempts to address it.

There is more that the commission did under Chairman Pai’s leadership, beyond its obvious orders and actions, to close the rural digital divide. Over the past two years, I have had opportunities to work with academic colleagues from other disciplines on a range of federal funding opportunities for research and development relating to next-generation technologies to support rural telecommunications, such as programs through the National Science Foundation. It has been wonderful to see increased FCC involvement in these programs. Similarly, another of Chairman Pai’s early initiatives was to establish the Broadband Deployment Advisory Committee (BDAC). It has been rare over the past few years for me to be in a meeting with rural stakeholders that didn’t also include at least one member of a BDAC subcommittee. The BDAC process was a valuable way to communicate information up the chain, to make sure that rural stakeholders’ voices were heard in D.C.

But the BDAC process had another important effect: it made clear that there was someone in D.C. who was listening. Chairman Pai said on his first day that closing the digital divide was his top priority. That’s easy to just say. But establishing a committee framework that ensures stakeholders regularly engage with an appointed representative of the FCC, and putting in the time and miles to linger with a farmer and talk about the upcoming harvest season—these things make that priority real.

Rural America certainly hopes that the next chair of the commission will continue to pay us as much attention as Chairman Pai did. But even if they don’t, we can rest with some comfort that he has set in motion efforts—from the next generation of universal service programs to supporting research that will help develop the technologies that will come after—that will serve us well for years to come.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

Admirers of the late Supreme Court Justice Louis Brandeis and other antitrust populists often trace the history of American anti-monopoly sentiments from the Founding Era through the Progressive Era’s passage of laws to fight the scourge of 19th century monopolists. For example, Matt Stoller of the American Economic Liberties Project, both in his book Goliath and in other writings, frames the story of America essentially as a battle between monopolists and anti-monopolists.

According to this reading, it was in the late 20th century that powerful corporations and monied interests ultimately succeeded in winning the battle in favor of monopoly power against antitrust authorities, aided by the scholarship of the “ideological” Chicago school of economics and more moderate law & economics scholars like Herbert Hovenkamp of the University of Pennsylvania Law School.

It is a framing that leaves little room for disagreements about economic theory or evidence. One is either anti-monopoly or pro-monopoly, anti-corporate power or pro-corporate power.

What this story muddles is that the dominant anti-monopoly strain from English common law, which continued well into the late 19th century, was opposed specifically to government-granted monopoly. In contrast, today’s “anti-monopolists” focus myopically on alleged monopolies that often benefit consumers, while largely ignoring monopoly power granted by government. The real monopoly problem antitrust law fails to solve is its immunization of anticompetitive government policies. Recovering the older anti-monopoly tradition would better focus activists today.

Common Law Anti-Monopoly Tradition

Scholars like Timothy Sandefur of the Goldwater Institute have written about the right to earn a living that arose out of English common law and was inherited by the United States. This anti-monopoly stance was aimed at government-granted privileges, not at successful business ventures that gained significant size or scale.

For instance, 1602’s Darcy v. Allein, better known as the “Case of Monopolies,” dealt with a “patent” originally granted by Queen Elizabeth I in 1576 to Ralph Bowes, and later bought by Edward Darcy, to make and sell playing cards. Darcy did not innovate playing cards; he merely had permission to be the sole purveyor. Thomas Allein, who attempted to sell playing cards he created, was sued for violating Darcy’s exclusive rights. Darcy’s monopoly ultimately was held to be invalid by the court, which refused to convict Allein.

Edward Coke, who actually argued on behalf of the patent in Darcy v. Allein, wrote that the case stood for the proposition that:

All trades, as well mechanical as others, which prevent idleness (the bane of the commonwealth) and exercise men and youth in labour, for the maintenance of themselves and their families, and for the increase of their substance, to serve the Queen when occasion shall require, are profitable for the commonwealth, and therefore the grant to the plaintiff to have the sole making of them is against the common law, and the benefit and liberty of the subject. (emphasis added)

In essence, Coke’s argument was more closely linked to a “right to work” than to market structures, business efficiency, or firm conduct.

The courts largely resisted royal monopolies in 17th century England, finding such grants to violate the common law. For instance, in The Case of the Tailors of Ipswich, the court cited Darcy and found:

…at the common law, no man could be prohibited from working in any lawful trade, for the law abhors idleness, the mother of all evil… especially in young men, who ought in their youth, (which is their seed time) to learn lawful sciences and trades, which are profitable to the commonwealth, and whereof they might reap the fruit in their old age, for idle in youth, poor in age; and therefore the common law abhors all monopolies, which prohibit any from working in any lawful trade. (emphasis added)

The principles enunciated in these cases were eventually codified in the Statute of Monopolies of 1624, which prohibited the crown from granting monopolies in most circumstances. This was especially the case when the monopoly prevented the right to otherwise lawful work.

This common-law tradition also had disdain for private contracts that created monopoly by restraining the right to work. For instance, the famous Dyer’s case of 1414 held that a contract in which John Dyer promised not to practice his trade in the same town as the plaintiff was void for being an unreasonable restraint on trade. The judge is supposed to have said in response to the plaintiff’s complaint that he would have imprisoned anyone who had claimed such a monopoly on his own authority.

Over time, the common law developed analysis that looked at the reasonableness of restraints on trade, such as the extent to which they were limited in geographic reach and duration, as well as the consideration given in return. This part of the anti-monopoly tradition would later constitute the thread pulled on by the populists and progressives who created the earliest American antitrust laws.

Early American Anti-Monopoly Tradition

American law largely inherited the English common law system. It also inherited the anti-monopoly tradition the common law embodied. The founding generation of American lawyers were trained on Edward Coke’s commentary in “The Institutes of the Laws of England,” wherein he strongly opposed government-granted monopolies.

This sentiment can be found in the 1641 Massachusetts Body of Liberties, which stated: “No monopolies shall be granted or allowed amongst us, but of such new Inventions that are profitable to the Countrie, and that for a short time.” In fact, the Boston Tea Party itself was in part a protest of the monopoly granted to the East India Company, which included a special refund from duties by Parliament that no other tea importers enjoyed.

This anti-monopoly tradition also can be seen in the debates at the Constitutional Convention. A proposal to give the federal government power to grant “charters of incorporation” was voted down on fears it could lead to monopolies. Thomas Jefferson, George Mason, and several Antifederalists expressed concerns about the new national government’s ability to grant monopolies, arguing that an anti-monopoly clause should be added to the Constitution. Six states wanted to include provisions that would ban monopolies and the granting of special privileges in the Constitution.

The American anti-monopoly tradition remained largely an anti-government tradition throughout much of the 19th century, rearing its head in debates about the Bank of the United States, publicly funded internal improvements, and government-granted monopolies over bridges and waterways. Pamphleteer Lysander Spooner even tried to start a rival to the Post Office by appealing to the strong American impulse against monopoly.

Coinciding with the Industrial Revolution, liberalization of corporate law made it easier for private persons to organize firms that were not simply grants of exclusive monopoly. But discontent with industrialization and other social changes contributed to the birth of a populist movement, and later to progressives like Brandeis, who focused on private combinations and corporate power rather than government-granted privileges. This is the strand of anti-monopoly sentiment that continues to dominate the rhetoric today.

What This Means for Today

Modern anti-monopoly advocates have largely forgotten the lessons of the long Anglo-American tradition that found government is often the source of monopoly power. Indeed, American law privileges government’s ability to grant favors to businesses through licensing, the tax code, subsidies, and even regulation. The state action doctrine from Parker v. Brown exempts state and municipal authorities from antitrust lawsuits even where their policies have anticompetitive effects. And the Noerr-Pennington doctrine protects the rights of industry groups to lobby the government to pass anticompetitive laws.

As a result, government is often used to harm competition, with no remedy outside of the political process that created the monopoly. Antitrust law is used instead to target businesses built by serving consumers well in the marketplace.

Recovering this older anti-monopoly tradition would help focus the anti-monopoly movement on a serious problem that modern antitrust misses. While the consumer-welfare standard that modern anti-monopoly advocates often decry has helped to focus the law on actual harms to consumers, antitrust more broadly continues to encourage rent-seeking by immunizing state action and lobbying behavior.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage lies in just that: economic theory. In this post, I want to remind people of the basic economic theories that we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this as a proclamation of “what economics has to say about X,” but merely as a way to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible that such trade promotions hurt consumers, so it is theoretically possible that Google’s contracts hurt consumers. Ultimately, though, that theoretical possibility of anticompetitive harm does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we were to stop the analysis there, we might conclude that consumers are worse off (if only a tiny bit), since some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. The retail stores are competing with one another, as are smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Those who recall their Econ 101 will know that a monopolist chooses the quantity at which marginal revenue equals marginal cost. With a payment from Google, the marginal cost of producing a phone is lower, so Apple will increase quantity and lower its price. This is shown below:
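As a minimal sketch of that logic (my notation, purely illustrative and not from the original chart): take linear inverse demand p(q) = a − bq, constant marginal cost c, and a per-unit payment s from Google. Apple sets marginal revenue equal to its effective marginal cost, c − s:

```latex
% Monopoly pricing with a per-unit payment s from Google.
% Inverse demand: p(q) = a - bq, so marginal revenue is MR(q) = a - 2bq.
\begin{align*}
  a - 2bq^* &= c - s \quad\Longrightarrow\quad q^* = \frac{a - c + s}{2b},\\
  p^* &= a - bq^* = \frac{a + c - s}{2}.
\end{align*}
% Since dp*/ds = -1/2 < 0, half of every dollar Google pays is
% passed through to consumers as a lower phone price.
```

A larger payment s thus raises the quantity sold and lowers the price, even for a monopolist.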

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How Trade Promotions Could Harm Customers

Since Bork’s writing, many theoretical papers have shown exceptions to his logic. There are times when retailers’ incentives are not aligned with customers’. And we need to take those possibilities seriously.

The most common way to show the harm of these deals (or more commonly exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of changing suppliers or because of coordination problems. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.
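The arithmetic of that toy model can be made concrete in a few lines of code. The numbers (10 searchers, a viability threshold of three users) come straight from the example above and are purely illustrative:

```python
# Toy version of the coordination-failure model described above.
# Numbers are from the example in the text and are purely illustrative.

MIN_VIABLE_USERS = 3   # a search engine needs at least this many users to operate profitably
TOTAL_SEARCHERS = 10

def is_viable(users: int) -> bool:
    """An engine can operate profitably only at or above the viability threshold."""
    return users >= MIN_VIABLE_USERS

# Suppose Google locks in 8 of the 10 searchers via default contracts...
google_users = 8
bing_users = TOTAL_SEARCHERS - google_users  # ...leaving only 2 for Bing

# Bing cannot cover its fixed costs, even if its product is better by
# assumption: the remaining searchers are stuck in a coordination failure.
print(is_viable(google_users))  # True
print(is_viable(bing_users))    # False
```

Note what the sketch does and does not show: the outcome where everyone uses Google is an equilibrium, but so is the outcome where everyone uses Bing; nothing in the model itself tells us which one we are observing.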

We should be skeptical of coordination failure models of inefficient outcomes. The problem with any story of coordination failures is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

To argue that Bing could not execute a similar contract, the most common appeal is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network-effect argument seriously from an economic point of view (as opposed to a legal one), I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover their fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent the competing search engines from reaching that efficient size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.]

To mark the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario,” Truth on the Market and the International Center for Law & Economics (ICLE) are hosting some of the world’s leading scholars and practitioners of competition law and economics to discuss some of the book’s themes.

In his book, Petit offers a “moligopoly” framework for understanding competition between large tech companies that may have significant market shares in their ‘home’ markets but nevertheless compete intensely in adjacent ones. Petit argues that tech giants coexist as both monopolies and oligopolies in markets defined by uncertainty and dynamism, and offers policy tools for dealing with the concerns people have about these markets that avoid crude “big is bad” assumptions and do not try to solve non-economic harms with the tools of antitrust.

This symposium asks contributors to give their thoughts either on the book as a whole or on a selected chapter that relates to their own work. In it we hope to explore some of Petit’s arguments with different perspectives from our contributors.

Confirmed Participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Kelly Fayne, Antitrust Associate, Latham & Watkins
  • Shane Greenstein, Professor of Business Administration; Co-chair of the HBS Digital Initiative, Harvard Business School
  • Peter Klein, Professor of Entrepreneurship and Chair, Department of Entrepreneurship and Corporate Innovation, Baylor University
  • William Kovacic, Global Competition Professor of Law and Policy; Director, Competition Law Center, George Washington University Law School
  • Kai-Uwe Kuhn, Academic Advisor, University of East Anglia
  • Richard Langlois, Professor of Economics, University of Connecticut
  • Doug Melamed, Professor of the Practice of Law, Stanford Law School
  • David Teece, Professor in Global Business, University of California’s Haas School of Business (Berkeley); Director, Center for Global Strategy and Governance; Faculty Director, Institute for Business Innovation

Thank you again to all of the excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting later today, October 12, 2020.

With the passing of Justice Ruth Bader Ginsburg, many have already noted her impact on the law as an advocate for gender equality and women’s rights, her importance as a role model for women, and her civility. Indeed, a key piece of her legacy is that she was a jurist in the classic sense of the word: she believed in using coherent legal reasoning to reach a result. And that meant Justice Ginsburg’s decisions sometimes cut against partisan political expectations. 

This is clearly demonstrated in our little corner of the law: RBG frequently voted in the majority on antitrust cases in a manner that—to populist leftwing observers—would be surprising. Moreover, she authored an important case on price discrimination that likewise cuts against the expectation of populist antitrust critics and demonstrates her nuanced jurisprudence.

RBG’s record on the Court shows a respect for the evolving nature of antitrust law

In the absence of written opinions of her own, it is difficult to discern what was actually in Justice Ginsburg’s mind as she encountered antitrust issues. But her voting record represents at least a willingness to approach antitrust in an apolitical manner.

Over the last several decades, Justice Ginsburg joined the Supreme Court majority in many cases dealing with a wide variety of antitrust issues, including the duty to deal doctrine, vertical restraints, joint ventures, and mergers. In many of these cases, RBG aligned herself with judgments of the type that the antitrust populists criticize.

The following are major consumer welfare standard cases that helped shape the current state of antitrust law in which she joined the majority or issued a concurrence: 

  • Verizon Commc’ns Inc. v. Law Offices of Curtis Trinko, LLP, 540 U.S. 398 (2004) (unanimous opinion heightening the standard for finding a duty to deal)
  • Pacific Bell Tel. Co. v. linkLine Commc’ns, Inc., 555 U.S. 438 (2009) (Justice Ginsburg joined the concurrence finding there was no “price squeeze” but suggesting the predatory pricing claim should be remanded)
  • Weyerhaeuser Co. v. Ross-Simmons Hardwood Lumber Co., Inc., 549 U.S. 312 (2007) (unanimous opinion finding predatory buying claims are still subject to the dangerous probability of recoupment test from Brooke Group)
  • Apple, Inc. v. Robert Pepper, 139 S.Ct. 1514 (2019) (part of majority written by Justice Kavanaugh finding that iPhone owners were direct purchasers under Illinois Brick that may sue Apple for alleged monopolization)
  • State Oil Co. v. Khan, 522 U.S. 3 (1997) (unanimous opinion overturning per se treatment of vertical maximum price fixing under Albrecht and applying rule of reason standard)
  • Texaco Inc. v. Dagher, 547 U.S. 1 (2006) (unanimous opinion finding it is not per se illegal under §1 of the Sherman Act for a lawful, economically integrated joint venture to set the prices at which it sells its products)
  • Illinois Tool Works Inc. v. Independent Ink, Inc., 547 U.S. 28 (2006) (unanimous opinion finding a patent does not necessarily confer market power upon the patentee, in all cases involving a tying arrangement, the plaintiff must prove that the defendant has market power in the tying product)
  • U.S. v. Baker Hughes, Inc., 908 F. 2d 981 (D.C. Cir. 1990) (unanimous opinion written by then-Judge Clarence Thomas while both were on the D.C. Circuit Court of Appeals, finding against the government’s argument that the defendant in a Section 7 merger challenge can rebut a prima facie case only by a clear showing that entry into the market by competitors would be quick and effective)

Even where she joined the dissent in antitrust cases, she did so within the ambit of the consumer welfare standard. Thus, while she was part of the dissent in cases like Leegin Creative Leather Products, Inc. v. PSKS, Inc., 551 U.S. 877 (2007), Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007), and Ohio v. American Express Co., 138 S.Ct. 2274 (2018), she still left a legacy of supporting modern antitrust jurisprudence. In those cases, RBG simply had a different vision for how best to optimize consumer welfare.

Justice Ginsburg’s Volvo Opinion

The 2006 decision in Volvo Trucks North America, Inc. v. Reeder-Simco GMC, Inc. was one of the few antitrust decisions authored by RBG, and it shows her appreciation for the consumer welfare standard. In particular, Justice Ginsburg affirmed the notion that antitrust law is designed to protect competition, not competitors—a lesson that, as of late, needs to be refreshed.

Volvo, a 7-2 decision, dealt with the Robinson-Patman Act’s prohibition on price discrimination. Reeder-Simco, a retail dealer that sold Volvo trucks, alleged that Volvo Inc. was violating the Robinson-Patman Act by selling trucks to it at prices different from those offered to other Volvo dealers.

The Robinson-Patman Act is frequently cited by antitrust populists as a way to return antitrust law to its former glory. A main argument of Lina Khan’s Amazon’s Antitrust Paradox was that the Chicago School had distorted the law on vertical restraints generally, and price discrimination in particular. One source of this distortion in Khan’s opinion has been the Supreme Court’s mishandling of the Robinson-Patman Act.

Yet, in Volvo we see Justice Ginsburg wrestling with the Robinson-Patman Act in a way that gives effect to the law as written, which may run counter to some of the contemporary populist impulse to revise the Court’s interpretation of the antitrust laws. Justice Ginsburg, citing Brown & Williamson, first noted that:

Mindful of the purposes of the Act and of the antitrust laws generally, we have explained that Robinson-Patman does not “ban all price differences charged to different purchasers of commodities of like grade and quality.”

Instead, the Robinson-Patman Act was aimed at a particular class of harms that Congress believed existed when large chain-stores were able to exert something like monopsony buying power. Moreover, Justice Ginsburg noted, the Act “proscribes ‘price discrimination only to the extent that it threatens to injure competition’[.]”

Under the Act, plaintiffs needed to demonstrate evidence of Volvo Inc. systematically treating plaintiffs as “disfavored” purchasers as against another set of “favored” purchasers. Instead, all plaintiffs could produce was anecdotal and inconsistent evidence of Volvo Inc. disfavoring them. Thus, the plaintiffs—and theoretically other similarly situated Volvo dealers—were in fact harmed in a sense by Volvo Inc. Yet, Justice Ginsburg was unwilling to rewrite the Act on Congress’s behalf to incorporate new harms later discovered (a fact which would not earn her accolades in populist circles these days).

Instead, Justice Ginsburg wrote that:

Interbrand competition, our opinions affirm, is the “primary concern of antitrust law.”… The Robinson-Patman Act signals no large departure from that main concern. Even if the Act’s text could be construed in the manner urged by [plaintiffs], we would resist interpretation geared more to the protection of existing competitors than to the stimulation of competition. In the case before us, there is no evidence that any favored purchaser possesses market power, the allegedly favored purchasers are dealers with little resemblance to large independent department stores or chain operations, and the supplier’s selective price discounting fosters competition among suppliers of different brands… By declining to extend Robinson-Patman’s governance to such cases, we continue to construe the Act “consistently with broader policies of the antitrust laws.” Brooke Group, 509 U.S., at 220… (cautioning against Robinson-Patman constructions that “extend beyond the prohibitions of the Act and, in doing so, help give rise to a price uniformity and rigidity in open conflict with the purposes of other antitrust legislation”).

Thus, interested in the soundness of her jurisprudence in the face of a well-developed body of antitrust law, Justice Ginsburg chose to continue to develop that body of law rather than engage in judicial policymaking in favor of a sympathetic plaintiff. 

It must surely be tempting for a justice on the Court to adopt less principled approaches to the law in any given case, and it is equally impressive that Justice Ginsburg consistently stuck to her principles. We can only hope her successor takes note of Justice Ginsburg’s example.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e., arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur, which made its nectar incredibly hard for insects to reach. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to them being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms notably control who is allowed on their platform and how they can interact with users. Apple vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open,” yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc.

Finally, Google Search and Android sit in the bottom left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open”. While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement. 

Enforcement

Readers might ask: what is the point of this classification? The answer is that, in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the Google cases, in the EU, sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions/investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are being made to share them (or, at the very least, to monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – in both the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not been met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open royalty-free standards. At least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success at consumers’ end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically -and perhaps anticompetitively- thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into detail on the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but these tend to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms’ ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the Commission reprimanded. Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default. But this has not deterred consumers. A sizable share of them notably opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS).

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the browser ballot screen imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision.

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards closed and highly propertized platforms – the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still poorly understood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms,” but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things they fail to understand. The digital economy might just be the latest chapter in this unfortunate trend.

The great Dr. Thomas Sowell

One of the great scholars of law & economics turns 90 years old today. In his long and distinguished career, Thomas Sowell has written over 40 books and countless opinion columns. He has been a professor of economics and a long-time Senior Fellow at the Hoover Institution. He received a National Humanities Medal in 2002 for a lifetime of scholarship, which has only continued since then. His ability to look at issues with an international perspective, using the analytical tools of economics to better understand institutions, is an inspiration to us at the International Center for Law & Economics.

Here, as a long-time reader of his works, I offer something of a blog-post festschrift: a brief look at how Sowell’s voluminous writings on visions, law, race, and economics could form the basis for a positive agenda to achieve a greater measure of racial justice in the United States.

The Importance of Visions

One of the most important aspects of Sowell’s work is his ability to distill wide-ranging issues into debates between different mental models, or a “Conflict of Visions.” He calls one vision the “tragic” or “constrained” vision, which sees all humans as inherently limited in knowledge, wisdom, and virtue, and fundamentally self-interested even at their best. The other is the “utopian” or “unconstrained” vision, which sees human limitations as artifacts of social arrangements and cultures, and holds that some people, by virtue of superior knowledge and morality, can redesign society to create a better world.

An implication of the constrained vision is that the difference in knowledge and virtue between the best and the worst in society is actually quite small. As a result, no one person or group of people can be trusted with redesigning institutions which have spontaneously evolved. The best we can hope for is institutions that reasonably deter bad conduct and allow people the freedom to solve their own problems. 

An important implication of the unconstrained vision, on the other hand, is that some people, because of superior enlightenment – what Sowell calls the “Vision of the Anointed” – can redesign institutions to fundamentally change human nature, which is seen as malleable. Institutions are far more often seen as the result of deliberate human design and choice, and failures to change them to be more just or equal as the result of immorality or lack of will.

Which vision one holds makes all the difference in how one views things like justice and institutions. In the constrained view, institutions like language, culture, and even much of the law result from a “spontaneous ordering” that is the product of human action but not of human design. Limited government, markets, and tradition are all important in helping individuals coordinate action. Markets work because self-interested individuals benefit when they serve others. There are no solutions to difficult societal problems, including racism, only trade-offs.

But in the unconstrained view, limits on government power are seen as impediments to public-spirited experts creating a better society. Markets, traditions, and cultures are to be redesigned from the top down by those who are forward-looking, relying on their articulated reason. There is a belief that solutions could be imposed if only there is sufficient political will and the right people in charge. When it comes to an issue like racism, those who are sufficiently “woke” should be in charge of redesigning institutions to provide for a solution to things like systemic racism.

For Sowell, what he calls “traditional justice” is achieved by processes that hold people accountable for harms to others. Its focus is on flesh-and-blood human beings, not abstractions like all men or blacks versus whites. On this view, differences in outcomes are neither just nor unjust; what matters is that the processes themselves are just. These processes should focus on the institutional incentives of participants. Reforms should be careful not to upset important incentive structures which have evolved over time as the best way for limited human beings to coordinate behavior.

The “Quest for Cosmic Justice,” on the other hand, flows from the unconstrained vision. Cosmic justice sees disparities between abstract groups, like whites and blacks, as unjust and in need of correction. If impartial processes like markets or law produce disparities, those with an unconstrained vision often see those processes as themselves racist. The conclusion is that the law should intervene to create better outcomes. This presumes considerable knowledge and morality on the part of those in charge of the interventions.

For Sowell, a large part of his research project has been showing that those with the unconstrained vision often harm those they are proclaiming the intention to help in their quest for cosmic justice. 

A Constrained Vision of Racial Justice

Sowell has written a great deal on race, culture, intellectuals, economics, and public policy. One of the main thrusts of his argument about race is that attempts at cosmic justice often harm living flesh-and-blood individuals in the name of intertemporal abstractions like “social justice” for black Americans. Sowell nowhere denies that racism is an important component of understanding the history of black Americans. But his constant challenge is that racism can’t be the only variable that explains disparities. Sowell points to the importance of culture and education in building the human capital needed to succeed in market economies. Without taking those other variables into account, there is no way to determine the extent to which racism is the cause of disparities.

This has important implications for achieving racial justice today. When it comes to policies pursued in the name of racial justice, Sowell has argued that many programs often harm not only members of disfavored groups, but also members of the favored groups themselves.

For instance, Sowell has argued that affirmative action actually harms not only flesh-and-blood white and Asian-Americans who are passed over, but also harms those African-Americans who are “mismatched” in their educational endeavors and end up failing or dropping out of schools when they could have been much better served by attending schools where they would have been very successful. Another example Sowell often points to is minimum wage legislation, which is often justified in the name of helping the downtrodden, but has the effect of harming low-skilled workers by increasing unemployment, most especially young African-American males. 

Any attempts at achieving racial justice, in terms of correcting historical injustices, must take into account how changes in processes could actually end up hurting flesh-and-blood human beings, especially when those harmed are black Americans. 

A Positive Agenda for Policy Reform

In Sowell’s constrained vision, a large part of the equation for African-American improvement is going to be cultural change. However, white Americans should not think that this means they have no responsibility in working towards racial justice. A positive agenda must take into consideration real harms experienced by African-Americans due to government action (and inaction). Thus, traditional justice demands institutional reforms, and in some cases, recompense.

The policy agenda outlined below is motivated by traditional justice concerns: holding people accountable under the rule of law for violations of constitutional rights, and promoting institutional reforms to more properly align incentives.

What follows below are policy proposals aimed at achieving a greater degree of racial justice for black Americans, but fundamentally informed by the constrained vision and the traditional justice concerns outlined by Sowell. Most of these proposals concern issues Sowell has not written much about. In fact, some may not be proposals he would support, but they are, in my opinion, consistent with the constrained vision and traditional justice.

Reparations for Historical Rights Violations

Sowell once wrote this with regard to reparations for black Americans:

Nevertheless, it remains painfully clear that those people who were torn from their homes in Africa in centuries past and forcibly brought across the Atlantic in chains suffered not only horribly, but unjustly. Were they and their captors still alive, the reparations and retribution owed would be staggering. Time and death, however, cheat us of such opportunities for justice, however galling that may be. We can, of course, create new injustices among our flesh-and-blood contemporaries for the sake of symbolic expiation, so that the son or daughter of a black doctor or executive can get into an elite college ahead of the son or daughter of a white factory worker or farmer, but only believers in the vision of cosmic justice are likely to take moral solace from that. We can only make our choices among alternatives actually available, and rectifying the past is not one of those options.

In other words, if the victims and perpetrators of injustice are no longer alive, it is not just to hold the members of entire races accountable for crimes they did not commit. However, this would presumably leave open the possibility of applying traditional justice concepts in those cases where death has not cheated us.

For instance, there are still black Americans alive who suffered under Jim Crow, as well as children and family members of those lynched. While it is too little, too late, it seems consistent with traditional justice to seek out and criminally prosecute perpetrators who committed heinous acts only a few generations ago against still-living victims. This is not unprecedented: old Nazis are still prosecuted for crimes against Jews. A similar thing could be done in the United States.

Similarly, civil rights lawsuits for the damages caused by Jim Crow could be another way to recompense those who were harmed. Alternatively, it could be done by legislation. The Civil Liberties Act of 1988 was passed under President Reagan and gave living Japanese Americans who were interned during World War II some limited reparations. A similar system could be set up for living victims of Jim Crow. 

Statutes of limitations may need to be changed to facilitate these criminal prosecutions and civil rights lawsuits, but it is quite clearly consistent with the idea of holding flesh-and-blood persons accountable for their unlawful actions.

Holding flesh-and-blood perpetrators accountable for rights violations should not be confused with the cosmic justice idea—that Sowell consistently decries—that says intertemporal abstractions can be held accountable for crimes. In other words, this is not holding “whites” accountable for all historical injustices to “blacks.” This is specifically giving redress to victims and deterring future bad conduct.  

End Qualified Immunity

Another way to promote racial justice consistent with the constrained vision is to end one of the Warren Court’s egregious examples of judicial activism: qualified immunity. Qualified immunity is nowhere mentioned in the civil rights statute, 42 USC § 1983. As Sowell argues in his writings, judges in the constrained vision are supposed to declare what the law is, not what they believe it should be, unlike those in the unconstrained vision who—according to Sowell—believe they have the right to amend the laws through judicial edict. The introduction of qualified immunity into the law by the activist Warren Court should be overturned.

Currently, qualified immunity effectively subsidizes police brutality, to the detriment of all Americans, but disproportionately of black Americans. The law & economics case against qualified immunity is pretty straightforward:

In a civil rights lawsuit, the goal is to make the victim (or their families) of a rights violation whole by monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective this is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, make a decision about whether constitutional rights were violated and the extent of damages. A functioning system of settlements would result as a common law develops determining what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win, either. Officers may be determined to be acting reasonably under the circumstances once all the evidence is presented to a jury.

However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity… courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it… This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.

Immunity of this nature has profound consequences for the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity… The result is to encourage police officers to take insufficient care when making the choice about the level of force to use.

Those with a constrained vision focus on processes and incentives. In this case, it is police officers who have insufficient incentives to take reasonable care when they receive qualified immunity for their conduct.
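The incentive argument above can be made concrete with a toy expected-cost model. This is a minimal illustrative sketch with invented numbers (the damages figure, care costs, and probabilities are all hypothetical, not drawn from any data): it simply shows how scaling down the probability that a violation actually results in liability can flip an officer's cost-minimizing level of care.

```python
def expected_cost(care_cost, p_violation, p_liability, damages):
    """Officer's expected cost of a chosen care level:
    cost of care + P(violation) * P(liability | violation) * damages."""
    return care_cost + p_violation * p_liability * damages

DAMAGES = 100_000  # hypothetical jury award

# care level -> (cost of taking that care, probability of a violation)
# numbers invented purely for illustration
CARE_LEVELS = {"low": (0, 0.10), "high": (5_000, 0.01)}

for regime, p_liability in [("full accountability", 1.0),
                            ("qualified immunity", 0.05)]:
    costs = {level: expected_cost(cost, p, p_liability, DAMAGES)
             for level, (cost, p) in CARE_LEVELS.items()}
    cheapest = min(costs, key=costs.get)
    print(f"{regime}: cost-minimizing choice is {cheapest} care")
```

Under these parameters, full accountability makes high care the cheaper choice (an expected 6,000 versus 10,000), while qualified immunity makes low care cheaper (500 versus 5,050) — the deterrent effect of damages is almost entirely switched off.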

End the Drug War

While not something he has written a lot on, Sowell has argued for the decriminalization of drugs, comparing the War on Drugs to the earlier attempts at Prohibition of alcohol. This is consistent with the constrained vision, which cares about the institutional incentives created by law. 

Interestingly, work by Michelle Alexander in the second chapter of The New Jim Crow is largely consistent with Sowell’s point of view. There she argued the institutional incentives of police departments were systematically changed when the drug war was ramped up. 

Alexander asks a question which is right in line with the constrained vision:

[I]t is fair to wonder why the police would choose to arrest such an astonishing percentage of the American public for minor drug crimes. The fact that police are legally allowed to engage in a wholesale roundup of nonviolent drug offenders does not answer the question why they would choose to do so, particularly when most police departments have far more serious crimes to prevent and solve. Why would police prioritize drug-law enforcement? Drug use and abuse is nothing new; in fact, it was on the decline, not on the rise, when the War on Drugs began.

Alexander locates the impetus for ramping up the drug war in federal subsidies:

In 1988, at the behest of the Reagan administration, Congress revised the program that provides federal aid to law enforcement, renaming it the Edward Byrne Memorial State and Local Law Enforcement Assistance Program after a New York City police officer who was shot to death while guarding the home of a drug-case witness. The Byrne program was designed to encourage every federal grant recipient to help fight the War on Drugs. Millions of dollars in federal aid have been offered to state and local law enforcement agencies willing to wage the war. By the late 1990s, the overwhelming majority of state and local police forces in the country had availed themselves of the newly available resources and added a significant military component to buttress their drug-war operations. 

On top of that, police departments benefited from civil asset forfeiture:

As if the free military equipment, training, and cash grants were not enough, the Reagan administration provided law enforcement with yet another financial incentive to devote extraordinary resources to drug law enforcement, rather than more serious crimes: state and local law enforcement agencies were granted the authority to keep, for their own use, the vast majority of cash and assets they seize when waging the drug war. This dramatic change in policy gave state and local police an enormous stake in the War on Drugs—not in its success, but in its perpetual existence. Suddenly, police departments were capable of increasing the size of their budgets, quite substantially, simply by taking the cash, cars, and homes of people suspected of drug use or sales. Because those who were targeted were typically poor or of moderate means, they often lacked the resources to hire an attorney or pay the considerable court costs. As a result, most people who had their cash or property seized did not challenge the government’s action, especially because the government could retaliate by filing criminal charges—baseless or not.

As Alexander notes, black Americans (and other minorities) were largely targeted in this ramped-up War on Drugs; its effect has been to disproportionately imprison black Americans even though drug usage and sales are relatively similar across races. Police officers have incredible discretion in determining whom to investigate and bring charges against. When it comes to the drug war, this discretion is magnified because the activity is largely consensual, meaning officers can’t rely on victims to come to them to start an investigation. Alexander finds that the reason the criminal justice system has targeted black Americans is implicit bias in police officers, prosecutors, and judges, which mirrors the bias shown in media coverage and in larger white American society.

Anyone inspired by Sowell would need to determine whether this is because of racism or some other variable. It is important to note that Sowell never denies racism exists or that it is a real problem in American society. But he does challenge us to determine whether racism alone is the cause of disparities. Here, Alexander makes a strong case that implicit racism causes the disparities in enforcement of the War on Drugs. But a race-neutral explanation is also available, even though it still suggests ending the War on Drugs: enforcement costs are lower against those unable to afford to challenge the system, and black Americans are disproportionately represented among the poor in this country. As discussed below in the section on reforming indigent criminal defense, most prosecutions are initiated against defendants who can’t afford a lawyer. The result could be racially disparate even without a racist motivation.

Regardless of whether racism is the variable that explains the disparate impact of the War on Drugs, it should be ended. This may be an area where traditional and cosmic justice concerns can be united in an effort to reform the criminal justice system.
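The race-neutral mechanism sketched above can be illustrated with a toy calculation. Every number below is invented for illustration: two groups offend at identical rates, enforcement is concentrated on indigent defendants (where prosecution is cheapest), and the group with the higher poverty rate nonetheless ends up with a markedly higher arrest rate.

```python
# All numbers below are hypothetical, chosen only to illustrate the mechanism.
OFFENSE_RATE = 0.10        # identical offense rate for both groups
ARREST_P_POOR = 0.20       # chance an indigent offender is pursued
ARREST_P_NONPOOR = 0.02    # chance an offender who can afford counsel is pursued

# hypothetical share of each group too poor to mount a legal defense
POOR_SHARE = {"group A": 0.10, "group B": 0.30}

for name, poor in POOR_SHARE.items():
    # arrests per capita = offense rate * blended enforcement probability
    arrests = OFFENSE_RATE * (poor * ARREST_P_POOR
                              + (1 - poor) * ARREST_P_NONPOOR)
    print(f"{name}: {1000 * arrests:.1f} arrests per 1,000 people")
```

Under these invented parameters, group B ends up with nearly double group A’s arrest rate (roughly 7.4 versus 3.8 per 1,000) despite identical offending, purely because more of its members are cheap to prosecute.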

Reform Indigent Criminal Defense

A related aspect of how the criminal justice system has created a real barrier for far too many black Americans is the often poor quality of indigent criminal defense. Indigent defense is a large part of criminal defense in this country: roughly 80% of criminal prosecutions are initiated against defendants too poor to afford a lawyer. Since black Americans are disproportionately represented among the indigent and among those in the criminal justice system, it should be no surprise that they are disproportionately represented by public defenders.

According to the constrained vision, it is important to look at the institutional incentives of public defenders. Considering the extremely high societal costs of false convictions, it is important to get these incentives right.

David Friedman and Stephen Schulhofer’s seminal article exploring the law & economics of indigent criminal defense highlighted the conflict of interest inherent in the government choosing who represents criminal defendants when the government is in charge of prosecuting them. They analyzed each of the models used in the United States for indigent defense from an economic point of view and found each wanting. On top of that, there is a calculation problem inherent in government-run public defenders’ offices, whereby defendants may be systematically deprived of viable defense strategies because of a lack of price signals.

An interesting alternative proposed by Friedman and Schulhofer is a voucher system, similar to the voucher system Sowell has often touted for education. Indigent criminal defendants would choose any lawyer participating in the voucher program. The government would subsidize the provision of indigent defense in this model, but would not actually pick the lawyer or run the public defender organization. Incentives would be more closely aligned between defendant and counsel.

Conclusion

While much more could be said consistent with the constrained vision that could help flesh-and-blood black Americans, including abolishing occupational licensing, ending wage controls, promoting school choice, and ending counterproductive welfare policies, this is enough for now. Racial justice demands holding rights violators accountable and making victims whole. Racial justice also means reforming institutions to make sure incentives are right to deter conduct which harms black Americans. However, the growing desire to do something to promote racial justice in this country should not fall into the trap of cosmic justice thinking, which often ends up hurting flesh-and-blood people of all races in the present in the name of intertemporal abstractions. 

Happy 90th birthday to one of the greatest law & economics scholars ever, Dr. Thomas Sowell. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits, (Chief Economist, International Center for Law & Economics).]

Much of the world of competition policy has focused on mergers in the COVID-19 era. Some observers see mergers as one way of saving distressed but valuable firms. Others have called for a merger moratorium out of fear that more mergers will lead to increased concentration and market power. In the meantime, there has been a growing push for increased nationalization of a wide range of businesses and industries.

In most cases, the call for a government takeover is not a reaction to the public health and economic crises associated with coronavirus. Instead, COVID-19 is a convenient excuse to pursue long-sought policies.

Last year, well before the pandemic, New York mayor Bill de Blasio called for a government takeover of electrical grid operator ConEd because he was upset over blackouts during a heatwave. Earlier that year, he had threatened to confiscate housing units from private landlords: “we will seize their buildings, and we will put them in the hands of a community nonprofit that will treat tenants with the respect they deserve.”

With that sort of track record, it should come as no surprise that the mayor proposed a government takeover of key industries to address COVID-19: “This is a case for a nationalization, literally a nationalization, of crucial factories and industries that could produce the medical supplies to prepare this country for what we need.” Dana Brown, director of The Next System Project at The Democracy Collaborative, agrees: “We should nationalize what remains of the American vaccine industry now, thereby assuring that any coronavirus vaccines produced can be made as widely available and as inexpensive soon as possible.”

Dan Sullivan in the American Prospect suggests the U.S. should nationalize all the airlines. Some have gone so far as calling for nationalization of the U.S. oil industry.

On the one hand, it’s clear that de Blasio and Brown have no confidence in the price system to efficiently allocate resources. Alternatively, they may have overconfidence in the political/bureaucratic system to efficiently, and “equitably,” distribute resources. On the other hand, as Daniel Takash points out in an earlier post, both pharmaceuticals and oil are relatively unpopular industries with many Americans, in which case the threat of a government takeover has a big dose of populist score settling:

Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular–trailing behind fossil fuels, lawyers, and even the federal government. 

In the early days of the pandemic, France’s finance minister Bruno Le Maire promised to protect “big French companies.” The minister identified a range of actions under consideration: “That can be done by recapitalization, that can be done by taking a stake, I can even use the term nationalization if necessary.” While he did not mention any specific companies, it’s been speculated Air France KLM may be a target.

The Italian government is expected to nationalize Alitalia soon. The airline has been in state administration since May 2017, and the Italian government will have 100% control of the airline by June. Last week, the German government took a 20% stake in Lufthansa, in what has been characterized as a “temporary partial nationalization.” In Canada, Prime Minister Justin Trudeau has been coy about speculation that the government might nationalize Air Canada. 

Obviously, these takeovers have “bailout” written all over them, and bailouts have their own anticompetitive consequences that can be worse than those associated with mergers. For example, Ryanair announced it will contest the aid package for Lufthansa. Ryanair chief executive Michael O’Leary claims the aid will allow Lufthansa to “engage in below-cost selling” and make it harder for Ryanair and its rival low-cost carrier EasyJet to compete.

There is also a bit of a “national champion” aspect to the takeovers. Each of the potential targets is (or was) considered its nation’s flagship airline. World Bank economists Tanja Goodwin and Georgiana Pop highlight the risk of nationalization harming competition:

These [sic] should avoid rescuing firms that were already failing. …  But governments should also refrain from engaging in production or service delivery in industries that can be served by the private sector. The role of SOEs [state owned enterprises] should be assessed in order to ensure that bailout packages are not exclusively and unnecessarily favoring a dominant SOE.

To be sure, COVID-19 related mergers could raise the specter of increased market power post-pandemic. But this risk must be balanced against the risks posed by a merger moratorium. These include the risk of widespread bankruptcies (that’s another post) and/or the possibility of nationalization of firms and industries. Either option can reduce competition, which can harm consumers, employees, and suppliers.

Yet another sad story was caught on camera this week: a group of police officers killing an unarmed African-American man named George Floyd. While the officers were fired from the police department, there is still much uncertainty about what, as a legal matter, will happen next to hold those officers accountable. 

A well-functioning legal system should protect the constitutional rights of American citizens to be free of unreasonable force from police officers, while also allowing police officers the ability to do their jobs safely and well. In theory, civil rights lawsuits are supposed to strike that balance.

In a civil rights lawsuit, the goal is to make the victim of a rights violation (or the victim’s family) whole through monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective, this is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, decide whether constitutional rights were violated and determine the extent of damages. A functioning system of settlements would emerge as a common law develops to determine what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win, either. Officers may be found to have acted reasonably under the circumstances once all the evidence is presented to a jury.

However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity. Qualified immunity started as a mechanism to protect officers from suit when they acted in “good faith.” Over time, though, the doctrine has evolved away from a subjective test based upon the actor’s good faith to an objective test based upon notice in judicial precedent. As a result, courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it. In the words of the Supreme Court, qualified immunity protects “all but the plainly incompetent or those who knowingly violate the law.” 

This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.

Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity. On top of that, the regular practice of governments is to indemnify officers even when there is a settlement or a judgment. The result is to encourage police officers to take insufficient care when making the choice about the level of force to use. 

Economics 101 makes a clear prediction: When unreasonable uses of force are not held accountable, you get more unreasonable uses of force. Unfortunately, the news continues to illustrate the accuracy of this prediction.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ian Adams, (Executive Director, International Center for Law & Economics).]

The COVID-19 crisis has recast virtually every contemporary policy debate in the context of public health, and digital privacy is no exception. Conversations that once focused on the value and manner of tracking to enable behavioral advertising have shifted. Congress, on the heels of years of false starts and failed efforts to introduce nationwide standards, is now lurching toward framing privacy policy through the lens of proposed responses to the virus.

To that end, two legislative vehicles, one from Senate Republicans and another from a bicameral group of Democrats, have been offered specifically in response to the unprecedented opportunity society now has to embrace near-universally available technologies to identify, track, and remediate the virus. The bills present different visions of what it means to protect and promote the privacy of Americans in the COVID-19 era, both of which are flawed (though to differing degrees) as a matter of principle and practice. 

Failure as a matter of principle

Privacy has always been one value among many, not an end in itself, but a consideration to be weighed in the pursuit of life’s many varied activities (a point explored in greater depth here). But while the value of privacy in the context of exigent circumstances has traditionally waned, it has typically done so to make room for otherwise intrusive state action.

The COVID-19 crisis presents a different scenario. Now, private firms, not the state, are best positioned to undertake the steps necessary to blunt the virus’ impact and, as good fortune would have it, substantial room already exists within U.S. law for firms to deploy software that would empower people to remediate the virus. Indeed, existing U.S. law affords people the ability to weigh their privacy preferences directly with their level of public health concern.

Strangely, in this context, both political parties have seen fit to advance restrictive privacy visions specific to the COVID-19 crisis that would substantially limit the ability of individuals to use tools to make themselves, and their communities, safer. In other words, both parties have offered proposals that make it harder to achieve the public health outcomes they claim to be seeking at precisely the moment that governments (federal, state, and local) are taking unprecedented (and liberty restricting) steps to achieve exactly those outcomes.

Failure as a matter of practice

The dueling legislative proposals are structured in parallel (a complete breakdown is available here). Each includes provisions concerning the entities and data to be covered, the obligations placed upon entities interacting with covered data, and the scope, extent and power of enforcement measures. While the scope of the entities and data covered vary significantly, with the Democratic proposal encumbering far more of each, they share a provision requiring both “opt-in” consent for access and use of data and a requirement that a mechanism exist to revoke that consent. 

The bipartisan move to affirmative consent represents a significant change in the Congressional privacy conversation. Hitherto, sensitive data have elicited calls for context-dependent levels of privacy, but no previous GOP legislative proposal had suggested the use of an “opt-in” mechanism. The timing of this novel bipartisanship could not be worse. In the context of the COVID-19 response, and using the FTC’s 2012 privacy report as a model, the privacy benefits of raising the bar for the adoption of virus-tracking tools are likely substantially outweighed by the benefits of leaving firms relatively freer to experiment with COVID-19-tracking technologies, benefits that accrue not just to the covered entity but to society as a whole. 

There is another way forward. Instead of introducing design restraints and thereby limiting the practical manner in which firms go about developing tools to address COVID-19, Congress should be moving to articulate discrete harms related to unintended or coerced uses of information that it would like to prevent. For instance: defining what would constitute a deceptive use of COVID-related health information, or clarifying what fraudulent inducement should involve for purposes of downloading a contact-tracing app. At least with particularized harms in mind, policymakers and the public will more readily be able to assess and balance the value of what is gained in terms of privacy versus what is lost in terms of public health capabilities.

Congress, and the broader public policy debate around privacy, has come to a strange place. The privacy rights that lawmakers are seeking to create, utterly independent of potential privacy harms, pose a substantial new regulatory burden on firms attempting to achieve the very public health outcomes for which society is clamoring. In the process, arguably far more significant impingements upon individual liberty, in the form of largely indiscriminate restrictions on movement, association, and commerce, are necessary to achieve what elements of contact tracing promise. That’s not just getting privacy wrong – that’s getting privacy all wrong.