This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.
A boy throws a brick through a bakeshop window. He flees and is never identified. The townspeople gather around the broken glass. “Well,” one of them says to the furious baker, “at least this will generate some business for the windowmaker!”
A reasonable statement? Not really. Although it is indeed a good day for the windowmaker, the money for the new window comes from the baker. Perhaps the baker was planning to use that money to buy a new suit. Now, instead of owning a window and a suit, he owns only a window. The windowmaker’s gain, meanwhile, is simply the tailor’s loss.
This parable of the broken window was conceived by Frédéric Bastiat, a nineteenth-century French economist. He wanted to alert the reader to the importance of opportunity costs—in his words, “that which is not seen.” Time and money spent on one activity cannot be spent on another.
Today Bastiat might tell the parable of the harassed technology company. A tech firm creates a revolutionary new product or service and grows very large. Rivals, lawyers, activists, and politicians call for an antitrust probe. Eventually they get their way. Millions of documents are produced, dozens of depositions are taken, and several hearings are held. In the end no concrete action is taken. “Well,” the critics say, “at least other companies could grow while the firm was sidetracked by the investigation!”
Consider the antitrust case against Microsoft twenty years ago. The case ultimately settled, and Microsoft agreed merely to modify minor aspects of how it sold its products. “It’s worth wondering,” writes Brian McCullough, a generally astute historian of the internet, “how much the flowering of the dot-com era was enabled by the fact that the most dominant, rapacious player in the industry was distracted while the new era was taking shape.” “It’s easy to see,” McCullough says, “that the antitrust trial hobbled Microsoft strategically, and maybe even creatively.”
Should we really be glad that an antitrust dispute “distracted” and “hobbled” Microsoft? What would a focused and unfettered Microsoft have achieved? Maybe nothing; incumbents often grow complacent. Then again, Microsoft might have developed a great search engine or social-media platform. Or it might have invented something that, thanks to the lawsuit, remains absent to this day. What Microsoft would have created in the early 2000s, had it not had to fight the government, is that which is not seen.
But doesn’t obstructing the most successful companies create “room” for new competitors? David Cicilline, the chairman of the House’s antitrust subcommittee, argues that “just pursuing the [Microsoft] enforcement action itself” made “space for an enormous amount of additional innovation and competition.” He contends that the large tech firms seek to buy promising startups before they become full-grown threats, and that such purchases must be blocked.
It’s easy stuff to say. It’s not at all clear that it’s true or that it makes sense. Hindsight bias is rampant. In 2012, for example, Facebook bought Instagram for $1 billion, a purchase that is now cited as a quintessential “killer acquisition.” At the time of the sale, however, Instagram had 27 million users and $0 in revenue. Today it has around a billion users, it is estimated to generate $7 billion in revenue each quarter, and it is worth perhaps $100 billion. It is presumptuous to declare that Instagram, which had only 13 employees in 2012, could have achieved this success on its own.
If distraction is an end in itself, last week’s Big Tech hearing before Cicilline and his subcommittee was a smashing success. Presumably Jeff Bezos, Tim Cook, Sundar Pichai, and Mark Zuckerberg would like to spend the balance of their time developing the next big innovations and staying ahead of smart, capable, ruthless competitors, starting with each other and including foreign firms such as ByteDance and Huawei. Last week they had to put their aspirations aside to prepare for and attend five hours of political theater.
The most common form of exchange at the hearing ran as follows. A representative asks a slanted question. The witness begins to articulate a response. The representative cuts the witness off. The representative gives a prepared speech about how the witness’s answer proves her point.
Many of the antitrust subcommittee’s queries had nothing to do with antitrust. One representative fixated on Amazon’s ties with the Southern Poverty Law Center. Another seemed to want Facebook to interrogate job applicants about their political beliefs. A third asked Zuckerberg to answer for the conduct of Twitter. One representative demanded that social-media posts about unproven Covid-19 treatments be left up, another that they be taken down. Most of the questions that were at least vaguely on topic, meanwhile, were exceedingly weak. The representatives often mistook emails showing that tech CEOs play to win, that they seek to outcompete challengers and rivals, for evidence of anticompetitive harm to consumers. And the panel was often treated like a customer-service hotline. This app developer ran into a difficulty; what say you, Mr. Cook? That third-party seller has a gripe; why won’t you listen to her, Mr. Bezos?
In his opening remarks, Bezos cited a survey that ranked Amazon one of the country’s most trusted institutions. No surprise there. In many places one could have ordered a grocery delivery from Amazon as the hearing started and had the goods put away before it ended. Was Bezos taking a muted dig at Congress? He had every right to—it is one of America’s least trusted institutions. Pichai, for his part, noted that many users would be willing to pay thousands of dollars a year for Google’s free products. Is Congress providing people that kind of value?
The advance of technology will never be an unalloyed blessing. There are legitimate concerns, for instance, about how social-media platforms affect public discourse. “Human beings evolved to gossip, preen, manipulate, and ostracize,” psychologist Jonathan Haidt and technologist Tobias Rose-Stockwell observe. Social media exploits these tendencies, they contend, by rewarding those who trade in the glib put-down, the smug pronouncement, the theatrical smear. Speakers become “cruel and shallow”; “nuance and truth” become “casualties in [a] competition to gain the approval of [an] audience.”
Three things are true at once. First, Haidt and Rose-Stockwell have a point. Second, their point goes only so far. Social media does not force people to behave badly. Assuming otherwise lets individual humans off too easy. Indeed, it deprives them of agency. If you think it is within your power to display grace, love, and transcendence, you owe it to others to think it is within their power as well.
Third, if you really want to see adults act like children, watch a high-profile congressional hearing. A hearing for Attorney General William Barr, held the day before the Big Tech hearing and attended by many of the same representatives, was a classic of the format.
The tech hearing was not as shambolic as the Barr hearing. And the representatives act like sanctimonious halfwits in part to concoct the sick burns that attract clicks on the very platforms built, facilitated, and delivered by the tech companies. For these and other obvious reasons, no one should feel sorry for the four men who spent a Wednesday afternoon serving as props for demagogues. But that doesn’t mean the charade was a productive use of time. There is always that which is not seen.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Oscar Súmar, Dean of the Law School of the Scientific University of the South (Peru).]
Peru’s response to the pandemic has been one of the most radical in Latin America: restrictions were imposed sooner, lasted longer, and were among the strictest in the region. Peru went into lockdown on March 15, after only 71 cases had been reported. Along with the usual restrictions (temporary restaurant and school closures), the Peruvian government took other measures, such as a ban on the use of private vehicles and a mandatory nightly curfew. For a time, there were even gender-based movement restrictions: men and women were allowed out on different days.
A few weeks into the lockdown, it became obvious that these measures were not flattening the curve of infections. But instead of reconsidering its strategy, the government insisted on the same path, with depressing results. Peru is one of the countries hit hardest by Covid-19, with 300,000 total cases by July 4, 2020, and one of the highest rates of “excess deaths,” reaching 140%. Peru’s government has tried a rich country’s response, despite the fact that Peru lacks the institutions and wealth that would make such a response possible.
The Peruvian response to coronavirus can be attributed to three factors. One, paternalism is popular in Peru, and arguments for liberty are ignored. This is confirmed by the fact that President Vizcarra enjoys a great deal of popularity to this day thanks to this draconian lockdown, even though the government has repeatedly blamed people’s negligence as the main cause of contagion. Two, government officials have socialistic tendencies. For instance, the Prime Minister – Mr. Zeballos – used to speak freely about price regulations and nationalization, even before the pandemic. And three, Peru’s health system is one of the worst in the region. It was foreseeable that the health system would be overwhelmed within the first few weeks, so the government decided on an early lockdown.
If anything, Peru played the crisis by the book. But Peru’s lack of strong, legitimate, and honest institutions has made its policies ineffectual. Just a few months prior to the beginning of the pandemic, President Vizcarra dissolved the Congress. And Peru has been engulfed in a far-reaching corruption scandal for years. Only two years ago, former president Pedro Pablo Kuczynski resigned the presidency after being directly implicated in the scandal, and his vice president at the time, Martin Vizcarra, took over. Much of Peru’s political and business elite have also been implicated, with members of the elite summoned daily to the criminal prosecutor’s office for questioning.
However, if we want to understand the lack of strong institutions in Peru – and how this affected our response to the pandemic – we need to go back even further. In the 1980s, after having lived through a socialist military dictatorship, Peruvians democratically elected a young candidate named Alan Garcia as president. During Garcia’s presidency, Peru ran up an enormous foreign debt, suffered record levels of inflation, and imposed price controls and nationalizations. Peru fought a losing war against an armed Marxist terrorist group. By 1990, Peru was on the edge of the abyss. In the 1990 presidential campaign, Peruvians had to decide between a celebrated libertarian intellectual with little political experience, the novelist Mario Vargas Llosa, and Alberto Fujimori, a political “outsider” with largely unknown ideas but an aura of pragmatism. We chose the latter.
Fujimori’s two main goals were to end domestic terrorism and to stabilize Peru’s ruined economy. The second task was achieved by following the Washington Consensus recipe, implemented through a new Constitution adopted after Fujimori’s self-coup. The Consensus has been deemed a “neoliberal” group of policies, but it was really the product of a decades-long consensus among World Bank experts about policies that almost all mainstream economists favor: privatization, deregulation, free trade, monetary stability, control over borrowing, and focusing public spending on health, education, and infrastructure. A secondary part of the recommendations was aimed at institutional reform, poverty alleviation, and the reform of tax and labor laws.
The implementation of the Consensus by Fujimori and subsequent governments was a mix of the actual “structural adjustments” recommended by the Bank and systemic over-regulation, mercantilism, and corruption. Every Peruvian president since 1990 is either currently being investigated or has been charged with corruption.
Although Peru’s GDP increased by more than 5% per year for several years after 1990, and poverty has fallen by more than 50% in the last decade, other problems remain. People have no access to decent healthcare; basic education in Peru is among the worst in the world; and more than half of the population lacks access to clean drinking water. Informality also remains one of our biggest problems, since the tax and labor reforms never took place. Our tax base is very small, and our labor legislation is among the costliest in Latin America.
In Peruvian eyes, this is what “neoliberalism” looks like. Peru was good at implementing many of the high-level reforms, but not the detailed and complex institutional ones. The Consensus assumed the coexistence of free-market institutions and measures of social assistance. Peru had some of these, but not enough. Even the reforms that did take place weren’t legitimate or part of an actual social consensus.
Taking advantage of people’s discontent, some leftist politicians, journalists, academics, and activists now want nothing more than to return to our previous interventionist Constitution and to socialism. Peruvian people are crying out for change. If the current situation is partially explained by our implementation of the Washington Consensus, and that Consensus is deemed “neoliberal,” it’s no surprise that “change” is understood as going back to a more interventionist regime. Our current situation could be seen as the result of people demanding more government intervention, with the government and Congress simply meeting that demand, and with no institutional framework to resist it.
The health crisis we are currently experiencing highlights the cost of Peru’s lack of strong institutions. Peru had one of the most ill-prepared public healthcare systems in the world at the beginning of the pandemic, with just 100 intensive care units. And there is virtually no private alternative, because the private sector is so heavily regulated, and what exists is mostly the preserve of the elite. So, instead of working to improve the public system or to promote more competition in the private sector, the government threatened clinics with a takeover.
The Peruvian government was unable to deliver policies that matched the real conditions of its population. We have, in effect, the lockdown of a rich country, with few of the conditions that have allowed such lockdowns to work. Inner-city poverty and a large informal economy (an estimated 70% of Peru’s economy) made the lockdown a health and economic trap for the majority of the population (this study by Norman Loayza is very illustrative).
Incapable of facing the truth about Peru’s ability to withstand a lockdown, government officials relied on regulation to try to reshape reality to their wishes. The result: “protocols” of 20 to 40 pages that small companies must fulfill, and that the informal 70% of the economy completely ignores. In some cases, these regulations were obvious examples of rent-seeking as well. For example, only firms with 1 million soles (approximately 300,000 USD) in sales in the past year, and with at least three physical branches, were allowed to do business online during the lockdown.
Even though the lockdown officially ended on July 1, the government must still approve each industry before it can operate again. At the same time, our Congress has passed legislation prohibiting toll collection (even when tolls are a contractual obligation); it has criminalized “hoarding” and reinstated “speculation” as a felony; and it has proposed freezing all financial debts. Some economic commentators argue that in Peru the “populist virus” is even worse than Covid-19. Peru’s failure in dealing with the virus must be understood in light of its long history of interventionist governments, which have let economic sclerosis set in through overregulation and done little to build the kinds of institutions that would allow a pandemic response suited to Peru to work. Our lack of strong institutions, of confidence in the market economy, and of human capital in the public sector has put us in an extremely fragile position to fight the virus.
The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.
Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.
One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry.
The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on.
The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is.
But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.
There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and are explicitly excluded from the paper’s definition of “killers,” for example: it is not harmful to overall innovation or output if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the platform (because, of course, one of the points of a network is its size).
The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too much. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” in pharma with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneous condemnation of procompetitive mergers is significantly higher.
In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.
In all these high-profile cases the acquiring companies expanded the services and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument from the scenarios described in the Cunningham, et al. paper, in which development of a new drug is shut down by the acquirer, ostensibly to protect its existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.
A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.
In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:
To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.
This is very poor reasoning. It does not logically follow that the (presumed) existence of false negatives implies underenforcement, because overenforcement carries costs as well. Indeed, there are strong reasons to think that false positives in these markets are more costly than false negatives. By analogy, a well-run court system might still fail to convict a few criminals, precisely because the cost of wrongly convicting an innocent person is so high.
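The error-cost logic here can be made concrete with a toy expected-value calculation. Every number below — the share of harmful mergers, the relative costs of the two error types — is invented purely for illustration, not an estimate of any real market:

```python
# A toy sketch of error-cost analysis in merger review. All parameters
# are hypothetical: the point is only that the optimal enforcement level
# depends on the relative costs of the two error types, not merely on
# whether some false negatives exist.

P_BAD = 0.05     # assumed share of reviewed mergers that are truly harmful
HARM_FN = 1.0    # cost of clearing a harmful merger (false negative)
HARM_FP = 3.0    # cost of blocking a good merger (false positive),
                 # assumed larger: lost efficiencies, chilled investment

def expected_error_cost(block_rate: float) -> float:
    """Expected error cost if a share `block_rate` of mergers is blocked
    without the ability to tell good deals from bad ones."""
    false_negative_cost = P_BAD * (1 - block_rate) * HARM_FN
    false_positive_cost = (1 - P_BAD) * block_rate * HARM_FP
    return false_negative_cost + false_positive_cost

# Blocking more deals eliminates some false negatives, but when false
# positives are costlier, total expected error cost rises.
print(expected_error_cost(0.0))   # clear everything
print(expected_error_cost(0.2))   # block 20% of deals
```

Under these assumed parameters, stricter enforcement raises total error costs; the conclusion flips only if harmful mergers are common or false positives are cheap — which is exactly the empirical question the Report never answers.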
The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it did suggest that the review process could have been done differently, it also highlighted efficiencies that arose from each, and did not conclude that any has led to consumer detriment.
The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control.
Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition which, at least, gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well.
This could provide a basis for blocking almost any merger on “scale” grounds. After all, if a photo-editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition, however unlikely, might not prevent the emergence of a new platform on a similar scale? Such a standard would make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started at all).
The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or less. Intuition repeatedly trumps evidence and academic research.
The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief, introducing the range of potential problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm — and in many cases assumptions are the most one can call them — and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.
In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.
Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second in the form of complaints about ISPs re-imposing UBB following an end to the voluntary temporary halting of the practice during the first months of the COVID-19 pandemic — a move that was an expansion by ISPs of the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.
In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed. Devin Coldewey of TechCrunch, for example, recently asserted that:
caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.
The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).
What’s UBB really about?
For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that offer pricing that differs by customers’ usage of the services. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increasing capacity to meet the increased demands generated by users.
But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.
In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.
A flat-rate pricing mandate wouldn’t allow pricing structures based on cost recovery. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users that make up the vast majority of users on networks today (for example, according to Comcast, 95 percent of its subscribers use less than 1.2 TB of data monthly).
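The cost-recovery arithmetic can be sketched with deliberately made-up numbers (none of these figures come from any actual ISP):

```python
# Hypothetical illustration of flat-rate vs. usage-based cost recovery.
# The network cost, subscriber counts, and usage levels are all invented.

NETWORK_COST = 10_000_000  # monthly network cost to recover, in dollars

# (segment, subscriber count, average monthly usage in TB)
subscribers = [
    ("light", 95_000, 0.5),
    ("heavy", 5_000, 10.0),
]

total_subs = sum(count for _, count, _ in subscribers)
total_usage = sum(count * tb for _, count, tb in subscribers)

# Flat-rate pricing: everyone pays an equal share of network cost.
flat_price = NETWORK_COST / total_subs

# Usage-based pricing: each segment's share of cost tracks its usage.
usage_price = {
    segment: NETWORK_COST * (count * tb / total_usage) / count
    for segment, count, tb in subscribers
}

for segment, count, tb in subscribers:
    print(f"{segment}: flat ${flat_price:,.2f}/mo "
          f"vs. usage-based ${usage_price[segment]:,.2f}/mo")
```

In this toy example the 5 percent of heavy users consume over half the capacity, so a flat rate makes the light majority pay roughly twice what their usage warrants, while usage-based pricing shifts that cost back onto the heaviest users.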
On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.
Objections to the use of UBB fall generally into two categories. One stems from the baseline misapprehension that UBB is needed to manage the network; that fallacy is dispelled above. The other is born of a simple lack of familiarity with the practice.
Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.
Usage-based pricing in other sectors
To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” Thus, this is how we buy apples from the grocery store and gasoline for our cars.
Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.
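The two nonlinear schedules described above can be sketched in a few lines. All numbers here are hypothetical, chosen only to illustrate the shape of such pricing; they are not Uber’s, Lyft’s, Dropbox’s, or Box’s actual rates.

```python
# Two hypothetical nonlinear pricing schedules (illustrative numbers only).

def ride_fare(miles: float, base: float = 2.50, per_mile: float = 1.75) -> float:
    """Ride-sharing style: a base fare plus a charge that varies with distance."""
    return base + per_mile * miles

def storage_price(gb: float) -> float:
    """Freemium style: a free basic allotment, then progressively pricier tiers."""
    if gb <= 2:        # free basic tier
        return 0.0
    if gb <= 2000:     # paid mid tier
        return 11.99
    return 19.99       # top tier

print(ride_fare(2.0), ride_fare(10.0))   # short trip vs. long trip
print(storage_price(1), storage_price(500))
```

In both cases the customer’s bill scales with her use of the service’s infrastructure, though neither schedule is strictly linear in usage.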
Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.
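A minimal sketch of the two-part tariff just described, with a fixed base rate recovering shared costs and a variable charge tracking usage. The dollar figures and the 1.2 TB allotment are hypothetical (the allotment echoes the Comcast figure cited earlier), not any ISP’s actual price list.

```python
# Hypothetical two-part tariff: a fixed base rate plus a usage charge
# above a generous basic allotment. All numbers are illustrative only.

def two_part_bill(usage_gb: float, base_rate: float = 50.0,
                  included_gb: float = 1200.0,
                  overage_per_50gb: float = 10.0) -> float:
    """Monthly bill: flat base rate, plus a charge per 50 GB block over the allotment."""
    overage = max(0.0, usage_gb - included_gb)
    blocks = -(-overage // 50)  # ceiling division: round up to whole 50 GB blocks
    return base_rate + blocks * overage_per_50gb

light = two_part_bill(300)    # well under the allotment: pays the base rate only
heavy = two_part_bill(2000)   # 800 GB over: pays for 16 overage blocks
print(light, heavy)
```

The light user pays only the fixed component, while the heavy user’s bill reflects her disproportionate claim on shared capacity, which is the Coasean point.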
Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception, not the rule, when it comes to consumer goods. And it is well understood that such examples adopt effectively regressive pricing: everyone is charged a price high enough that the returns from the vast majority of light eaters offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch.
All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.
Aligning network costs with usage
Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.
As President Obama’s first FCC Chairman, Julius Genachowski, put it:
Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.
Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.
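The cross-subsidy can be made concrete with a toy calculation built around the 5-percent/20-percent figure cited above. The total cost and subscriber count are hypothetical round numbers chosen purely for illustration.

```python
# Illustrative cross-subsidy arithmetic (hypothetical numbers): if 5% of
# subscribers generate 20% of traffic, uniform pricing charges light users
# the same share of network costs as heavy users.

subscribers = 100
network_cost = 10_000.0            # total monthly network cost to recover
heavy_share, heavy_traffic = 0.05, 0.20

# Uniform pricing: everyone pays the same, regardless of usage.
uniform_price = network_cost / subscribers

# Usage-proportional pricing: each group pays its share of traffic.
heavy_users = subscribers * heavy_share
light_users = subscribers - heavy_users
heavy_pays = network_cost * heavy_traffic / heavy_users        # per heavy user
light_pays = network_cost * (1 - heavy_traffic) / light_users  # per light user

print(uniform_price, heavy_pays, round(light_pays, 1))
```

Under these assumptions each light user pays 100 under uniform pricing but roughly 84 under usage-proportional pricing; the difference is a transfer from the many light users to the few heavy ones.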
Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:
[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)
When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. By saddling low-volume users with higher prices than they would face under capacity pricing, it deters from subscribing precisely the marginal users who may be disinclined to subscribe in the first place, and thus runs counter to the stated goals of policymakers across the political spectrum. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable in its scope and use. Mandated uniform pricing accomplishes neither.
Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).
Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.
The need for greater capacity along with capacity-conserving efficiencies has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data use tracking service, the number of “power users” (using 1 TB/month or more) and “extreme power users” (using 2 TB/month or more) jumped 138 percent and 215 percent, respectively. Power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.
Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.
At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.
The case for pricing freedom
Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.
The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather, it is in prescribing a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.
Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.
Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also on load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better off the majority of US broadband customers will be.
Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here) and reaffirm its desire to regulate so-called “gatekeeper” platforms, while the CMA issued its final report regarding online platforms and digital advertising.
While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?
At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.
Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.
To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur, which made its nectar incredibly hard to reach for insects. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.
Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.
Competition law on a spectrum
To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to them being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:
While these classifications are certainly not airtight, this would be my reasoning:
In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms control who is allowed on their platform and how they can interact with users. Apple notably vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).
In the top right quadrant, the business models of Amazon and Qualcomm are much more “open”, yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc.
Finally, Google Search and Android sit in the bottom left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open”. While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement.
Readers might ask: what is the point of this classification? The answer is that in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.
The Microsoft cases and the Apple investigation both sought (or seek) to bring more openness and less propertization to these platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).
The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.
Finally, both of the Google cases, in the EU, sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.
What is striking about these decisions/investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).
The empty quadrant
All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.
There have been numerous attempts to introduce truly open consumer-oriented operating systems – both in the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. The theme repeats in the standardization space. There have been innumerable attempts to impose open, royalty-free standards, but, at least in the mobile Internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.
This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).
An evolutionary explanation?
The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success at consumers’ end of the market?
I can see at least three potential explanations:
1. Closed/propertized platforms have systematically – and perhaps anticompetitively – thwarted their open/shared rivals;
2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
3. Consumers have opted for closed systems precisely because they are closed.
I will not go into details over the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.
Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.
For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but this tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms’ ability to propertize their assets may harm innovation.
Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).
For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept on using the very business model that the Commission reprimanded. Apple tied the Safari browser to its iPhones; Google went to some lengths to ensure that Chrome was preloaded on devices; Samsung phones come with Samsung Internet as the default. But this has not deterred consumers. A sizable share of them notably opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s MacOS).
Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision.
There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.
It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.
To conclude, consumers and firms appear to gravitate towards both closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still misunderstood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said this best when he quipped that economists always find a monopoly explanation for things that they fail to understand. The digital economy might just be the latest in this unfortunate trend.
One of the great scholars of law & economics turns 90 years old today. In his long and distinguished career, Thomas Sowell has written over 40 books and countless opinion columns. He has been a professor of economics and a long-time Senior Fellow at the Hoover Institution. He received a National Humanities Medal in 2002 for a lifetime of scholarship, which has only continued since then. His ability to look at issues with an international perspective, using the analytical tools of economics to better understand institutions, is an inspiration to us at the International Center for Law & Economics.
Here, almost as a blog post festschrift as a long-time reader of his works, I want to briefly write about how Sowell’s voluminous writings on visions, law, race, and economics could be the basis for a positive agenda to achieve a greater measure of racial justice in the United States.
The Importance of Visions
One of the most important aspects of Sowell’s work is his ability to distill wide-ranging issues into debates involving different mental models, or a “Conflict of Visions.” He calls one vision the “tragic” or “constrained” vision, which sees all humans as inherently limited in knowledge, wisdom, and virtue, and fundamentally self-interested even at their best. The other vision is the “utopian” or “unconstrained” vision, which sees human limitations as artifacts of social arrangements and cultures, and that there are some capable by virtue of superior knowledge and morality that can redesign society to create a better world.
An implication of the constrained vision is that the difference in knowledge and virtue between the best and the worst in society is actually quite small. As a result, no one person or group of people can be trusted with redesigning institutions which have spontaneously evolved. The best we can hope for is institutions that reasonably deter bad conduct and allow people the freedom to solve their own problems.
An important implication of the unconstrained vision, on the other hand, is that there are some who because of superior enlightenment, which Sowell calls the “Vision of the Anointed,” can redesign institutions to fundamentally change human nature, which is seen as malleable. Institutions are far more often seen as the result of deliberate human design and choice, and that failures to change them to be more just or equal is a result of immorality or lack of will.
The importance of visions to how we view things like justice and institutions makes all the difference. In the constrained view, institutions like language, culture, and even much of the law result from the “spontaneous ordering” that is the result of human action but not of human design. Limited government, markets, and tradition are all important in helping individuals coordinate action. Markets work because self-interested individuals benefit when they serve others. There are no solutions to difficult societal problems, including racism, only trade-offs.
But in the unconstrained view, limits on government power are seen as impediments to public-spirited experts creating a better society. Markets, traditions, and cultures are to be redesigned from the top down by those who are forward-looking, relying on their articulated reason. There is a belief that solutions could be imposed if only there is sufficient political will and the right people in charge. When it comes to an issue like racism, those who are sufficiently “woke” should be in charge of redesigning institutions to provide for a solution to things like systemic racism.
For Sowell, what he calls “traditional justice” is achieved by processes that hold people accountable for harms to others. Its focus is on flesh-and-blood human beings, not abstractions like all men or blacks versus whites. On this view, differences in outcomes are neither just nor unjust; what matters is that the processes themselves are just. These processes should focus on the institutional incentives of participants. Reforms should be careful not to upset important incentive structures which have evolved over time as the best way for limited human beings to coordinate behavior.
The “Quest for Cosmic Justice,” on the other hand, flows from the unconstrained vision. Cosmic justice sees disparities between abstract groups, like whites and blacks, as unjust and in need of correction. If results from impartial processes like markets or law result in disparities, those with an unconstrained vision often see those processes as themselves racist. The conclusion is that the law should intervene to create better outcomes. This presumes considerable knowledge and morality on behalf of those who are in charge of the interventions.
For Sowell, a large part of his research project has been showing that those with the unconstrained vision often harm those they are proclaiming the intention to help in their quest for cosmic justice.
A Constrained Vision of Racial Justice
Sowell has written quite a lot on race, culture, intellectuals, economics, and public policy. One of the main thrusts of his argument about race is that attempts at cosmic justice often harm living flesh-and-blood individuals in the name of intertemporal abstractions like “social justice” for black Americans. Sowell nowhere denies that racism is an important component of understanding the history of black Americans. But his constant challenge is that racism can’t be the only variable which explains disparities. Sowell points to the importance of culture and education in building human capital to be successful in market economies. Without taking those other variables into account, there is no way to determine the extent to which racism is the cause of disparities.
This has important implications for achieving racial justice today. When it comes to policies pursued in the name of racial justice, Sowell has argued that many programs often harm not only members of disfavored groups, but the members of the favored groups.
For instance, Sowell has argued that affirmative action actually harms not only flesh-and-blood white and Asian-Americans who are passed over, but also harms those African-Americans who are “mismatched” in their educational endeavors and end up failing or dropping out of schools when they could have been much better served by attending schools where they would have been very successful. Another example Sowell often points to is minimum wage legislation, which is often justified in the name of helping the downtrodden, but has the effect of harming low-skilled workers by increasing unemployment, most especially young African-American males.
Any attempts at achieving racial justice, in terms of correcting historical injustices, must take into account how changes in processes could actually end up hurting flesh-and-blood human beings, especially when those harmed are black Americans.
A Positive Agenda for Policy Reform
In Sowell’s constrained vision, a large part of the equation for African-American improvement is going to be cultural change. However, white Americans should not think that this means they have no responsibility in working towards racial justice. A positive agenda must take into consideration real harms experienced by African-Americans due to government action (and inaction). Thus, traditional justice demands institutional reforms, and in some cases, recompense.
The policy part of this equation outlined below is motivated by traditional justice concerns that hold people accountable under the rule of law for violations of constitutional rights and promotes institutional reforms to more properly align incentives.
What follows below are policy proposals aimed at achieving a greater degree of racial justice for black Americans, fundamentally informed by the constrained vision and the traditional justice concerns outlined by Sowell. Most of these proposals concern issues Sowell has not written much about. In fact, some he may not even support, but they are—in my opinion—consistent with the constrained vision and traditional justice.
Reparations for Historical Rights Violations
Sowell once wrote the following regarding reparations for black Americans:
Nevertheless, it remains painfully clear that those people who were torn from their homes in Africa in centuries past and forcibly brought across the Atlantic in chains suffered not only horribly, but unjustly. Were they and their captors still alive, the reparations and retribution owed would be staggering. Time and death, however, cheat us of such opportunities for justice, however galling that may be. We can, of course, create new injustices among our flesh-and-blood contemporaries for the sake of symbolic expiation, so that the son or daughter of a black doctor or executive can get into an elite college ahead of the son or daughter of a white factory worker or farmer, but only believers in the vision of cosmic justice are likely to take moral solace from that. We can only make our choices among alternatives actually available, and rectifying the past is not one of those options.
In other words, if the victims and perpetrators of an injustice are no longer alive, it is not just to hold all members of a race accountable for crimes they did not commit. However, this presumably leaves open the possibility of applying traditional justice concepts in those cases where death has not cheated us.
For instance, there are still black Americans alive who suffered under Jim Crow, as well as children and family members of those who were lynched. While it is too little, too late, it seems consistent with traditional justice to seek out and criminally prosecute perpetrators who committed heinous acts only a few generations ago against still-living victims. This is not unprecedented: elderly Nazis are still prosecuted for crimes against Jews. A similar approach could be taken in the United States.
Similarly, civil rights lawsuits for the damages caused by Jim Crow could be another way to recompense those who were harmed. Alternatively, it could be done by legislation. The Civil Liberties Act of 1988 was passed under President Reagan and gave living Japanese Americans who were interned during World War II some limited reparations. A similar system could be set up for living victims of Jim Crow.
Statutes of limitations may need to be changed to facilitate these criminal prosecutions and civil rights lawsuits, but it is quite clearly consistent with the idea of holding flesh-and-blood persons accountable for their unlawful actions.
Holding flesh-and-blood perpetrators accountable for rights violations should not be confused with the cosmic justice idea—that Sowell consistently decries—that says intertemporal abstractions can be held accountable for crimes. In other words, this is not holding “whites” accountable for all historical injustices to “blacks.” This is specifically giving redress to victims and deterring future bad conduct.
End Qualified Immunity
Another way to promote racial justice consistent with the constrained vision is to end one of the Warren Court’s egregious examples of judicial activism: qualified immunity. Qualified immunity is nowhere mentioned in the civil rights statute, 42 USC § 1983. As Sowell argues in his writings, judges in the constrained vision are supposed to declare what the law is, not what they believe it should be, unlike those in the unconstrained vision who—according to Sowell—believe they have the right to amend the laws through judicial edict. The introduction of qualified immunity into the law by the activist Warren Court should be overturned.
In a civil rights lawsuit, the goal is to make the victim (or their family) of a rights violation whole through monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective, it is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, decide whether constitutional rights were violated and determine the extent of damages. A functioning system of settlements would emerge as a common law developed on what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs would always win: officers may be found to have acted reasonably under the circumstances once all the evidence is presented to a jury.
However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity… courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it… This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.
Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity… The result is to encourage police officers to take insufficient care when making the choice about the level of force to use.
Those with a constrained vision focus on processes and incentives. In this case, it is police officers who have insufficient incentives to take reasonable care when they receive qualified immunity for their conduct.
End the Drug War
While drug policy is not something he has written a lot on, Sowell has argued for the decriminalization of drugs, comparing the War on Drugs to the earlier Prohibition of alcohol. This is consistent with the constrained vision, which cares about the institutional incentives created by law.
Interestingly, work by Michelle Alexander in the second chapter of The New Jim Crow is largely consistent with Sowell’s point of view. There she argued the institutional incentives of police departments were systematically changed when the drug war was ramped up.
Alexander asks a question which is right in line with the constrained vision:
[I]t is fair to wonder why the police would choose to arrest such an astonishing percentage of the American public for minor drug crimes. The fact that police are legally allowed to engage in a wholesale roundup of nonviolent drug offenders does not answer the question why they would choose to do so, particularly when most police departments have far more serious crimes to prevent and solve. Why would police prioritize drug-law enforcement? Drug use and abuse is nothing new; in fact, it was on the decline, not on the rise, when the War on Drugs began.
Alexander locates the impetus for ramping up the drug war in federal subsidies:
In 1988, at the behest of the Reagan administration, Congress revised the program that provides federal aid to law enforcement, renaming it the Edward Byrne Memorial State and Local Law Enforcement Assistance Program after a New York City police officer who was shot to death while guarding the home of a drug-case witness. The Byrne program was designed to encourage every federal grant recipient to help fight the War on Drugs. Millions of dollars in federal aid have been offered to state and local law enforcement agencies willing to wage the war. By the late 1990s, the overwhelming majority of state and local police forces in the country had availed themselves of the newly available resources and added a significant military component to buttress their drug-war operations.
On top of that, police departments were benefited by civil asset forfeiture:
As if the free military equipment, training, and cash grants were not enough, the Reagan administration provided law enforcement with yet another financial incentive to devote extraordinary resources to drug law enforcement, rather than more serious crimes: state and local law enforcement agencies were granted the authority to keep, for their own use, the vast majority of cash and assets they seize when waging the drug war. This dramatic change in policy gave state and local police an enormous stake in the War on Drugs—not in its success, but in its perpetual existence. Suddenly, police departments were capable of increasing the size of their budgets, quite substantially, simply by taking the cash, cars, and homes of people suspected of drug use or sales. Because those who were targeted were typically poor or of moderate means, they often lacked the resources to hire an attorney or pay the considerable court costs. As a result, most people who had their cash or property seized did not challenge the government’s action, especially because the government could retaliate by filing criminal charges—baseless or not.
As Alexander notes, black Americans (and other minorities) were the primary targets of this ramped-up War on Drugs: its effect has been to disproportionately imprison black Americans even though rates of drug usage and sales are relatively similar across races. Police officers have incredible discretion in determining whom to investigate and bring charges against. When it comes to the drug war, this discretion is magnified because the activity is largely consensual, meaning officers can’t rely on victims to come to them to start an investigation. Alexander finds that the reason the criminal justice system has targeted black Americans is implicit bias in police officers, prosecutors, and judges, which mirrors the bias shown in media coverage and in larger white American society.
Anyone inspired by Sowell would need to determine whether this is because of racism or some other variable. It is important to note here that Sowell never denies that racism exists or is a real problem in American society. But he does challenge us to determine whether this alone is the cause of disparities. Here, Alexander makes a strong case that it is implicit racism that causes the disparities in enforcement of the War on Drugs. A race-neutral explanation could be as follows, even though it still suggests ending the War on Drugs: the enforcement costs against those unable to afford to challenge the system are lower. And black Americans are disproportionately represented among the poor in this country. As will be discussed below in the section on reforming indigent criminal defense, most prosecutions are initiated against defendants who can’t afford a lawyer. The result could be racially disparate even without a racist motivation.
Regardless of whether racism is the variable that explains the disparate impact of the War on Drugs, it should be ended. This may be an area where traditional and cosmic justice concerns can be united in an effort to reform the criminal justice system.
Reform Indigent Criminal Defense
A related aspect of how the criminal justice system has created a real barrier for far too many black Americans is the often poor quality of indigent criminal defense. Indigent defense is a large part of criminal defense in this country: roughly 80% of criminal prosecutions are initiated against defendants too poor to afford a lawyer. Since black Americans are disproportionately represented among the indigent and those in the criminal justice system, it should be no surprise that black Americans are disproportionately represented by public defenders.
According to the constrained vision, it is important to look at the institutional incentives of public defenders. Considering the extremely high societal costs of false convictions, it is important to get these incentives right.
David Friedman and Stephen Schulhofer’s seminal article exploring the law & economics of indigent criminal defense highlighted the conflict of interest inherent in the government choosing who represents criminal defendants when the government is in charge of prosecuting. They analyzed each of the models used in the United States for indigent defense from an economic point of view and found each wanting. On top of that, there is also a calculation problem inherent in government-run public defenders’ offices, whereby defendants may be systematically deprived of viable defense strategies because of a lack of price signals.
An interesting alternative proposed by Friedman and Schulhofer is a voucher system, similar to the voucher system Sowell has often touted for education. Indigent criminal defendants would pick the participating lawyer of their choosing. The government would subsidize the provision of indigent defense in this model, but would not actually pick the lawyer or run the public defender organization. Incentives would be more closely aligned between defendant and counsel.
While much more could be said consistent with the constrained vision that could help flesh-and-blood black Americans, including abolishing occupational licensing, ending wage controls, promoting school choice, and ending counterproductive welfare policies, this is enough for now. Racial justice demands holding rights violators accountable and making victims whole. Racial justice also means reforming institutions to make sure incentives are right to deter conduct which harms black Americans. However, the growing desire to do something to promote racial justice in this country should not fall into the trap of cosmic justice thinking, which often ends up hurting flesh-and-blood people of all races in the present in the name of intertemporal abstractions.
Happy 90th birthday to one of the greatest law & economics scholars ever, Dr. Thomas Sowell.
Last month the EU General Court annulled the EU Commission’s decision to block the proposed acquisition of Telefónica UK by Hutchison 3G UK.
In what could be seen as a rebuke of the Directorate-General for Competition (DG COMP), the court clarified the standard of proof required to block a merger, which could have a significant effect on future merger enforcement:
In the context of an analysis of a significant impediment to effective competition the existence of which is inferred from a body of evidence and indicia, and which is based on several theories of harm, the Commission is required to produce sufficient evidence to demonstrate with a strong probability the existence of significant impediments following the concentration. Thus, the standard of proof applicable in the present case is therefore stricter than that under which a significant impediment to effective competition is “more likely than not,” on the basis of a “balance of probabilities,” as the Commission maintains. By contrast, it is less strict than a standard of proof based on “being beyond all reasonable doubt.”
Over the relevant time period, there were four retail mobile network operators in the United Kingdom: (1) EE Ltd, (2) O2, (3) Hutchison 3G UK Ltd (“Three”), and (4) Vodafone. The merger would have combined O2 and Three, which would account for 30-40% of the retail market.
The Commission argued that Three’s growth in market share over time and its classification as a “maverick” demonstrated that Three was an “important competitive force” that would be eliminated with the merger. The court was not convinced:
The mere growth in gross add shares over several consecutive years of the smallest mobile network operator in an oligopolistic market, namely Three, which has in the past been classified as a “maverick” by the Commission (Case COMP/M.5650 — T-Mobile/Orange) and in the Statement of Objections in the present case, does not in itself constitute sufficient evidence of that operator’s power on the market or of the elimination of the important competitive constraints that the parties to the concentration exert upon each other.
While the Commission classified Three as a maverick, it also claimed that maverick status was not necessary to be an important competitive force. Nevertheless, the Commission pointed to Three’s history of maverick-y behavior by launching its “One Plan” as well as free international roaming and offering 4G at no additional cost. The court, however, noted that those initiatives were “historical in nature,” and provided no evidence of future conduct:
The Commission’s reasoning in that regard seems to imply that an undertaking which has historically played a disruptive role will necessarily play the same role in the future and cannot reposition itself on the market by adopting a different pricing policy.
The EU General Court appears to express the same frustration with mavericks as the court in H&R Block/TaxACT: “The arguments over whether TaxACT is or is not a ‘maverick’ — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis.”
With the General Court’s recent decision raising the bar of proof required to block a merger, it also provided a “strong probability” that the days of maverick madness may soon be over.
Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech.
In response to Twitter’s decision, along with an Executive Order released by the President that attacked Section 230, Senator Josh Hawley (R – MO) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. This would require online platforms to engage in “good faith” moderation according to clearly stated terms of service – in effect, restricting Section 230’s protections to online platforms deemed to have done enough to moderate content ‘fairly’.
While seemingly a sensible standard, if enacted, this approach would violate the First Amendment as an unconstitutional condition to a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.
There is established legal precedent that Congress may not grant benefits on conditions that violate Constitutionally-protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated.
Aware of this precedent, the bill attempts to circumvent the obstacle by taking away Section 230 immunity for issues unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill attempts to condition immunity for platforms on having terms of service for content moderation, and making them subject to lawsuits if they do not act in “good faith” in policing them.
It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard only appears to apply to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in their terms of service that they believe some forms of conservative speech are “hate speech” they will not allow.
Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone, then, would likely not be unconstitutional.
But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.
These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open them up to being sued for any moderation, including moderation completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook, say for online bullying or spreading false information, the same situation could arise.
This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.
Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation on a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.
It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about.
This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine”, which required that radio stations provide a “balanced discussion” and in practice allowed courts and federal agencies to police content, until it was repealed under President Reagan. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News for its “Fair and Balanced” slogan, stating:
I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.
And recently conservatives argued that businesses like Masterpiece Cakeshop should not be compelled to exercise their First Amendment rights against their will. All of these cases demonstrate that once the state starts to stipulate which views private organisations can and cannot broadcast, conservatives will be the ones who suffer.
Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution and would trample the very freedom of speech it guarantees. Conservatives should reject it.
This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law.
State bar associations, with the backing of state judiciaries and legislatures, are typically entrusted with a largely unqualified monopoly over licensing in legal services markets. This poses an unavoidable policy tradeoff. Designating the bar as gatekeeper might protect consumers by ensuring a minimum level of service quality. Yet the gatekeeper is inherently exposed to influence by interests with an economic stake in the existing market. Any licensing requirement that might shield uninformed consumers from unqualified or opportunistic lawyers also necessarily raises an entry barrier that protects existing lawyers against more competition. A proper concern for consumer welfare therefore requires that the gatekeeper impose licensing requirements only when they ensure that the efficiency gains attributable to a minimum quality threshold outweigh the efficiency losses attributable to constraints on entry.
There is increasing reason for concern that state bar associations are falling short of this standard. In particular, under the banner of “legal ethics,” some state bar associations and courts have blocked or impeded entry by innovative “legaltech” services without a compelling consumer protection rationale.
The LegalMatch Case: A misunderstood platform
This trend is illustrated by a recent California appellate court decision interpreting state regulations pertaining to legal referral services. In Jackson v. LegalMatch, decided in late 2019, the court held that LegalMatch, a national online platform that matches lawyers and potential clients, constitutes an illegal referral service, even though it is not a “referral service” under the American Bar Association’s definition of the term, and the California legislature had previously declined to include online services within the statutory definition.
The court’s reasoning: the “marketing” fee paid by subscribing attorneys to participate in the platform purportedly runs afoul of state regulations that proscribe attorneys from paying a fee to referral services that have not been certified by the bar. (The lower court had felt differently, finding that LegalMatch was not a referral service for this purpose, in part because it did not “exercise any judgment” on clients’ legal issues.)
The court’s formalist interpretation of applicable law overlooks compelling policy arguments that strongly favor facilitating, rather than obstructing, legal matching services. In particular, the LegalMatch decision illustrates the anticompetitive outcomes that can ensue when courts and regulators blindly rely on an unqualified view of platforms as an inherent source of competitive harm.
Contrary to this presumption, legal services referral platforms enhance competition by reducing transaction-cost barriers to efficient lawyer-client relationships. These matching services benefit consumers who otherwise lack access to the full range of potential lawyers, and smaller or newer law firms that do not have the marketing resources or brand capital to attract the full range of potential clients. Consistent with the well-established economics of platform markets, these services operate under a two-sided model in which the unpriced delivery of attorney information to potential clients is financed by the positively priced delivery of interested clients to subscribing attorneys. Without this two-sided fee structure, the business model collapses and the transaction-cost barriers to matching the credentials of tens of thousands of lawyers with the preferences of millions of potential clients are inefficiently restored. Some legal matching platforms also offer fixed-fee service plans that can potentially reduce legal representation costs relative to the conventional billable-hour model, which can saddle clients with unexpectedly or inappropriately high legal fees given the difficulty of forecasting the required quantity of legal services ex ante and measuring the quality of legal services ex post.
Blocking entry by these new business models is likely to adversely impact competition and, as observed in a 2018 report by an Illinois bar committee, to injure lower-income consumers in particular. The result is inefficient, regressive, and apparently protectionist.
Indeed, subsequent developments in this litigation are regrettably consistent with the last possibility. After the California bar prevailed in its legal interpretation of “referral service” at the appellate court, and the Supreme Court of California declined to review the decision, LegalMatch then sought to register as a certified lawyer referral service with the bar. The bar responded by moving to secure a temporary restraining order against the continuing operation of the platform. In May 2020, a lower state court judge both denied the petition and expressed disappointment in the bar’s handling of the litigation.
Bar associations’ puzzling campaign against “LegalTech” innovation
This regulatory overdrive is hardly unique to LegalMatch. Bar associations have repeatedly acted to impede entry by innovators that deploy digital technologies to enhance legal services and drive down prices in a field known for meager innovation and rigid pricing. Puzzlingly from a consumer welfare perspective, bar associations have taken actions that impede or preclude entry by online services that expand opportunities for lawyers, increase the information available to consumers, and, in certain cases, cap maximum legal fees.
In 2017, New Jersey Supreme Court legal ethics committees, following an “inquiry” by the state bar association, prohibited lawyers from partnering with referral services and legal services plans offered by Avvo, LegalZoom, and RocketLawyer. In 2018, Avvo discontinued operations due in part to opposition from multiple state bar associations (often backed up by state courts).
In some cases, bar associations have issued advisory opinions that, given the risk of disciplinary action, can have an in terrorem effect equivalent to an outright prohibition. In 2018, the Indiana Supreme Court Disciplinary Commission issued a “nonbinding advisory” opinion stating that attorneys who pay “marketing fees” to online legal referral services or agree to fixed-fee arrangements with such services “risk violation of several Indiana [legal] ethics rules.”
State bar associations similarly sought to block the entry of LegalZoom, an online provider of standardized legal forms that can be more cost-efficient for “cookie-cutter” legal situations than the traditional legal services model based on bespoke document preparation. These disputes are protracted and costly: it took LegalZoom seven years to reach a settlement with the North Carolina State Bar that allowed it to continue operating in the state. In a case pending before the Florida Supreme Court, the Florida bar is seeking to shut down a smartphone application that enables drivers to contest traffic tickets at a fixed fee, a niche in which the traditional legal services model is likely to be cost-inefficient given the relatively modest amounts that are typically involved.
State bar associations, with supporting action or inaction by state courts and legislatures, have ventured well beyond the consumer protection rationale that is the only potentially publicly-interested justification for the bar’s licensing monopoly. The results sometimes border on absurdity. In 2006, the New Jersey bar issued an opinion precluding attorneys from stating in advertisements that they had appeared in an annual “Super Lawyers” ranking maintained by an independent third-party publication. In 2008, based on a 304-page report prepared by a “special master,” the bar’s ethics committee vacated the opinion but merely recommended further consideration taking into account “legitimate commercial speech activities.” In 2012, the New York legislature even changed the “unlicensed practice of law” from a misdemeanor to a felony, an enhancement proposed by . . . the New York bar (see here and here).
In defending their actions against online referral services, the bar associations argue that these steps are necessary to defend the public’s interest in receiving legal advice free from any possible conflict of interest. This is a presumptively weak argument. The associations’ licensing and other requirements are inherently tainted by a “meta” conflict of interest. Hence it is the bar that rightfully bears the burden of demonstrating that any such requirement imposes no more than a reasonably necessary impediment to competition. This is especially so given that each bar association often operates its own referral service.
The unrealized potential of North Carolina State Board of Dental Examiners v. FTC
Bar associations might nonetheless take the legal position that they have statutory or regulatory discretion to take these actions and that any antitrust scrutiny is therefore inapposite. If that argument ever held water, it clearly no longer does.
In an undeservedly underapplied decision, North Carolina State Board of Dental Examiners v. FTC, the Supreme Court held definitively in 2015 that any action by a “non-sovereign” licensing entity is subject to antitrust scrutiny unless that action is “actively supervised” by, and represents a “clearly articulated” policy of, the state. The Court emphasized that the degree of scrutiny is highest for licensing bodies administered by constituencies in the licensed market—precisely the circumstances that characterize state bar associations.
The North Carolina decision is hardly an outlier. It followed a string of earlier cases in which the Court had extended antitrust scrutiny to a variety of “hard” rules and “soft” guidance that bar associations had issued and defended on putatively publicly-interested grounds of consumer protection or legal ethics.
At the Court, the bar’s arguments did not meet with success. The Court rejected any special antitrust exemption for a state bar association’s “advisory” minimum fee schedule (Goldfarb v. Virginia State Bar (1975)) and, in subsequent cases, similarly held that limitations by professional associations on advertising by members—another requirement to “protect” consumers—do not enjoy any special antitrust exemption. The latter set of cases addressed specifically both advertising restrictions on price and quality by a California dental association (California Dental Association v. FTC (1999)) and blanket restrictions on advertising by a bar association (Bates v. State Bar of Arizona (1977)). As suggested by the bar associations’ recent actions toward online lawyer referral services, the Court’s consistent antitrust decisions in this area appear to have had relatively limited impact in disciplining potentially protectionist actions by professional associations and licensing bodies, at least in the legal services market.
A neglected question: Is the regulation of legal services anticompetitive?
The current economic situation poses a unique historical opportunity for bar associations to act proactively by enlisting independent legal and economic experts to review each component of the current licensing infrastructure and assess whether it strikes an acceptable tradeoff between protecting consumers and preserving competition. If not, any such component should be modified or eliminated to elicit competition that can leverage digital technologies and managerial innovations—often by exploiting the efficiencies of multi-sided platform models—that have been deployed in other industries to reduce prices and transaction costs. These modifications would expand access to legal services consistent with the bar’s mission and, unlike existing interventions to achieve this objective through government subsidies, would do so with a cost to the taxpayer of exactly zero dollars.
This reexamination exercise is arguably demanded by the line of precedent anchored in the Goldfarb and Bates decisions in 1975 and 1977, respectively, and culminating in the North Carolina Dental decision in 2015. This line of case law is firmly grounded in antitrust law’s underlying commitment to promote consumer welfare by deterring collective action that unjustifiably constrains the free operation of competitive forces. In May 2020, the California bar took a constructive if tentative step in this direction by reviving consideration of a “regulatory sandbox” to facilitate experimental partnerships between lawyers and non-lawyers in pioneering new legal services models. This follows somewhat more decisive action by the Utah Supreme Court, which in 2019 approved commencing a staged process that may modify regulation of the legal services market, including lifting or relaxing restrictions on referral fees and partnerships between lawyers and non-lawyers.
Neither the legal profession generally nor the antitrust bar in particular has allocated substantial attention to potentially anticompetitive elements in the manner in which the practice of law has long been regulated. Restrictions on legal referral services are only one of several practices that deserve a closer look under the policy principles and legal framework set forth most recently in North Carolina Dental and previously in California Dental. A few examples can illustrate this proposition.
Current limitations on partnerships between lawyers and non-lawyers constrain the ability to achieve economies of scale and scope in the delivery of legal services and preclude firms from offering efficient bundles of complementary legal and non-legal services. Under a more surgical regulatory regime, legal services could be efficiently bundled with related accounting and consulting services, subject to appropriately targeted precautions against conflicts of interest. Additionally, as other commentators have observed and as “legaltech” innovations demonstrate, software could be more widely deployed to provide “direct-to-consumer” products that deliver legal services at a far lower cost than the traditional one-on-one lawyer-client model, subject to appropriately targeted precautions that reflect informational asymmetries in individual and small-business legal markets.
In another example, the blanket requirement of seven years of undergraduate and legal education raises entry costs that are not clearly justified for all areas of legal practice, some of which could potentially be competently handled by practitioners with intermediate categories of legal training. These are just two out of many possibilities that could be constructively explored under a more antitrust-sensitive approach that takes seriously the lessons of North Carolina Dental and the competitive risks inherent to lawyer self-regulation of legal services markets. (An alternative and complementary policy approach would be to move certain areas of legal services regulation out of the hands of the legal profession entirely.)
The LegalMatch case is indicative of a largely unexploited frontier in the application of antitrust law and principles to the practice of law itself. While commentators have called attention to the antitrust concerns raised by the current regulatory regime in legal services markets, and the evolution of federal case law has increasingly reflected these concerns, there has been little practical action by state bar associations, the state judiciary or state legislatures. This might explain why the delivery of legal services has changed relatively little during the same period in which other industries have been transformed by digital technologies, often with favorable effects for consumers in the form of increased convenience and lower costs. There is strong reason to believe a rigorous and objective examination of current licensing and related limitations imposed by bar associations in legal services markets is likely to find that many purportedly “ethical” requirements, at least when applied broadly and without qualification, do much to inhibit competition and little to protect consumers.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Eric Fruits, (Chief Economist, International Center for Law & Economics).]
Much of the world of competition policy has focused on mergers in the COVID-19 era. Some observers see mergers as one way of saving distressed but valuable firms. Others have called for a merger moratorium out of fear that more mergers will lead to increased concentration and market power. In the meantime, there has been a growing push for increased nationalization of a wide range of businesses and industries.
In most cases, the call for a government takeover is not a reaction to the public health and economic crises associated with coronavirus. Instead, COVID-19 is a convenient excuse to pursue long sought after policies.
Last year, well before the pandemic, New York mayor Bill de Blasio called for a government takeover of electrical grid operator ConEd because he was upset over blackouts during a heatwave. Earlier that year, he threatened to confiscate housing units from private landlords: “we will seize their buildings, and we will put them in the hands of a community nonprofit that will treat tenants with the respect they deserve.”
With that sort of track record, it should come as no surprise the mayor proposed a government takeover of key industries to address COVID-19: “This is a case for a nationalization, literally a nationalization, of crucial factories and industries that could produce the medical supplies to prepare this country for what we need.” Dana Brown, director of The Next System Project at The Democracy Collaborative, agrees, “We should nationalize what remains of the American vaccine industry now, thereby assuring that any coronavirus vaccines produced can be made as widely available and as inexpensive soon as possible.”
Dan Sullivan in the American Prospect suggests the U.S. should nationalize all the airlines. Some have gone so far as to call for nationalization of the U.S. oil industry.
On the one hand, it’s clear that de Blasio and Brown have no confidence in the price system to efficiently allocate resources. Alternatively, they may have overconfidence in the political/bureaucratic system to efficiently, and “equitably,” distribute resources. On the other hand, as Daniel Takash points out in an earlier post, both pharmaceuticals and oil are relatively unpopular industries with many Americans, in which case the threat of a government takeover has a big dose of populist score settling:
Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular – trailing behind fossil fuels, lawyers, and even the federal government.
In the early days of the pandemic, France’s finance minister Bruno Le Maire promised to protect “big French companies.” The minister identified a range of actions under consideration: “That can be done by recapitalization, that can be done by taking a stake, I can even use the term nationalization if necessary.” While he did not mention any specific companies, it’s been speculated Air France KLM may be a target.
The Italian government is expected to nationalize Alitalia soon. The airline has been in state administration since May 2017, and the Italian government will have 100% control of the airline by June. Last week, the German government took a 20% stake in Lufthansa, in what has been characterized as a “temporary partial nationalization.” In Canada, Prime Minister Justin Trudeau has been coy about speculation that the government might nationalize Air Canada.
Obviously, these takeovers have “bailout” written all over them, and bailouts have their own anticompetitive consequences that can be worse than those associated with mergers. For example, Ryanair announced it will contest the aid package for Lufthansa. Ryanair chief executive Michael O’Leary claims the aid will allow Lufthansa to “engage in below-cost selling” and make it harder for Ryanair and its rival low-cost carrier EasyJet to compete.
There is also a bit of a “national champion” aspect to the takeovers. Each of the potential targets are (or were) considered their nation’s flagship airline. World Bank economists Tanja Goodwin and Georgiana Pop highlight the risk of nationalization harming competition:
These [sic] should avoid rescuing firms that were already failing. … But governments should also refrain from engaging in production or service delivery in industries that can be served by the private sector. The role of SOEs [state owned enterprises] should be assessed in order to ensure that bailout packages are not exclusively and unnecessarily favoring a dominant SOE.
To be sure, COVID-19 related mergers could raise the specter of increased market power post-pandemic. But this risk must be balanced against the risks posed by a merger moratorium. These include the risk of widespread bankruptcies (that’s another post) and the possibility of nationalization of firms and industries. Either outcome can reduce competition, harming consumers, employees, and suppliers.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Dirk Auer, (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]
Privacy absolutism is the misguided belief that protecting citizens’ privacy supersedes all other policy goals, especially economic ones. This is a mistake. Privacy is one value among many, not an end in itself. Unfortunately, the absolutist worldview has filtered into policymaking and is beginning to have very real consequences. Readers need look no further than contact tracing applications and the fight against Covid-19.
Covid-19 has presented the world with a privacy conundrum worthy of the big screen. In fact, it’s a plotline we’ve seen before. Moviegoers will recall that, in the wildly popular film “The Dark Knight”, Batman has to decide between preserving the privacy of Gotham’s citizens or resorting to mass surveillance in order to defeat the Joker. Ultimately, the caped crusader begrudgingly chooses the latter. Before the Covid-19 outbreak, this might have seemed like an unrealistic plot twist. Fast forward a couple of months, and it neatly illustrates the difficult decision that most western societies urgently need to make as they consider the use of contact tracing apps to fight Covid-19.
Contact tracing is often cited as one of the most promising tools to safely reopen Covid-19-hit economies. Unfortunately, its adoption has been severely undermined by a barrage of overblown privacy fears.
Take the contact tracing API and App co-developed by Apple and Google. While these firms’ efforts to rapidly introduce contact tracing tools are laudable, it is hard to shake the feeling that they have been holding back slightly.
In an overt attempt to protect users’ privacy, Apple and Google’s joint offering does not collect any location data (a move that has irked some states). Similarly, both firms have repeatedly stressed that users will have to opt-in to their contact tracing solution (as opposed to the API functioning by default). And, of course, all the data will be anonymous – even for healthcare authorities.
This is a missed opportunity. Google and Apple’s networks include billions of devices. That puts them in a unique position to rapidly achieve the scale required to successfully enable the tracing of Covid-19 infections. Contact tracing applications need to reach a critical mass of users to be effective. For instance, some experts have argued that an adoption rate of at least 60% is necessary. Unfortunately, existing apps – notably in Singapore, Australia, Norway and Iceland – have struggled to get anywhere near this number. Making participation the default (requiring users to opt out rather than opt in) could go a long way towards reversing this trend. Businesses could also boost these numbers by making the apps mandatory for their employees and consumers.
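A quick back-of-the-envelope sketch (my illustration, not a claim from the experts cited above) shows why adoption rates matter so much: if adoption is assumed to be independent across individuals, a contact can only be traced when both parties run the app, so effective coverage falls with the square of the adoption rate.

```python
# Stylized model: a contact is traceable only when BOTH parties have the
# app installed. Under independent adoption, that happens with probability
# equal to the adoption rate squared.
def traceable_share(adoption_rate: float) -> float:
    """Share of random pairwise contacts in which both parties run the app."""
    return adoption_rate ** 2

for p in (0.2, 0.4, 0.6, 0.8):
    print(f"adoption {p:.0%} -> about {traceable_share(p):.0%} of contacts traceable")
```

Even at the 60% adoption threshold some experts cite, only about a third of contacts would involve two app users under these simplifying assumptions, which is why falling well short of that number is so damaging to the tool’s usefulness.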
However, it is hard to blame Google or Apple for not pushing the envelope a little bit further. For the best part of a decade, they and other firms have repeatedly faced specious accusations of “surveillance capitalism”. This has notably resulted in heavy-handed regulation (including the GDPR, in the EU, and the CCPA, in California), as well as significant fines and settlements.
Those chickens have now come home to roost. The firms that are probably best-placed to implement an effective contact tracing solution simply cannot afford the privacy-related risks. This includes the risk associated with violating existing privacy law, but also potential reputational consequences.
Matters have also been exacerbated by the overly cautious stance of many western governments, as well as their citizens:
The European Data Protection Board cautioned governments and private sector actors to anonymize location data collected via contact tracing apps. The European Parliament made similar pronouncements.
A group of Democratic Senators pushed back against Apple and Google’s contact tracing solution, notably due to privacy considerations.
And public support for contact tracing is also critically low. Surveys in the US show that contact tracing is significantly less popular than more restrictive policies, such as business and school closures. Similarly, polls in the UK suggest that between 52% and 62% of Britons would consider using contact tracing applications.
Belgium’s initial plans for a contact tracing application were struck down by its data protection authority on the grounds that they did not comply with the GDPR.
Finally, across the globe, there has been pushback against so-called “centralized” tracing apps, notably due to privacy fears.
In short, the West’s insistence on maximizing privacy protection is holding back its efforts to combat the joint threats posed by Covid-19 and the unfolding economic recession.
But contrary to the mass surveillance portrayed in the Dark Knight, the privacy risks entailed by contact tracing are for the most part negligible. State surveillance is hardly a prospect in western democracies. And the risk of data breaches is no greater here than with many other apps and services that we all use daily. To wit, password, email, and identity theft are still, by far, the most common targets for cyberattackers. Put differently, cyber criminals appear to be more interested in stealing assets that can be readily monetized, rather than location data that is almost worthless. This suggests that contact tracing applications, whether centralized or not, are unlikely to be an important target for cyberattackers.
The meagre risks entailed by contact tracing – regardless of how it is ultimately implemented – are thus a tiny price to pay if they enable some return to normalcy. At the time of writing, at least 5.8 million human beings have been infected with Covid-19, causing an estimated 358,000 deaths worldwide. Both Covid-19 and the measures designed to combat it have resulted in a collapse of the global economy – what the IMF has called “the worst economic downturn since the great depression”. Freedoms that the West had taken for granted have suddenly evaporated: the freedom to work, to travel, to see loved ones, etc. Can anyone honestly claim that it is not worth temporarily sacrificing some privacy to partially regain these liberties?
More generally, it is not just contact tracing applications and the fight against Covid-19 that have suffered because of excessive privacy fears. The European GDPR offers another salient example. Whatever one thinks about the merits of privacy regulation, it is becoming increasingly clear that the EU overstepped the mark. For instance, an early empirical study found that the entry into force of the GDPR markedly decreased venture capital investments in Europe. Michal Gal aptly summarizes the implications of this emerging body of literature:
The price of data protection through the GDPR is much higher than previously recognized. The GDPR creates two main harmful effects on competition and innovation: it limits competition in data markets, creating more concentrated market structures and entrenching the market power of those who are already strong; and it limits data sharing between different data collectors, thereby preventing the realization of some data synergies which may lead to better data-based knowledge. […] The effects on competition and innovation identified may justify a reevaluation of the balance reached to ensure that overall welfare is increased.
In short, just like the Dark Knight, policymakers, firms and citizens around the world need to think carefully about the tradeoff that exists between protecting privacy and other objectives, such as saving lives, promoting competition, and increasing innovation. As things stand, however, it seems that many have veered too far on the privacy end of the scale.
Yet another tragedy was caught on camera this week: a group of police officers killed an unarmed African-American man named George Floyd. While the officers were fired from the police department, there is still much uncertainty about what will happen next to hold those officers accountable as a legal matter.
A well-functioning legal system should protect the constitutional rights of American citizens to be free of unreasonable force from police officers, while also allowing police officers the ability to do their jobs safely and well. In theory, civil rights lawsuits are supposed to strike that balance.
In a civil rights lawsuit, the goal is to make the victim (or their families) of a rights violation whole by monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective, this is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, make a decision about whether constitutional rights were violated and the extent of damages. A functioning system of settlements would result as a common law develops to determine what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win, either. Officers may be determined to be acting reasonably under the circumstances once all the evidence is presented to a jury.
However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity. Qualified immunity started as a mechanism to protect officers from suit when they acted in “good faith.” Over time, though, the doctrine has evolved away from a subjective test based upon the actor’s good faith to an objective test based upon notice in judicial precedent. As a result, courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it. In the words of the Supreme Court, qualified immunity protects “all but the plainly incompetent or those who knowingly violate the law.”
This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.
Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity. On top of that, the regular practice of governments is to indemnify officers even when there is a settlement or a judgment. The result is to encourage police officers to take insufficient care when making the choice about the level of force to use.
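The incentive claim above can be made concrete with a stylized precaution model (my illustration, not the post’s own analysis): an actor chooses a level of care, more care is privately costly but reduces the probability of an unreasonable use of force, and expected liability is the harm discounted by the chance a suit actually survives immunity doctrine and indemnification. The names and functional form below are illustrative assumptions.

```python
# Stylized precaution model. An officer picks care level c in [0, 1];
# private cost = cost of care + P(unreasonable force) * harm * accountability,
# where "accountability" is the probability the harm is actually internalized
# (i.e., the suit survives qualified immunity and is not indemnified away).
def chosen_care(harm: float, accountability: float,
                cost_per_unit_care: float = 1.0, grid: int = 1000) -> float:
    """Care level that minimizes the officer's private expected cost."""
    best_c, best_cost = 0.0, float("inf")
    for i in range(grid + 1):
        c = i / grid
        p_force = 1.0 - c  # more care -> lower chance of unreasonable force
        total = cost_per_unit_care * c + p_force * harm * accountability
        if total < best_cost:
            best_c, best_cost = c, total
    return best_c

# With full accountability the officer internalizes the harm and takes care;
# with near-zero accountability (broad immunity plus indemnification), the
# privately optimal level of care collapses.
print(chosen_care(harm=10.0, accountability=1.0))
print(chosen_care(harm=10.0, accountability=0.05))
```

The model is deliberately crude, but it captures the mechanism: shrinking the probability that misconduct is ever paid for shrinks the chosen level of care, which is exactly the prediction the next paragraph describes.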
Economics 101 makes a clear prediction: When unreasonable uses of force are not held accountable, you get more unreasonable uses of force. Unfortunately, the news continues to illustrate the accuracy of this prediction.