
[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

When I was a kid, I trailed behind my mother in the grocery store with a notepad and a pencil adding up the cost of each item she added to our cart. This was partly my mother’s attempt to keep my math skills sharp, but it was also a necessity. As a low-income family, there was no slack in the budget for superfluous spending. The Hostess cupcakes I longed for were a luxury item that only appeared in our cart if there was an unexpected windfall. If the antitrust populists who castigate all forms of market power succeed in their crusade to radically deconcentrate the economy, life will be much harder for low-income families like the one I grew up in.

Antitrust populists like Biden White House official Tim Wu and author Matt Stoller decry the political influence of large firms. But instead of advocating for policies that tackle this political influence directly, they seek reforms to antitrust enforcement that aim to limit the economic advantages of these firms, believing that will translate into political enfeeblement. The economic advantages arising from scale benefit consumers, particularly low-income consumers, often at the expense of smaller economic rivals. But because the protection of small businesses is so paramount to their worldview, antitrust populists blithely ignore the harm that advancing their objectives would cause to low-income families.

This desire to protect small businesses, without acknowledging the economic consequences for low-income families, is plain in calls for reinvigorated Robinson-Patman Act enforcement (a 1930s law that independent businesses championed to limit the rise of chain stores) and in plans to revise the antitrust enforcement agencies’ merger guidelines. The U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) recently held a series of listening sessions to demonstrate the need for new guidelines. During the listening session on food and agriculture, independent grocer Anthony Pena described the difficulty he has competing with larger rivals like Walmart. He stated:

Just months ago, I was buying a 59-ounce orange juice just north of $4 a unit, where we couldn’t get the supplier to sell it to us … Meanwhile, I go to the bigger box like a Walmart or a club store. Not only do they have it fully stocked, but they have it about half the price that I would buy it for at cost.

Half the price. Anthony Pena is complaining that competitors such as Walmart are selling the same product at half the price. To protect independent grocers like Anthony Pena, antitrust populists would have consumers, including low-income families, pay twice as much for groceries.

Walmart is an important food retailer for low-income families. Nearly a fifth of all spending through the Supplemental Nutrition Assistance Program (SNAP), the program formerly known as food stamps, takes place at Walmart. After housing and transportation, food is the largest expense for low-income families. The share of expenditures going toward food for low-income families (i.e., families in the lowest 20% of the income distribution) is 34% higher than for high-income families (i.e., families in the highest 20% of the income distribution). This means that higher grocery prices disproportionately burden low-income families.
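
To see the arithmetic behind that last claim, consider a back-of-the-envelope sketch that uses only the 34% figure above (the notation and the worked example are mine, purely illustrative):

```latex
% Let s be the food share of a high-income family's budget; the post says the low-income
% share is about 34% higher, i.e. 1.34*s. Holding purchases fixed, a uniform grocery-price
% increase of g percent raises each family's spending, measured as a share of its budget, by
\[
  \Delta_{\text{high}} \approx g \cdot s,
  \qquad
  \Delta_{\text{low}} \approx g \cdot (1.34\,s) = 1.34 \times \Delta_{\text{high}},
\]
% so the same price increase absorbs roughly a third more of a low-income family's budget.
```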

In 2019, the U.S. Department of Agriculture (USDA) launched the SNAP Online Purchasing Pilot, which allows SNAP recipients to use their benefits at online food retailers. The pandemic led to an explosion in the number of SNAP recipients using their benefits online—increasing from just 35,000 households in March 2020 to nearly 770,000 households just three months later. While the pilot originally included only Walmart and Amazon, the number of eligible retailers has expanded rapidly. To make grocery delivery, an important service during the pandemic, more accessible to low-income families, Amazon reduced its Prime membership fee (which helps pay for free delivery) by 50% for SNAP recipients.

The antitrust populists are not only targeting the advantages of large brick-and-mortar retailers, such as Walmart, but also of large online retailers like Amazon. Again, these advantages largely flow to consumers—particularly low-income ones.

The proposed American Innovation and Choice Online Act (AICOA), which was voted out of the Senate Judiciary Committee in February and may make an appearance on the Senate floor this summer, threatens those consumer benefits. AICOA would prohibit so-called “self-preferencing” by Amazon and other large technology platforms.

Should a ban on self-preferencing come to fruition, Amazon would not be able to prominently show its own products in any capacity—even when its products are a good match for a consumer’s search. In search results, Amazon would not be able to promote its private-label products, including Amazon Basics and 365 by Whole Foods, or products for which it is a first-party seller (i.e., a reseller of another company’s product). Amazon might also have to downgrade the ranking of popular products it sells, making them harder for consumers to find. Forcing Amazon to present offers that do not correspond to products consumers want to buy or that are not a good value inflicts harm on all consumers but is particularly problematic for low-income consumers. All else equal, most consumers, especially low-income ones, obviously prefer cheaper products. It is important not to take that choice away from them.

Consider the case of orange juice, the product causing so much consternation for Mr. Pena. In a recent search on Amazon for a 59-ounce orange juice, the first four “organic” search results are SNAP-eligible, first-party, or private-label products sold by Amazon, ranging in price from $3.55 to $3.79. The next two results are from third-party sellers offering two 59-ounce bottles of orange juice at $38.99 and $84.54—more than five times the unit price offered by Amazon. If self-preferencing were prohibited, Amazon would be forced to promote to consumers products that are significantly more expensive and that are not SNAP-eligible. This would increase costs directly for consumers who purchase more expensive products when cheaper alternatives are available but not presented. It would also increase costs indirectly by forcing consumers to search longer for better prices and SNAP-eligible products, or by discouraging them from considering timesaving online shopping altogether. Low-income families are least able to afford these increased costs.
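
As a quick back-of-the-envelope check on that “more than five times” figure, using only the prices quoted above (and remembering that the third-party listings are for two bottles, so a fair comparison divides by two):

```latex
% Per-bottle price of the third-party listings versus Amazon's $3.55 to $3.79 range:
\[
  \frac{\$38.99}{2} \approx \$19.50 \approx 5.1 \times \$3.79,
  \qquad
  \frac{\$84.54}{2} \approx \$42.27 \approx 11.9 \times \$3.55 .
\]
```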

The upshot is that antitrust populists are choosing to support (often well-off) small-business owners at the expense of vulnerable working people. Congress should not allow them to put the squeeze on low-income families. These families are already suffering due to record-high inflation—particularly for items that constitute the largest share of their expenditures, such as transportation and food. Proposed antitrust reforms such as AICOA and reinvigorated Robinson-Patman Act enforcement will only make it harder for low-income families to make ends meet.

If you wander into an undergraduate economics class on the right day at the right time, you might catch the lecturer talking about Giffen goods: the rare case where demand curves can slope upward. The Irish potato famine is often used as an example. As the story goes, potatoes were a huge part of the Irish diet and consumed a large part of Irish family budgets. A failure of the potato crop reduced the supply of potatoes, and potato prices soared. Because families had to spend so much of their budgets on potatoes, they could no longer afford more expensive foods, so they ended up buying even more potatoes despite the rising price.

It’s a great story of injustice with a nugget of economics: Demand curves can slope upward!
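
For readers who want the mechanics behind that one-liner, here is a stylized subsistence model in the spirit of the potato story. All symbols and numbers are invented for illustration; nothing here comes from the original post.

```latex
% A household must buy C units of food from potatoes (price p1) and a pricier food (price p2 > p1),
% subject to its budget B:   x + y = C   and   p1*x + p2*y = B.
\[
  y = \frac{B - p_1 C}{p_2 - p_1},
  \qquad
  x = C - y = \frac{p_2 C - B}{p_2 - p_1},
  \qquad
  \frac{\partial x}{\partial p_1} = \frac{p_2 C - B}{(p_2 - p_1)^2} > 0 \quad \text{whenever } p_2 C > B.
\]
% Example: C = 10, B = 20, p2 = 4. At p1 = 1 the household buys about 6.7 units of potatoes;
% raise p1 to 1.5 and potato purchases rise to 8. Quantity demanded increases with price.
```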

Follow the students around for a few days, and they’ll be looking for Giffen goods everywhere. Surely, packaged ramen and boxed macaroni and cheese are Giffen goods. So are white bread and rice. Maybe even low-end apartments.

While it’s a fun concept to consider, the potato famine story is likely apocryphal. In truth, it’s nearly impossible to find a Giffen good in the real world. My version of Greg Mankiw’s massive “Principles of Economics” textbook devotes five paragraphs to Giffen goods, but the concept isn’t especially relevant in practice, which is perhaps why it gets only five paragraphs.

Wander into another economics class, and you might catch the lecturer talking about monopsony—that is, a market in which a small number of buyers control the price of inputs such as labor. I say “might” because—like Giffen goods—monopsony is an interesting concept to consider, but very hard to find a clear example of in the real world. Mankiw’s textbook devotes only four paragraphs to monopsony, explaining that the book “does not present a formal model of monopsony because, in the world, monopsonies are rare.”
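
For reference, the standard textbook setup runs roughly as follows (the notation is mine, a sketch rather than anything from Mankiw or the original post): a firm that is the dominant buyer of labor faces an upward-sloping supply curve, so hiring one more worker raises the wage it must pay everyone, and it stops hiring before the competitive level.

```latex
% The firm chooses employment L to maximize revenue R(L) minus its wage bill w(L)*L,
% where w(L) is the upward-sloping labor supply curve it faces:
\[
  \max_{L}\; R(L) - w(L)\,L
  \quad\Longrightarrow\quad
  R'(L^{m}) = \underbrace{w(L^{m}) + L^{m}\, w'(L^{m})}_{\text{marginal factor cost}} \;>\; w(L^{m}).
\]
% Because w'(L) > 0, the wage paid is below the marginal revenue product, and employment L^m
% falls short of the competitive level L^c, where R'(L^c) = w(L^c).
```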

Even so, monopsony is a hot topic these days. It seems that monopsonies are everywhere. Walmart and Amazon are monopsonist employers. So are poultry, pork, and beef companies. Local hospitals monopsonize the market for nurses and physicians. The National Collegiate Athletic Association is a monopsony employer of college athletes. Ultimate Fighting Championship has a monopsony over mixed-martial-arts fighters.

In 1994, David Card and Alan Krueger’s earthshaking study found that a minimum-wage increase had no measurable effect on fast-food employment. They investigated monopsony power as one explanation but concluded that a monopsony model was not supported by their findings. They note:

[W]e find that prices of fast-food meals increased in New Jersey relative to Pennsylvania, suggesting that much of the burden of the minimum-wage rise was passed on to consumers. Within New Jersey, however, we find no evidence that prices increased more in stores that were most affected by the minimum-wage rise. Taken as a whole, these findings are difficult to explain with the standard competitive model or with models in which employers face supply constraints (e.g., monopsony or equilibrium search models). [Emphasis added]

Even so, the monopsony hunt was on, and it intensified during President Barack Obama’s administration. During his term, the U.S. Justice Department (DOJ) brought suit against several major Silicon Valley employers for anticompetitively entering into agreements not to “poach” programmers and engineers from each other. The administration also brought suit against a hospital association for an agreement to set uniform billing rates for certain nurses. Both cases settled, but the Silicon Valley allegations led to a private class-action lawsuit.

In 2016, Obama’s Council of Economic Advisers published an issue brief on labor-market monopsony. The brief concluded that “evidence suggest[s] that firms may have wage-setting power in a broad range of settings.”

Around the same time, the Obama administration announced that it intended to “criminally investigate naked no-poaching or wage-fixing agreements that are unrelated or unnecessary to a larger legitimate collaboration between the employers.” The DOJ argued that no-poach agreements that allocate employees between companies are per se unlawful restraints of trade that violate Section 1 of the Sherman Act.

If one believes that monopsony power is stifling workers’ wages and benefits, then this would be a good first step toward building a body of evidence and precedent. Go after the low-hanging fruit of a conspiracy that is a per se violation of the Sherman Act, secure some wins, and then start probing the more challenging cases.

After several matters that resulted in settlements, the DOJ brought its first criminal wage-fixing case in late 2020. In United States v. Jindal, the government charged two employees of a Texas health-care staffing company with colluding with another staffing company to decrease pay rates for physical therapists and physical-therapist assistants.

The defense in Jindal conceded that price-fixing was per se illegal under the Sherman Act but argued that prices and wages are two different concepts. Therefore, the defense claimed that, even if the defendants had engaged in wage-fixing, the conduct would not be per se illegal. That was a stretch, and the district court judge was having none of it, ruling: “The antitrust laws fully apply to the labor markets, and price-fixing agreements among buyers … are prohibited by the Sherman Act.”

Nevertheless, the jury in Jindal found the defendants not guilty of wage-fixing in violation of the Sherman Act, and also not guilty of a related conspiracy charge.

The DOJ also brought criminal no-poach cases against three other health-care companies and their employees: United States v. Surgical Care Affiliates LLC; United States v. Hee; and United States v. DaVita Inc. Each of the indictments alleged no-poach agreements in which defendants conspired with competitors not to recruit each other’s employees. Hee also included wage-fixing allegations.

Before trial, the defense in DaVita filed a motion to dismiss, arguing that no-poach agreements did not amount to illegal market-allocation agreements. Instead, the defense claimed that no-poach agreements were something less restrictive. Rather than a flat-out refusal to hire competitors’ employees, they were more akin to agreeing not to seek out competitors’ employees. As in Jindal, this was too much of a stretch for the judge, who ruled that a no-poach agreement could be an illegal market-allocation agreement.

A day after the Jindal verdict, the jury in DaVita acquitted the kidney-dialysis provider and its former CEO of charges that they conspired with competitors to suppress competition for employees through no-poach agreements.

The DaVita jurors appeared to be hung up on the definition of “meaningful competition” in the relevant market. The defense presented information showing that, despite any agreements, employees frequently changed jobs among the companies. Thus, it was argued that any agreement did not amount to an allocation of the market for employees.

The prosecution called several corporate executives who testified that the non-solicitation agreements merely required DaVita employees to tell their bosses they were looking for another job before they could be considered for positions at the three alleged co-conspirator companies. Some witnesses indicated that, by informing their bosses, they were able to obtain promotions and/or increased compensation. This was supported by expert testimony concluding that DaVita salaries changed during the alleged conspiracy period at a higher rate than in the health-care industry as a whole. That finding is at odds with the theory that the non-solicitation agreement was designed to stabilize or suppress compensation.

The Jindal and DaVita cases highlight some of the enormous challenges in mounting a labor-monopsonization case. Even if agencies can “win” or get concessions on defining the relevant markets, they still face challenges in establishing that no-poach agreements amount to a “meaningful” restraint of trade. DaVita suggests that a showing of job turnover and/or increased compensation during an alleged conspiracy period may be sufficient to convince a jury that a no-poach agreement may not be anticompetitive and—under certain circumstances—may even be pro-competitive.

For now, the hunt for a monopsony labor market continues, along with the hunt for the ever-elusive Giffen good.

[Continuing our FTC UMC Rulemaking symposium, today’s first guest post is from Richard J. Pierce Jr., the Lyle T. Alverson Professor of Law at George Washington University Law School. We are also publishing a related post today from Andrew K. Magloughlin and Randolph J. May of the Free State Foundation. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

FTC Rulemaking Power

In 2021, President Joe Biden appointed a prolific young scholar, Lina Khan, to chair the Federal Trade Commission (FTC). Khan strongly dislikes almost every element of antitrust law. She has stated her intention to use notice and comment rulemaking to change antitrust law in many ways. She was unable to begin this process for almost a year because the FTC was evenly divided between Democratic and Republican appointees, and she has not been able to elicit any support for her agenda from the Republican members. She will finally get the majority she needs to act in the next few days, as the U.S. Senate appears set to confirm Alvaro Bedoya to the fifth spot on the commission.   

Chair Khan has argued that the FTC has the power to use notice-and-comment rulemaking to define the term “unfair methods of competition” as that term is used in Section 5 of the Federal Trade Commission Act. Section 5 authorizes the FTC to define and to prohibit both “unfair acts” and “unfair methods of competition.” For more than 50 years after the 1914 enactment of the statute, the FTC, Congress, courts, and scholars interpreted it to empower the FTC to use adjudication to implement Section 5, but not to use rulemaking for that purpose.

In 1973, the U.S. Court of Appeals for the D.C. Circuit held that the FTC has the power to use notice-and-comment rulemaking to implement Section 5. Congress responded by amending the statute in 1975 and 1980 to add many time-consuming and burdensome procedures to the notice-and-comment process. Those added procedures had the effect of making the rulemaking process so long that the FTC gave up on its attempts to use rulemaking to implement Section 5.

Khan claims that the FTC has the power to use notice-and-comment rulemaking to define “unfair methods of competition,” even though it must use the extremely burdensome procedures that Congress added in 1975 and 1980 to define “unfair acts.” Her claim rests on two beliefs: first, that the current U.S. Supreme Court would uphold the 1973 D.C. Circuit decision holding that the FTC has the power to use notice-and-comment rulemaking to implement Section 5; and second, that a peculiarly worded provision of the 1975 amendment to the FTC Act allows the FTC to use notice-and-comment rulemaking to define “unfair methods of competition,” even though it requires the FTC to use the extremely burdensome procedures to issue rules that define “unfair acts.” The FTC has not attempted to use notice-and-comment rulemaking to define “unfair methods of competition” since Congress amended the statute in 1975.

I am skeptical of Khan’s argument. I doubt that the Supreme Court would uphold the 1973 D.C. Circuit opinion, because the D.C. Circuit used a method of statutory interpretation that no modern court uses and that is inconsistent with the methods of statutory interpretation that the Supreme Court uses today. I also doubt that the Supreme Court would interpret the 1975 statutory amendment to distinguish between “unfair acts” and “unfair methods of competition” for purposes of the procedures that the FTC is required to use to issue rules to implement Section 5.

Even if the FTC has the power to use notice-and-comment rulemaking to define “unfair methods of competition,” I am confident that the Supreme Court would not uphold an exercise of that power that has the effect of making a significant change in antitrust law. That would be a perfect candidate for application of the major questions doctrine. The Court will not uphold an “unprecedented” action of “vast economic or political significance” unless it has “unmistakable legislative support.” To illustrate my point, I will now describe four hypothetical exercises of the rulemaking power that Khan believes the FTC possesses.

Hypothetical Exercises of FTC Rulemaking Power

Creation of a Right to Repair

President Biden has urged the FTC to create a right for an owner of any product to repair the product or to have it repaired by an independent service organization (ISO). The Supreme Court’s 1992 opinion in Eastman Kodak v. Image Technical Services tells us all we need to know about the likelihood that it would uphold a rule that confers a right to repair. When Kodak took actions that made it impossible for ISOs to repair Kodak photocopiers, the ISOs argued that Kodak’s action violated both Section 1 and Section 2 of the Sherman Act. The Court held that Kodak could prevail only if it could persuade a jury that its view of the facts was accurate. The Court remanded the case for a jury trial to address three contested issues of fact.

The Court’s reasoning in Kodak is inconsistent with any version of a right to repair that the FTC might attempt to create through rulemaking. The Court expressed its view that allowing an ISO to repair a product sometimes has good effects and sometimes has bad effects. It concluded that it could not decide whether Kodak’s new policy was good or bad without first resolving the three issues of fact on which the parties disagreed. In a 2021 report to Congress, the FTC agreed with the Supreme Court. It identified seven factual contingencies that can cause a prohibition on repair of a product by an ISO to have good effects or bad effects. It is naïve to expect the Supreme Court to change its approach to repair rights in response to a rule in which the FTC attempts to create a right to repair, particularly when the FTC, immediately prior to Khan’s arrival at the agency, told Congress that it agrees with the Court’s approach.

Prohibition of Reverse-Payment Settlements of Patent Disputes Involving Prescription Drugs

Some people believe that settlements of patent-infringement disputes in which the manufacturer of a generic drug agrees not to market the drug in return for a cash payment from the manufacturer of the brand-name drug are thinly disguised agreements to create a monopoly and to share the monopoly rents. Khan has argued that the FTC could issue a rule that prohibits such reverse-payment settlements. Her belief that a court would uphold such a rule is contradicted by the Supreme Court’s 2013 opinion in FTC v. Actavis. The Court unanimously rejected the FTC’s argument in support of a rebuttable presumption that reverse payments are illegal. Three justices argued that reverse-payment settlements can never be illegal if they are within the scope of the patent. The five-justice majority held that a court can determine that a reverse-payment settlement is illegal only after a hearing in which it applies the rule of reason to determine whether the payment was reasonable.

A Prohibition on Below-Cost Pricing When the Firm Cannot Recoup Its Losses

Khan believes that illegal predatory pricing by dominant firms is widespread and extremely harmful to competition. She particularly dislikes the Supreme Court’s test for identifying predatory pricing. That test requires proof that a firm that engages in below-cost pricing has a reasonable prospect of recouping its losses. She wants the FTC to issue a rule that defines predatory pricing as below-cost pricing, even absent any prospect that the firm will be able to recoup its losses.

The history of the Court’s predatory-pricing test shows how unrealistic it is to expect the Court to uphold such a rule. The Court first announced the test in a Sherman Act case in 1986. Plaintiffs attempted to avoid the precedential effect of that decision by filing complaints based on predatory pricing under the Robinson-Patman Act. The Court rejected that attempt in a 1993 opinion. The Court made it clear that the test for determining whether a firm is engaged in illegal predatory pricing is the same no matter whether the case arises under the Sherman Act or the Robinson-Patman Act. The Court undoubtedly would reject the FTC’s effort to change the definition of predatory pricing by relying on the FTC Act instead of the Sherman Act or the Robinson-Patman Act.

A Prohibition of Noncompete Clauses in Contracts to Employ Low-Wage Employees

President Biden has expressed concern about the increasing prevalence of noncompete clauses in employment contracts applicable to low-wage employees. He wants the FTC to issue a rule that prohibits inclusion of noncompete clauses in contracts to employ low-wage employees. The Supreme Court would be likely to uphold such a rule.

A rule that prohibits inclusion of noncompete clauses in employment contracts applicable to low-wage employees would differ from the other three rules I discussed in many respects. First, it has long been the law that noncompete clauses can be included in employment contracts only in narrow circumstances, none of which have any conceivable application to low-wage contracts. The only reason that competition authorities did not bring actions against firms that include noncompete clauses in low-wage employment contracts was their belief that state labor law would be effective in deterring firms from engaging in that practice. Thus, the rule would be entirely consistent with existing antitrust law.

Second, many studies have found that state labor law has not been effective in deterring firms from including noncompete clauses in low-wage employment contracts, and many others have found that the increasing use of such clauses is causing substantial damage to the performance of labor markets. Thus, the FTC would be able to support its rule with high-quality evidence.

Third, the Supreme Court’s unanimous 2021 opinion in NCAA v. Alston indicates that the Court is receptive to claims that a practice that harms the performance of labor markets is illegal. Thus, I predict that the Court would uphold a rule that prohibits noncompete clauses in employment contracts applicable to low-wage employees if it holds that the FTC can use notice-and-comment rulemaking to define “unfair methods of competition,” as that term is used in Section 5 of the FTC Act. That caveat is important, however. As I indicated at the beginning of this essay, I doubt that the FTC has that power.

I would urge the FTC not to use notice-and-comment rulemaking to address the problems that are caused by the increasing use of noncompete clauses in low-wage contracts. There is no reason for the FTC to put a lot of time and effort into a notice-and-comment rulemaking in the hope that the Court will conclude that the FTC has the power to use notice-and-comment rulemaking to implement Section 5. The FTC can implement an effective prohibition on the inclusion of noncompete clauses in employment contracts applicable to low-wage employees by using a combination of legal tools that it has long used and that it clearly has the power to use—issuance of interpretive rules and policy statements coupled with a few well-chosen enforcement actions.

Alternative Ways to Improve Antitrust Law       

There are many other ways in which Khan can move antitrust law in the directions that she prefers. She can make common cause with the many mainstream antitrust scholars who have urged incremental changes in antitrust law and who have conducted the studies needed to support those proposed changes. Thus, for instance, she can move aggressively against other practices that harm the performance of labor markets, change the criteria that the FTC uses to decide whether to challenge proposed mergers and acquisitions, and initiate actions against large platform firms that favor their products over the products of third parties that they sell on their platforms.     

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate lawsuit against Apple by the app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations requiring that virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability—one company’s product working with another company’s.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable like this is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.
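
One way to see the point is with a rough lifetime-price formula (purely illustrative; the notation is mine, not from the original post): the price a locked-in customer effectively pays is the upfront sticker price plus the discounted stream of proprietary supplies and updates over the product’s life.

```latex
% Effective (lifetime) price with discount rate r over a product life of T periods:
\[
  P_{\text{effective}} \;=\; P_{\text{upfront}} \;+\; \sum_{t=1}^{T} \frac{p_{\text{supplies},\,t}}{(1+r)^{t}} .
\]
% A low sticker price can thus coexist with a higher total cost of ownership, which is why
% mandating that the supplies be interoperable (and hence competitively priced) can unravel this model.
```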

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There may thus be particular costs that prevent interoperability from being worth the trade-off; for example:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop there: the fact that a market is not functioning well and does not currently provide some form of interoperability does not mean that it would provide interoperability if it were functioning well.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable or less scrupulous, apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nods to the costs of this requirement are provisions that further require platforms to set “reasonably necessary” security standards and a provision allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto the objection that mandatory interoperability might limit differentiation among competitors: imposing the old micro-USB standard on Apple, for example, might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which is also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as being of strategic market status—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms, and that may prove true, but there is nothing in the proposal limiting the DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-Based Given Its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear that Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches is always the case, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, it seems that, to Mr. Coscelli, a more cautious standard of proof is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would place an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competitive interventions, or PCI, SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar.  Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal   to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but in the meantime LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could become common under the CMA’s proposal because the DMU’s orders would go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not up to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital Markets Taskforce, A new pro-competition regime for digital markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-competition Regime for Digital Markets, at ¶ 27, July 2021, available at: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition-policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only way to explain why these markets look the way they do? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous principles of economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal-cost pricing, with one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
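A standard textbook restatement of this logic (not Bertrand’s own notation) makes the argument precise. Assume two firms selling an identical product at the same constant marginal cost $c$, with no capacity constraints and market demand $D(p)$; each firm’s profit is then

$$
\pi_i(p_i, p_j) =
\begin{cases}
(p_i - c)\,D(p_i) & \text{if } p_i < p_j,\\
\tfrac{1}{2}(p_i - c)\,D(p_i) & \text{if } p_i = p_j,\\
0 & \text{if } p_i > p_j.
\end{cases}
$$

Whenever the rival prices above marginal cost, a firm can capture the entire market by undercutting ever so slightly, so no price above $c$ can be sustained; the only mutually consistent outcome is $p_1 = p_2 = c$, which is the perfectly competitive result, even with just two sellers.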

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not a necessary, condition for competitive pricing. Monopolies can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when there are extremely low barriers to entry.

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to users whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that there is at least one exchange that meets their needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) exchange that does exist to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given the foregoing, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of competition they face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network requires little more than downloading an app and importing your contacts; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that the cost of learning to use a new app is mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID-19 pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet the new demand created by this more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.

Conclusion

Of course, none of this should be construed as declaring that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival can rapidly gain user traction, that prospect alone will discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because incumbents are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable whether they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to grapple meaningfully with these issues has come to shape the prevailing zeitgeist in tech-policy debates. Cournot’s and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time the same empirical standards were applied to tech policy.

The patent system is too often caricatured as involving the grant of “monopolies” that may be used to delay entry and retard competition in key sectors of the economy. The accumulation of allegedly “poor-quality” patents into thickets and portfolios held by “patent trolls” is said by critics to spawn excessive royalty-licensing demands and threatened “holdups” of firms that produce innovative products and services. These alleged patent abuses have been characterized as a wasteful “tax” on high-tech implementers of patented technologies, which inefficiently raises price and harms consumer welfare.

Fortunately, solid scholarship has debunked these stories and instead pointed to the key role patents play in enhancing competition and driving innovation. See, for example, here, here, here, here, here, here, and here.

Nevertheless, early indications are that the Biden administration may be adopting a patent-skeptical attitude. Such an attitude was revealed, for example, in the president’s July 9 Executive Order on Competition (which suggested an openness to undermining the Bayh-Dole Act by using march-in rights to set prices; to weakening pharmaceutical patent rights; and to weakening standard essential patents) and in the administration’s inexplicable decision to waive patent protection for COVID-19 vaccines (see here and here).

Before it takes further steps that would undermine patent protections, the administration should consider new research that underscores how patents help to spawn dynamic market growth through “design around” competition and through licensing that promotes new technologies and product markets.

Patents Spawn Welfare-Enhancing ‘Design Around’ Competition

Critics sometimes bemoan the fact that patents covering a new product or technology allegedly retard competition by preventing new firms from entering a market. (Never mind the fact that the market might not have existed but for the patent.) This thinking, which confuses a patent with a product-market monopoly, is badly mistaken. It is belied by the fact that the publicly available patented technology itself (1) provides valuable information to third parties; and (2) thereby incentivizes them to innovate and compete by refining technologies that fall outside the scope of the patent. In short, patents on important new technologies stimulate, rather than retard, competition. They do this by leading third parties to “design around” the patented technology and thus generate competition that features a richer set of technological options realized in new products.

The importance of design around is revealed, for example, in the development of the incandescent light bulb market in the late 19th century, in reaction to Edison’s patent on a long-lived light bulb. In a 2021 article in the Journal of Competition Law and Economics, Ron D. Katznelson and John Howells did an empirical study of this important example of product innovation. The article’s synopsis explains:

Designing around patents is prevalent but not often appreciated as a means by which patents promote economic development through competition. We provide a novel empirical study of the extent and timing of designing around patent claims. We study the filing rate of incandescent lamp-related patents during 1878–1898 and find that the enforcement of Edison’s incandescent lamp patent in 1891–1894 stimulated a surge of patenting. We studied the specific design features of the lamps described in these lamp patents and compared them with Edison’s claimed invention to create a count of noninfringing designs by filing date. Most of these noninfringing designs circumvented Edison’s patent claims by creating substitute technologies to enable participation in the market. Our forward citation analysis of these patents shows that some had introduced pioneering prior art for new fields. This indicates that invention around patents is not duplicative research and contributes to dynamic economic efficiency. We show that the Edison lamp patent did not suppress advance in electric lighting and the market power of the Edison patent owner weakened during this patent’s enforcement. We propose that investigation of the effects of design around patents is essential for establishing the degree of market power conferred by patents.

In a recent commentary, Katznelson highlights the procompetitive consumer welfare benefits of the Edison light bulb design around:

GE’s enforcement of the Edison patent by injunctions did not stifle competition nor did it endow GE with undue market power, let alone a “monopoly.” Instead, it resulted in clear and tangible consumer welfare benefits. Investments in design-arounds resulted in tangible and measurable dynamic economic efficiencies by (a) increased competition, (b) lamp price reductions, (c) larger choice of suppliers, (d) acceleration of downstream development of new electric illumination technologies, and (e) collateral creation of new technologies that would not have been developed for some time but for the need to design around Edison’s patent claims. These are all imparted benefits attributable to patent enforcement.

Katznelson further explains that “the mythical harm to innovation inflicted by enforcers of pioneer patents is not unique to the Edison case.” He cites additional research debunking claims that the Wright brothers’ pioneer airplane patent seriously retarded progress in aviation (“[a]ircraft manufacturing and investments grew at an even faster pace after the assertion of the Wright Brothers’ patent than before”) and debunking similar claims made about the early radio industry and the early automobile industry. He also notes strong research refuting the patent holdup conjecture regarding standard essential patents. He concludes by bemoaning “infringers’ rhetoric” that “suppresses information on the positive aspects of patent enforcement, such as the design-around effects that we study in this article.”

The Bayh-Dole Act: Licensing that Promotes New Technologies and Product Markets

The Bayh-Dole Act of 1980 has played an enormously important role in accelerating American technological innovation by creating a property rights-based incentive to use government labs. As this good summary from the Biotechnology Innovation Organization puts it, it “[e]mpowers universities, small businesses and non-profit institutions to take ownership [through patent rights] of inventions made during federally-funded research, so they can license these basic inventions for further applied research and development and broader public use.”

The act has continued to generate many new welfare-enhancing technologies and related high-tech business opportunities even during the “COVID slowdown year” of 2020, according to a newly released survey by a nonprofit organization representing the technology management community (see here):  

° The number of startup companies launched around academic inventions rose from 1,040 in 2019 to 1,117 in 2020. Almost 70% of these companies locate in the same state as the research institution that licensed them—making Bayh-Dole a critical driver of state and regional economic development;
° Invention disclosures went from 25,392 to 27,112 in 2020;
° New patent applications increased from 15,972 to 17,738;
° Licenses and options went from 9,751 in 2019 to 10,050 in 2020, with 60% of licenses going to small companies; and
° Most impressive of all—new products introduced to the market based on academic inventions jumped from 711 in 2019 to 933 in 2020.

Despite this continued record of success, the Biden administration has taken actions that create uncertainty about the government’s support for Bayh-Dole.

As explained by the Congressional Research Service, “march-in rights allow the government, in specified circumstances, to require the contractor or successors in title to the patent to grant a ‘nonexclusive, partially exclusive, or exclusive license’ to a ‘responsible applicant or applicants.’ If the patent owner refuses to do so, the government may grant the license itself.” Government march-in rights thus far have not been invoked, but a serious threat of their routine invocation would greatly disincentivize future use of Bayh-Dole, thereby undermining patent-backed innovation.

Despite this, the president’s July 9 Executive Order on Competition (noted above) instructed the U.S. Commerce Department to defer finalizing a regulation (see here) “that would have ensured that march-in rights under Bayh Dole would not be misused to allow the government to set prices, but utilized for its statutory intent of providing oversight so good faith efforts are being made to turn government-funded innovations into products. But that’s all up in the air now.”

What’s more, a new U.S. Energy Department policy that would more closely scrutinize Bayh-Dole patentees’ licensing transactions and acquisitions (apparently to encourage more domestic manufacturing) has raised questions in the Bayh-Dole community and may discourage licensing transactions (see here and here). Added to this is the fact that “prominent Members of Congress are pressing the Biden Administration to misconstrue the march-in rights clause to control prices of products arising from National Institutes of Health and Department of Defense funding.” All told, therefore, the outlook for continued patent-inspired innovation through Bayh-Dole processes appears to be worse than it has been in many years.

Conclusion

The patent system does far more than provide potential rewards to enhance incentives for particular individuals to invent. The system also creates a means to enhance welfare by facilitating the diffusion of technology through market processes (see here).

But it does even more than that. It actually drives new forms of dynamic competition by inducing third parties to design around new patents, to the benefit of consumers and the overall economy. As revealed by the Bayh-Dole Act, it also has facilitated the more efficient use of federal labs to generate innovation and new products and processes that would not otherwise have seen the light of day. Let us hope that the Biden administration pays heed to these benefits to the American economy and thinks again before taking steps that would further weaken our patent system.     

The U.S. economy survived the COVID-19 pandemic and associated government-imposed business shutdowns with a variety of innovations that facilitated online shopping, contactless payments, and reduced use and handling of cash, a known vector of disease transmission.

While many of these innovations were new, they would have been impossible but for their reliance on an established and ubiquitous technological infrastructure: the global credit and debit-card payments system. Not only did consumers prefer to use plastic instead of cash, but the number of merchants going completely “cashless” also quadrupled in the first two months of the pandemic alone. From food delivery to online shopping, many small businesses were able to survive largely because of payment cards.

But there are costs to maintain the global payment-card network that processes billions of transactions daily, and those costs are higher for online payments, which present elevated fraud and security risks. As a result, while the boom in online shopping over this past year kept many retailers and service providers afloat, that hasn’t prevented them from grousing about their increased card-processing costs.

So it is that retailers are now lobbying Washington to impose new regulations on payment-card markets designed to force down the fees they pay for accepting debit and credit cards. Called interchange fees, these are charged on each transaction by the banks that issue the cards, and they are part of a complex process that connects banks, card networks, merchants, and consumers.

Fig. 1: A basic illustration of the 3- and 4-party payment-processing networks that underlie the use of credit cards.

Regulation II—the Federal Reserve rule implementing a provision of 2010’s Dodd–Frank Wall Street Reform and Consumer Protection Act commonly known as the “Durbin amendment,” after its primary sponsor, Senate Majority Whip Richard Durbin (D-Ill.)—placed price controls on interchange fees for debit cards issued by larger banks and credit unions (those with more than $10 billion in assets). It also required all debit-card issuers to offer multiple networks for “routing” and processing card transactions. Merchants now want to expand these routing provisions to credit cards, as well. The consequences for consumers, especially low-income consumers, would be disastrous.

The price controls imposed by the Durbin amendment have led to a 52% decrease in the average per-transaction interchange fee, resulting in billions of dollars in revenue losses for covered depositories. But banks and credit unions have passed on these losses to consumers in the form of fewer free checking accounts, higher fees, and higher monthly minimums required to avoid those fees.

One empirical study found that the share of covered banks offering free checking accounts fell from 60% to 20%, average monthly checking-account fees increased from $4.34 to $7.44, and the minimum account balance required to avoid those fees increased by roughly 25%. Another study found that fees charged by covered institutions were 15% higher than they would have been absent the price regulation; those increases offset about 90% of the depositories’ lost revenue. Banks and credit unions also largely eliminated cash-back and other rewards on debit cards.

In fact, those who have been most harmed by the Durbin amendment’s consequences have been low-income consumers. Middle-class families hardly noticed the higher minimum balance requirements, or used their credit cards more often to offset the disappearance of debit-card rewards. Those with the smallest checking account balances, however, suffered the most from reduced availability of free banking and higher monthly maintenance and other fees. Priced out of the banking system, as many as 1 million people might have lost bank accounts in the wake of the Durbin amendment, forcing them to turn to such alternatives as prepaid cards, payday lenders, and pawn shops to make ends meet. Lacking bank accounts, these needy families weren’t even able to easily access their much-needed government stimulus funds at the onset of the pandemic without paying fees to alternative financial services providers.

In exchange for higher bank fees and reduced benefits, merchants promised lower prices at the pump and register. This has not been the case. Scholarship since implementation of the Federal Reserve’s rule shows that whatever benefits have been gained have gone to merchants, with little pass-through to consumers. For instance, one study found that covered banks saw their interchange revenue drop by 25%, but found little evidence of a corresponding drop in merchants’ prices.

Another study found that the benefits and costs to merchants have been unevenly distributed, with retailers who sell large-ticket items receiving a windfall, while those specializing in small-ticket items have often faced higher effective rates. Discounts previously offered to smaller merchants have been eliminated to offset reduced revenues from big-box stores. According to a 2014 Federal Reserve study, when acceptance fees increased, merchants hiked retail prices; but when fees were reduced, merchants pocketed the windfall.

Moreover, while the Durbin amendment’s proponents claimed it would only apply to big banks, the provisions that determine how transactions are routed on the payment networks apply to cards issued by credit unions and community banks, as well. As a result, smaller players have also seen average interchange fees beaten down, reducing this revenue stream even as they have been forced to cope with higher regulatory costs imposed by Dodd-Frank. Extending the Durbin amendment’s routing provisions to credit cards would further drive down interchange-fee revenue, creating the same negative spiral of higher consumer fees and reduced benefits that the original Durbin amendment spawned for debit cards.

More fundamentally, merchants believe it is their decision—not yours—as to which network will route your transaction. You may prefer Visa or Mastercard because of your confidence in their investments in security and anti-fraud detection, but later discover that the merchant has routed your transaction through a processor you’ve never heard of, simply because that network is cheaper for the merchant.

The resilience of the U.S. economy during this horrible viral contagion is due, in part, to the ubiquitous access of American families to credit and debit cards. That system has proved its mettle this past year, seamlessly adapting to the sudden shift to electronic payments. Yet, in the wake of this American success story, politicians and regulators, egged on by powerful special interests, want instead to meddle with this system just so big-box retailers can transfer their costs onto American families and small banks. As the economy and public health recover, Congress and regulators should resist the impulse to impose new financial harm on working-class families.

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) makes them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence that they hold in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to show clearly that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. That does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper fails to demonstrate evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon restrictive assumptions that do not necessarily hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must also be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare. Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
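A minimal sketch of that arithmetic in Python may help readers check the figures; the 10,000-merger population, the 75% accuracy rate, and the two prevalence scenarios are the purely hypothetical numbers from the example above, and the function name is merely illustrative.

```python
# Hypothetical error-cost arithmetic: a screening test that classifies both
# benign and anticompetitive mergers correctly 75% of the time.
def error_counts(total=10_000, anticompetitive=1_000, accuracy=0.75):
    benign = total - anticompetitive
    false_positives = (1 - accuracy) * benign           # benign deals wrongly blocked
    false_negatives = (1 - accuracy) * anticompetitive  # bad deals wrongly cleared
    correct = total - false_positives - false_negatives
    return correct, false_positives, false_negatives

for bad in (1_000, 2_500):
    correct, fp, fn = error_counts(anticompetitive=bad)
    print(f"{bad} anticompetitive: {correct:.0f} correct, "
          f"{fp:.0f} false positives, {fn:.0f} false negatives "
          f"(doing nothing: {bad} false negatives)")
```

As the output confirms, with 1,000 anticompetitive deals the test produces 2,500 errors against 1,000 from doing nothing, and the two approaches break even when anticompetitive deals number 2,500.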

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while that figure falls to 13.4% when the pipelines overlap. The authors argue that this gap is evidence of killer acquisitions. But it misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
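As a quick cross-check, the 23.4% relative decline quoted above follows directly from these two continuation rates: (17.5% - 13.4%) / 17.5% ≈ 23.4%.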

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that such acquisitions could increase innovation by boosting the returns to it.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they hardly provide a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s successful effort to bring an mRNA vaccine against COVID-19 to market offers a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands was the EU’s foundational case on excessive pricing, and the European Commission reiterated in its 2009 guidance paper on abuse-of-dominance cases that such exploitative abuses remain actionable, the commission had for some time shown little appetite for bringing them. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.
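
To make the doctrinal contrast concrete, the stylized sketch below encodes the two tests as simple predicates. It is a minimal illustration only: the dataclass, the function names, and the use of average variable cost as a stand-in for “some measure of incremental cost” are assumptions made for exposition, and real cases turn on far richer facts (cost measurement, intent evidence, market structure).

```python
from dataclasses import dataclass

@dataclass
class PricingConduct:
    price: float
    avg_variable_cost: float
    avg_total_cost: float
    recoupment_likely: bool        # realistic prospect of later supra-competitive pricing
    exclusionary_plan_shown: bool  # evidence of a plan to eliminate a competitor

def us_predation(c: PricingConduct) -> bool:
    """Stylized Brooke Group test: below-cost pricing AND a realistic prospect of recoupment."""
    below_cost = c.price < c.avg_variable_cost  # proxy for "some measure of incremental cost"
    return below_cost and c.recoupment_likely

def eu_predation(c: PricingConduct) -> bool:
    """Stylized EU approach (see France Télécom): no recoupment requirement."""
    if c.price < c.avg_variable_cost:
        return True  # presumed predatory
    if c.price < c.avg_total_cost and c.exclusionary_plan_shown:
        return True  # predatory if part of a plan to eliminate a competitor
    return False

# Aggressive pricing with no prospect of recoupment: lawful under the U.S. test,
# but potentially abusive under the EU standard.
conduct = PricingConduct(price=8.0, avg_variable_cost=9.0, avg_total_cost=12.0,
                         recoupment_likely=False, exclusionary_plan_shown=False)
print(us_predation(conduct))  # False
print(eu_predation(conduct))  # True
```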

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass for its breezy analysis; it saw no objection to these findings. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (who often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy.  As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures.  Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope to obtain funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so. 

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) had rejected the view that standard-essential patent owners posed a high risk of patent holdup, which would therefore justify special limitations on enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity. 

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the policy bundle of weak-IP/strong antitrust does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.  

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has informed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that more than 80 companies would be affected, but the number is likely to be far higher. The Klobuchar bill does not explicitly outlaw such mergers; rather, under certain circumstances, it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
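
To illustrate how mechanical this trigger is, the sketch below checks the burden-shifting condition as described above. It is a simplified illustration under stated assumptions: the function and parameter names are invented for exposition, and the bill contains additional triggers and definitions that are not modeled here.

```python
def burden_shifts_to_parties(acquirer_market_cap: float,
                             acquirer_assets: float,
                             acquirer_net_revenue: float,
                             deal_value: float) -> bool:
    """Stylized version of one Klobuchar-bill trigger: a $100 billion-plus acquirer
    (by market cap, assets, or annual net revenue) making an acquisition valued
    at $50 million or more must prove the deal's benefits outweigh its risks."""
    BILLION, MILLION = 1e9, 1e6
    big_acquirer = max(acquirer_market_cap, acquirer_assets, acquirer_net_revenue) > 100 * BILLION
    sizable_deal = deal_value >= 50 * MILLION
    return big_acquirer and sizable_deal

# A $120 billion company buying a $60 million startup: burden shifts to the parties.
print(burden_shifts_to_parties(120e9, 30e9, 25e9, 60e6))  # True
# The same deal by a $20 billion company: burden stays with the government.
print(burden_shifts_to_parties(20e9, 5e9, 8e9, 60e6))     # False
```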

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately held Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds that trigger a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application. If the bill passes, we will soon be faced with cases in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, Zoom Video Communications

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., Walgreens Boots Alliance