With the first day of summer less than a week away and political silly season just around the corner, we don’t have much time for hootenannies. Congress needs to channel the wisdom of Jerry Reed, who noted: “We’ve got a long way to go and a short time to get there.”

In early March, Congress allowed the Federal Communications Commission’s (FCC) spectrum-auction authority to lapse for the first time since that authority was granted in 1994. A bill to renew auction authority passed the U.S. House earlier this year, but has yet to be taken up in the Senate. Beyond putting future auctions on hold, the lapse is stifling deployment of some 5G spectrum that has already been auctioned off.

T-Mobile paid $304 million for 7,156 licenses of 2.5 GHz spectrum in last summer’s auction. But according to Fierce Telecom, the company can’t deploy the spectrum it won. That’s because the FCC claims it cannot issue licenses for the 2.5 GHz spectrum to T-Mobile until its auction authority is reinstated.

T-Mobile has sought a waiver—known as special temporary authority—to deploy its newly purchased 2.5 GHz spectrum, and there is some debate over whether the lapse in authority actually scuttles deployment of already-won spectrum. Either way, the clearest path out of this logjam is for Congress to act quickly to restore the FCC’s auction authority.

But Wait, There’s More…

As we’ve noted earlier, the Affordable Connectivity Program (ACP) that provides broadband subsidies to low-income households is likely to run out of funding sometime in the first half of next year. 

Clearly, this will affect the households that receive the subsidies. But just as importantly, the funding uncertainty may threaten anticipated broadband investments. At a Broadband Breakfast event, Brookings Institution Senior Fellow Blair Levin argued that the ACP encourages providers to invest in low-income communities. If it becomes less certain that ACP funds will continue into the future, then investments in those communities become riskier. That added risk would, in turn, reduce the incentives to make those investments.

Because the ACP is funded with appropriations from Congress, rather than a dedicated source—such as the Universal Service Fund surcharge—only Congress can keep the program running past next year.

One More Thing…

Politico reports that “major battle lines” are forming among congressional Republicans over this year’s farm bill. Telecompetitor reports that NTCA—The Rural Broadband Association and WTA–Advocates for Rural Broadband have been pushing Congress to add the ReConnect broadband program into the farm bill as an authorized program, rather than as a congressionally appropriated program. Doing so would make ReConnect a permanent program, rather than one up for periodic renewal.

An International Center of Law & Economics (ICLE) policy brief took issue with some of the implementation features of the ReConnect program: in particular, the U.S. Department of Agriculture’s attempts to introduce back-door rate regulation. This includes giving preference to applicants who agree to abide by so-called “net neutrality” rules similar to those that the FCC had eliminated in 2018’s Restoring Internet Freedom Order. USDA also imposes a quality mandate (100/100 Mbps) and gives preference to applicants who offer a “low-cost option.”

Even so, WTA has been urging that the next farm bill prioritize fiber deployments, rather than minimum-speed requirements. Anything that moves away from de facto rate regulation is an improvement, but that’s moot if Congress doesn’t pass the bill.

‘What We Need Is Another Task Force,’ Said No One Ever

Earlier this week, the FCC announced the creation of a new task force to address privacy and data security. Chair Jessica Rosenworcel’s announcement at the Center for Democracy & Technology conveniently coincides with ongoing debates in Congress over privacy legislation that could potentially limit the FCC’s jurisdiction in these matters.

Rosenworcel claims that the task force will bring together experts from various fields within the agency to coordinate efforts and enforce existing policies. Leading the task force is Loyaan Egal, chief of the agency’s Enforcement Bureau, who will work with the agency’s growing staff dedicated to these issues.

According to Rosenworcel, the task force met for the first time this week. She detailed its responsibilities to include addressing data breaches, SIM-swapping fraud, and the implementation of the Safe Connections Act. She also mentioned ongoing investigations into carriers’ handling of geolocation data, hinting at possible enforcement actions against companies that compromise customer security. 

As Jerry Reed sings: “Ol’ Task Force’s got them ears on and they’re hot on your trail / They ain’t gonna rest ’til you’re in jail.”

These enforcement actions would require the support of at least two of the other three FCC commissioners. But the commission is, for the moment, evenly divided between Democratic and Republican appointees. If the Senate approves the nomination of Anna Gomez to fill the FCC’s open Democratic seat, the commission would have a Democratic majority that would make it easier for Rosenworcel to pursue her proposals. 

There’s never a dull week at the Telecom Hootenanny. Until next time, “keep that diesel truckin’.”

Jennifer Abruzzo, general counsel of the National Labor Relations Board (NLRB), recently issued a memo claiming that certain noncompete clauses in labor contracts are illegal, on grounds that they violate employees’ right to organize and negotiate better working conditions under Section 7 of the National Labor Relations Act (NLRA).

The NLRB isn’t the first Biden administration agency to signal harsher curtailment of employment terms it deems unfair. Earlier this year, the FTC also released a proposed blanket rule against noncompete clauses, which has been subject to substantial criticism on legal and economic grounds (see, for example, here, here, here, here, and here). Though Abruzzo’s memo is nonbinding and doesn’t create a new cause of action, it still encourages agency staff to investigate, examine and prosecute existing noncompetes.

An estimated 18.1% of U.S. workers—more than 28 million individuals—are currently employed under noncompetes. The NLRB memo potentially creates litigation risk for businesses, entailing substantial costs and uncertainty. Rather than protecting future workers, this could chill employment and training opportunities—even though the memo is based on tenuous legal interpretation and reasoning that may not survive a court challenge.

Likely Effects of the NLRB Memo

The memo’s central claim is that noncompetes are illegal unless they are narrowly drafted to protect specific interests, such as preventing an employee from using proprietary information or trade secrets to which they have access. It argues that denying workers the ability to change jobs—by blocking their access to other employment opportunities for which they’re qualified—violates workers’ rights under Section 7 of the NLRA, because it chills their ability to concertedly threaten to resign or undertake other industrial action against employers as part of negotiating or seeking better working conditions.

The memo acknowledges that the NLRA doesn’t apply to employees who supervise others, and explicitly excludes noncompete clauses that clearly restrict only the worker’s managerial or ownership interests in a competing business. It also doesn’t apply to independent-contractor relationships, and focuses on low- and middle-income workers. (It doesn’t, however, exclude workers with higher incomes.) Abruzzo even suggests that noncompete clauses drafted specifically to protect trade secrets and proprietary information may be illegal if courts or the NLRB conclude that there were alternative ways to achieve that objective.

Despite the memo’s nonbinding nature, its limits, and its untested validity in the courts or even at the NLRB, it’s still likely to influence business and employer conduct across the United States, as companies seek to avoid the risk and cost of litigation. Likely responses include a decline in investment in worker training and development, as it will be harder for employers to ensure that employees trained at their expense don’t simply take those benefits to a competitor. This would make it harder for those entering the workforce to get a foot in the door for career advancement relative to their established peers, and could lead skilled jobs in mobile labor sectors like information technology to shift to jurisdictions that are more permissive about noncompetes.

It could also lead companies to become more reluctant to share trade secrets and proprietary information with workers, even where this would make those workers less productive than they otherwise would be. Workers could also suffer from a decline in opportunities for secure jobs, with employers shifting their preference toward recruiting independent contractors. The economic implications of banning or severely restricting noncompetes are discussed further in this public interest comment from my colleagues Alden Abbott and Liya Palagashvili.

The memo and its analysis could also mean that fiduciary-duty laws, no-recruit agreements, and even NDAs or confidentiality agreements are likely to be next on the chopping block, as all of these agreements appear to run afoul of the NLRB logic that they hinder workers from soliciting their coworkers to move to a competitor as part of a broader course of protected concerted activity. This would heighten the likely net effect of creating a less-favorable commercial climate with fewer opportunities for workers, even if those currently employed benefit from the increased ability to seek alternative opportunities in the short term.

Moreover, the risk of NLRB investigations and prosecutions could expose many employers to internal NLRB processes with which they are unfamiliar, including internal decision-making by administrative law judges (ALJs) and a politically appointed board. These daunting prospects would further deter employers from making suitable job offers, even when the noncompete clause they wish to include may not be found to violate the law.

The Memo Stands on Tenuous Legal Footing

So just how likely is it that the NLRB general counsel’s interpretation of the law will be upheld by the courts?

The central argument is that preventing an employee from working for a competitor for a period of time after they cease to work for their current employer “interfere[s] with, restrain[s], or coerce[s]” [Section 8(a)(1), NLRA] him or her from exercising their right to “engage in other concerted activities for the purpose of collective bargaining or other mutual aid or protection” [Section 7, NLRA], as restricting alternative employment opportunities would hinder employees from collectively resigning or threatening to resign to demand better working conditions.

Notably, even a broad noncompete clause doesn’t technically prevent anyone from resigning, threatening to resign, or inducing others to do so. It just makes that threat more difficult or costly to carry out, depending on the limitations of the specific noncompete. For instance, threatening to resign en masse to demand better working conditions is a more credible threat if workers bound by a noncompete are able to perform the same job for employers that don’t compete with their current one, such as those involved in different industries. For example, a quantitative analyst who has signed a noncompete with an investment bank under which they agree not to work for one of its competitors may still seek employment at any firm that doesn’t compete with their original employer.

In its 2017 Minteq International decision, the U.S. Court of Appeals for the D.C. Circuit canvassed established precedent and found that even clauses that don’t explicitly restrict a worker’s Section 7 (NLRA) rights can violate the law if workers would reasonably interpret them as prohibiting Section 7 activities. The majority concluded that NLRB determinations of whether an employer rule illegally interferes with Section 7 activities are to be given considerable deference, provided the determination is “reasonably defensible.” Applying these rules—as well as prior precedent deeming the organization of consumer boycotts by disgruntled employees, as part of a labor dispute, to be protected activity—the circuit court upheld the NLRB’s determination that an employer could not preclude employees from soliciting customers to boycott their employer as part of Section 7 concerted activity.

In a similar vein, courts have recognized threats of concerted resignation as part of a pre-existing labor dispute to be protected (see, e.g., Crescent Wharf & Warehouse Co.). They have also recognized as protected the solicitation of opportunities from alternative employers, albeit only where no contractual terms bar the employees from doing so (see, e.g., QIC Corp.).

An important distinction, however, is that noncompete clauses do not prohibit concerted resignation or the threat of it. They typically apply only to direct competitors of the employer firm, and often only within a specific geographic area. As such, even broad noncompetes place far narrower limits on concerted activities connected to a labor dispute than did the prohibited conduct in the case law that the NLRB’s Abruzzo cites. While the reasoning she forwards may be upheld with respect to specific noncompetes or in specific situations, that is a far cry from the broad net she casts, and encourages other agency staff to cast, in cracking down on these clauses.

Though Minteq International also found that a noncompete clause in the same contract violated Section 7, this was because noncompetes were deemed a “mandatory subject” of collective bargaining and because the union representing the workers hadn’t had the opportunity to negotiate over that clause—not because the clause otherwise violated Section 7. The question of whether noncompete clauses in general—or even that particular one—violate Section 7 was left unaddressed.

Even if courts entertain the argument that noncompetes typically constrain protected Section 7 activities, Abruzzo argues that business-justification defenses—such as the protection of trade secrets, proprietary information, or investments in employee training—are unlikely to hold up, since these interests can be protected through other “narrowly tailored” means. She suggests, for example, that longevity bonuses may encourage workers who’ve benefited from training investments to remain with the firm. In addition to imposing significant new costs on employers, however, such incentives may not be sufficient to prevent employees from leaving. Abruzzo also provides no specifics about alternative means to prevent departing employees from using trade secrets or proprietary information to compete. This may be because many of the possible alternatives, such as nondisclosure agreements, would be captured by the same arguments she levels against noncompetes.

Thus, expanding Section 8(a)(1)’s constraints on terms of employment to include noncompete clauses would establish a standard that the NLRB and the courts have not adopted over the many decades of the NLRA’s existence, and one that Congress did not contemplate when it enacted the law. It would also call into question many other contractual restraints on employment, including fiduciary-duty laws, no-recruit agreements, and even NDAs or confidentiality agreements. It’s thus likely that courts will be reluctant to uphold this broad interpretation as “reasonably defensible,” even if the general counsel’s advice is ultimately reflected in an NLRB decision.

Conclusion

The net effect of the NLRB memo and its signal of greater scrutiny of noncompete agreements is likely to chill the use of such clauses. While this would increase labor mobility in the short term, it would bring a range of likely adverse long-term consequences for opportunity and employment across various industries. A more narrowly tailored approach that singles out specific kinds of noncompetes—or noncompetes in specific situations, such as those covering low-wage workers—may be more desirable, although that policy principle is already reflected in the labor laws of various states.

Congress did not envisage the National Labor Relations Act being used to target noncompete clauses, which have existed in some form since well before the NLRA came into effect. Given the unprecedented nature of blanket rules against all or most noncompetes now contemplated by agencies like the FTC and NLRB, courts are unlikely to uphold the legal reasoning that Abruzzo forwards here. This is, however, unlikely to stop the NLRB’s push against noncompetes from modifying business conduct, while imposing significant costs on those who hire workers in a range of contexts.

The Oregon State Legislature is considering HB 3631, a bill that would ensure that consumers have a “right to repair” their electronic devices. The legislation would require manufacturers to provide consumers and independent repair shops with access to relevant repair information, as well as to make available any parts or tools necessary to carry out repairs.

This may seem like common sense. Many consumers believe they should have the right to have their devices repaired by themselves or by whomever they want. If you buy a mobile phone, you should be able to do anything you want with it. You can use it as intended, give it to a friend or even smash it with a hammer. So why can’t you have someone else repair it?

The short answer is that manufacturers have invested enormous sums to research and develop those devices, and even more time and money to build their brands’ reputations. When devices break or get damaged, manufacturers and their authorized repair shops are usually in the best position to fix them. Because their reputations depend on devices working as expected, they have incentives to ensure that repairs are done properly. An independent iPhone repair shop cares much less about Apple’s reputation than Apple cares about its own. Too many shoddy iPhone repairs may confuse consumers into believing there is something wrong with the phones themselves, rather than with the shop’s repair skills.

Perhaps even more importantly, independent repair shops may not have the same interest or ability to secure their customers’ privacy and data. Hunter Biden’s water-damaged Apple laptop would not be in the headlines if he had taken it to the Apple store for service, instead of an independent repair shop. Many manufacturers and their authorized service providers have built their brands on a reputation for maintaining consumer privacy and protecting consumers’ data.

More broadly, right-to-repair laws can negatively affect manufacturers’ ability to protect sensitive proprietary information about how their devices work. If a bill goes too far in forcing this information “out into the wild,” it can have a widespread chilling effect on future innovation and investments in such products. In the long run, this could harm consumers by restricting the market for electronic devices.

Stepping back, it’s also important to examine the context in which these bills operate. Obviously, consumers care about lower prices, but price is only one factor in consumer preferences. Consumers also value convenience and ease of use. A right-to-repair bill might lower repair bills in the short run, but could just as easily drive up prices in the long run by forcing manufacturers to complicate their products to accommodate the mandates. Moreover, to the extent that the devices’ appeal relies on tight integration and slick design, the new mandates could make them harder to use.

While self-appointed “consumer advocate” groups almost certainly believe what they are saying, they represent only one set of values. The typical consumer may not care much at all about the repair industry, and also may not greet the outcome of this legislation with nearly as much enthusiasm.

Given this context, Oregon lawmakers need to proceed judiciously when contemplating right-to-repair. There are some gains to be had, but these can easily be swamped by unintended consequences.

Grab a partner, find a group, and square up for Truth on the Market’s second Telecom Hootenanny. We’ve got spectrum auctions, broadband subsidies, and a European 5G tango. 

Former FCC Commissioners Have Some Thoughts

Writing with Kirk Arner in RealClearMarkets, Harold Furchtgott-Roth—formerly of the Federal Communications Commission (FCC)—comments on the Spectrum Auction Reauthorization Act, recently passed out of the House Energy and Commerce Committee. Arner and Furchtgott-Roth note that reauthorizing spectrum auctions is a “good and necessary idea,” but take issue with the “$23 billion Ponzi scheme” the reauthorization creates.

First, the bill would authorize the FCC to borrow more than $3 billion from the U.S. Treasury. As for the NTIA, it would be allowed to borrow $200 million, also from the Treasury.

Second, the bill would then authorize the FCC to sell large swaths of federally-owned radio spectrum rights, which are public assets currently owned [by] the government. The proceeds from these auctions would be in the many, many billions of dollars.

Third, following the FCC auctions, the bill would require the proceeds to be distributed in a specific way. Off the top, over $3.2 billion would be returned to the Treasury to essentially pay back the initial FCC and NTIA loans from step one. Beyond that, $19.8 billion would be directed to NTIA, including $14.8 billion to implement a “new 911” program and $5 billion for a new “middle-mile” program.

Kristian Stout and I wrote last year here at Truth on the Market that “the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending.” Similarly, Furchtgott-Roth and Arner argue that the proceeds from spectrum auctions should go entirely to the U.S. Treasury, to be appropriated by Congress. But the authors conclude that the Reauthorization Act instead mandates those proceeds be spent on specific “pet projects.”

It’s a fairly open secret that the Affordable Connectivity Program (ACP) that provides broadband subsidies is expected to run out of money within the next year. At a Brookings Institution event, fellow former FCC Commissioner Michael O’Rielly argued that the FCC is “not well-suited” to distribute ACP funds.

Even so, O’Rielly defended the ACP as the “best structure we have to date” and supported congressional appropriations to fund the program. He’s likely correct, as I concluded in a recent Truth on the Market post:

Even if the ACP program is not perfect in itself, it goes a long way toward satisfying the need to make sure the least well-off stay connected, while also allowing private providers to continue their track record of providing high-speed, affordable broadband.

In a recent research note, Kathryn de Wit of Pew’s Broadband Access Initiative noted that the ACP is interconnected with several other federal broadband programs, and that a lack of funding for the ACP could disrupt them:

Failure to fund ACP could also jeopardize the success of other federal broadband access initiatives. The Broadband, Equity, Access, and Deployment (BEAD) program, which is providing $42 billion to states, requires that ISPs participate in the ACP. And the Treasury Department’s Capital Projects Fund program also requires that ISPs participate in the ACP.

There’s a lot of money swirling around in telecom these days. Let’s hope members of Congress can carve some time out of their upcoming re-election campaigns to make sure it’s put to good use.

EU’s 5G Cost-Sharing Proposal Goes Over Like a Lead Balloon

Meanwhile, in Europe, a majority of EU member states now oppose a proposal by large telecom operators to impose a network fee on tech giants such as Google, Apple, Facebook, Netflix, Amazon, and Microsoft to fund the rollout of 5G and broadband in Europe.

In Truth on the Market, Mikołaj Barczentewicz and Giuseppe Colangelo describe the proposal as “a curious phenomenon in which the commission revived the seemingly dead-and-buried idea of a legally mandated ‘sender pays’ network-traffic scheme.” 

The European Commission’s proposal is driven by large telecoms’ complaints that, in the name of “fairness,” large online platforms—whose users consume a significant chunk of bandwidth—should contribute more to the cost of telecom networks. Telecoms supporting the proposal include Deutsche Telekom, Orange, Telefonica, and Telecom Italia. Barczentewicz and Colangelo conclude that no such “fairness” problem exists and that none of the telecoms’ concerns can be addressed via the proposed “sender pays” or “network fees” scheme. 

Apparently, some EU countries agree. Telecom ministers from 18 countries expressed concern that the proposal might violate the EU’s “net neutrality” rules and would impose extra costs that would be passed on to consumers.

Rent seeking never rests, so it looks like this is just one of the first dances in Europe’s Network Fee Hootenanny.

The Federal Trade Commission (FTC) recently announced that it would sue to block Amgen’s proposed $27.8 billion acquisition of Horizon Therapeutics. The challenge represents a landmark in the history of pharmaceutical-industry antitrust enforcement, as the industry’s mergers with and acquisitions of smaller companies have largely gone unchallenged.

In Part One, I reviewed the basic structure and function of the pharmaceutical industry, as well as the theory of harm that the FTC is bringing. In this part, I take a much deeper dive into the economic literature to determine whether the FTC’s theory of harm is likely to hold up in court and whether the commission has picked the right forum in which to bring its claims.

The Economics of Loyalty Discounts

What are the economics behind the FTC’s theory of harm? Ultimately, the theory comes from a strand of the economics literature that analyzes exclusive deals and loyalty discounts.

As a brief review, the FTC is challenging the deal on grounds that Amgen might engage in a practice known as “bundled rebating” after completing its purchase of Horizon. The commission believes that Amgen would be able to offer discounts on its existing portfolio of blockbuster products in order to secure quasi-exclusive “preferred placement” on insurance-company formularies, thereby blocking potential rivals to Horizon’s new rare-disease drugs from entering the market. The economic theories most relevant to these practices are those that study “exclusive dealing” and “loyalty discounts.”

An exclusive deal is a conditional-pricing practice in which a firm and a buyer sign a contract specifying that 100% of the buyer’s purchases will be made from a specific seller, usually in exchange for a discount on the per-unit price. A loyalty discount is a less extreme form of exclusive deal, in which the share of the buyer’s purchases that must come from the seller is less than 100%.

It should be noted that exclusive dealing and loyalty discounts are not identical to the bundled-rebate practice the FTC suggests a combined Amgen-Horizon could engage in. For example, both exclusive deals and loyalty discounts are conditional-pricing practices that deal with only one market at a time, markedly different from the FTC’s case.

The literature on exclusive dealing and loyalty discounts is nonetheless useful, as many conditional-pricing practices have similar effects on consumer welfare, even if the details of the practices may differ. Moreover, the legal literature and the FTC report I mentioned in part one both note that courts should treat bundled discounts as exclusive deals, with similar requirements to prove illegality.

No matter the specific details of the conditional-pricing practice, for there to be any consumer harm, the practice has to stand up to the analysis of exclusive dealing that Robert Bork offered in his 1978 book “The Antitrust Paradox.” Bork pointed out that, in the presence of a second firm that could enter a market, it would be impossible for an existing incumbent to profitably exclude the entrant. It might be able to sign an exclusive contract, but to induce the buyer to forgo the benefit of competition, it would have to offer the buyer a payment large enough to offset the price increases the buyer could expect from forgoing competitive entry. Another way to pose this critique is to ask: “why would the buyer willingly agree to sign a contract that robs it of the price reductions it could expect from entry?”
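To make Bork’s arithmetic concrete, here is a minimal sketch in Python. The prices echo the hypothetical $700/$300 figures used later in this piece; the volume is likewise invented for illustration.

```python
# A minimal numerical sketch of Bork's compensation argument.
# All figures are hypothetical and chosen only for illustration.

incumbent_price = 700   # per-unit price if the incumbent keeps its monopoly
entrant_price = 300     # per-unit price the buyer expects if the entrant competes
units_per_year = 1_000  # the buyer's annual purchase volume

# The buyer's expected gain from letting the entrant in:
gain_from_entry = (incumbent_price - entrant_price) * units_per_year

# The smallest side payment (discount, rebate, etc.) that makes signing
# an exclusive contract rational for the buyer:
min_exclusion_payment = gain_from_entry

print(f"Buyer's gain from entry:         ${gain_from_entry:,}")
print(f"Minimum payment for exclusivity: ${min_exclusion_payment:,}")
# Bork's point: the incumbent must hand over the entire competitive surplus
# to buy exclusivity, so naked exclusion cannot be profitable on its own.
```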

Bork’s argument was intended to analyze true exclusive deals, but it’s easily applicable to bundled rebates, as well. Why would a pharmacy benefit manager (PBM) voluntarily agree to place Horizon’s drugs Krystexxa and Tepezza in a preferred section of the formulary and exclude the benefits of competition for those products if it did not receive rebates that fully offset those losses? There may, indeed, be exclusion—to the distaste of the rivals to Krystexxa and Tepezza—but that doesn’t matter if the PBM has been fully compensated by equivalent-magnitude rebates on other products.

Economists have offered many models since 1978 in an attempt to prove that exclusion that harms consumers and is profitable for sellers is, indeed, possible. One feature they all appear to have in common is that, to get around the Bork critique, they include some form of “contracting externality” that shifts the harm from exclusion onto others who are not parties to the exclusive contract. The models most relevant to the Amgen case are ones that incorporate a first-mover advantage, because Krystexxa and Tepezza both are the first drugs to treat their given conditions.

The first potentially relevant model was originally published by Eric B. Rasmusen, J. Mark Ramseyer, and John S. Wiley Jr. in 1991 and expanded upon in 2000 by Ilya R. Segal and Michael D. Whinston. It posits both a firm with a first-mover advantage and the presence of significant scale economies for the potential entrant. Due to those scale economies, the entrant in this scenario will only join the market if enough customers remain for it to break even after the exclusivity clauses have been signed. Therefore, signing an exclusive contract—in this case, a contract to make Krystexxa and/or Tepezza the preferred treatment in a particular PBM’s formulary—could make it unprofitable for the new firm to enter, stranding the PBMs not party to the exclusive contract with monopoly prices. Here, PBMs face a collective-action problem: it would be better for everyone if no one signed an exclusive clause, but it’s better for each individual firm to sign one.
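The threshold logic of this model can be illustrated with a short, stylized sketch. Every number below is hypothetical; nothing is drawn from the actual case.

```python
# A stylized sketch of the Rasmusen/Ramseyer/Wiley "naked exclusion" model.
# The entrant has scale economies: it enters only if enough buyers remain
# unsigned to cover its fixed entry cost. All numbers are hypothetical.

fixed_entry_cost = 900   # entrant's upfront cost (think: drug development)
margin_per_buyer = 100   # entrant's contribution margin per unsigned buyer
total_buyers = 10        # think: the handful of large PBMs

# Minimum number of unsigned buyers the entrant needs to break even:
min_free_buyers = -(-fixed_entry_cost // margin_per_buyer)  # ceiling division

for signed in range(total_buyers + 1):
    free = total_buyers - signed
    print(f"{signed:2d} buyers signed exclusives -> entry occurs: {free >= min_free_buyers}")

# With these numbers, locking up just 2 of 10 buyers deters entry entirely,
# stranding the other 8 with the monopoly price. Each buyer is individually
# better off taking a small payment to sign, even though all would be better
# off if none signed -- the collective-action problem described above.
```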

Is this model a plausible explanation of reality? In at least one way, yes! The most stringent requirement—the presence of scale economies—is clearly met. Developing a competitor to Tepezza or Krystexxa would require significant upfront investment by any firm, whereas the marginal manufacturing and distribution costs, though not insignificant, are small in comparison.

There are, however, at least three complications to this simple story.

The first is whether an agreement to place a drug in a formulary’s preferred tier truly acts as an “exclusive” deal. Does it? Unfortunately, we don’t have great empirical studies quantifying the effects of moving a drug from a preferred tier (with little-to-no co-payment from patients) to a less-preferred tier (with a much more significant out-of-pocket obligation for the patient). The data we do have suggest that demotion from a formulary’s preferred tiers may precipitate a large drop in volume, but not nearly large enough to amount to a near-exclusive clause.

The best of these studies is an ongoing one from Kate Ho & Robin Lee, who directly estimate that moving a branded drug from a preferred to a non-preferred tier drops the drug’s expected market share with the given PBM by roughly 10%, and that these numbers are consistent with the observed rebates in the industry. This is likely an underestimate for the specific drugs at the center of the FTC’s case, given that Ho & Lee study a drug class (statins) in which generic competition exists and the number of branded options is quite large. Contracts that ban a PBM from including a competitor’s drug on the formulary at all (true exclusive deals), on the other hand, cause a much larger drop in drug volume, on the order of roughly 70%. This evidence suggests that the effect of securing preferred placement is much smaller than that of a true exclusive deal, as the sketch below illustrates.
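A tiny worked comparison makes the gap in exclusionary force plain. The baseline volume is hypothetical; the 10% and 70% drops are the rough estimates just discussed.

```python
# Comparing the volume impact of tier demotion vs. a true exclusive deal,
# using the rough estimates discussed above. Baseline volume is hypothetical.

baseline_volume = 100_000  # hypothetical annual prescriptions via one PBM

scenarios = [
    ("tier demotion (Ho & Lee)", 0.10),  # preferred -> non-preferred tier
    ("true exclusive deal",      0.70),  # rival banned from the formulary
]

for label, drop in scenarios:
    remaining = baseline_volume * (1 - drop)
    print(f"{label}: volume falls {drop:.0%} -> {remaining:,.0f} prescriptions remain")

# Preferred placement forecloses far less of a rival's potential volume than
# a true exclusive, weakening the analogy to classic exclusive dealing.
```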

A second important consideration is the market structure of the buyers—in this case, the PBM industry. The core driver of profitable exclusion in this model is coordination failure among the PBMs. The PBM market is highly concentrated and symmetric, however, with each of the “large” PBMs holding roughly equal market share. While this is often treated as problematic in antitrust analysis, it also makes it more likely that the PBMs could monitor the deals their rivals make with pharmaceutical manufacturers and thereby mitigate the coordination failure.

It’s also important to consider the effects of downstream competition. In the basic model, the buyers are the end-users. But in reality, PBMs also compete for the business of health insurers, which raises the important question of how much of the discount or rebate the PBM gets to keep for itself. Typically, it would be considered bad if PBMs kept rebates for themselves, rather than passing those discounts along to insurers (and, ultimately, patients) in the form of lower premiums. In this case, however, if the PBMs weren’t able to keep any of the benefits of a lower-priced drug, they’d be happy to exclude the entrant and keep their input prices high.

For example, let’s assume the price of one of Horizon’s drugs with an exclusive clause (inclusive of rebates) would be $700 per dose, and the price of a new entrant’s drug would be $300 per dose. PBMs only care if they get to keep some of the $400 per-dose difference for themselves. If they pass along every drop of savings to their customers, they have no incentive to fight hard for the $400 per-dose deal.

What does our knowledge about the PBM industry tell us about their pass-through rate? Unfortunately, this data is, once again, hard to come by in a notoriously opaque industry. We can, however, glean from the considerable outrage directed at PBMs in various fora that the pass-through rate is not 100%. Moreover, there is some—albeit hard to verify—data that suggests they keep about 10 to 15% of the rebates they generate. In this case, the ability of PBMs to keep the savings they negotiate suggests that they have incentives to prevent exclusion.
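A back-of-the-envelope sketch ties these threads together. The per-dose prices repeat the hypothetical example above, the volume is invented, and the retention rates are the hard-to-verify 10-15% figures.

```python
# A back-of-the-envelope sketch of the PBM's stake in preventing exclusion,
# combining the hypothetical $700/$300 prices from the example above with
# the (hard-to-verify) 10-15% rebate-retention figures.

incumbent_price = 700  # per-dose price under the exclusive, net of rebates
entrant_price = 300    # per-dose price if the entrant competes
doses = 10_000         # hypothetical annual volume across the PBM's plans

total_savings = (incumbent_price - entrant_price) * doses  # $4,000,000 here

for retention in (0.00, 0.10, 0.15):
    kept = total_savings * retention
    print(f"retention {retention:.0%}: PBM keeps ${kept:,.0f} of ${total_savings:,} in savings")

# At 0% retention the PBM is indifferent to entry; at 10-15% retention it
# has a direct financial stake in keeping the lower-priced entrant viable.
```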

In summary, while the core requirement for profitable exclusion—the presence of scale economies—is met in the FTC’s case against Amgen, there are also considerations that suggest PBMs may be able to avoid the problems of exclusive deals. The real issue with the FTC’s case, however, emerges when we take a broader look at the method the commission has chosen to challenge this specific practice.

Why Challenge Bundled Rebates in a Merger?

The FTC’s concern with Amgen’s bundled-rebate practices, and with the broader practice of using rebates to secure preferred placement in formularies, is warranted. Economic theory supports the FTC’s fear that such practices could exclude rising competitors to drugs, hurting consumers in the process.

One question remains unanswered, however: why challenge this conduct in the context of a merger?

The problems with challenging bundled rebates in the context of merger enforcement are laid bare by another recent example: Pfizer’s $40 billion acquisition of Seagen earlier this year. Seagen makes a new class of cancer drugs called “antibody-drug conjugates.” Like Amgen, Pfizer is a large, diversified pharmaceutical company with multiple existing franchises. Seagen, like Horizon, is a much smaller biopharmaceutical company with a strong pipeline of drugs in development and one or two core drugs on the market.

Sound familiar? As with the Amgen-Horizon deal, the FTC was presented with the purchase of a small pharmaceutical company with approvals in narrow indications by a well-diversified pharmaceutical giant with products in much broader indications. And while the specific products that Seagen manufactures are cancer medications, rather than medicines for thyroid disease or gout, if we consider the relevant industry characteristics and participants, the two deals are nearly identical.

Why, then, is the Pfizer-Seagen merger not being challenged? The same concern raised in the Amgen case—that Pfizer could sign contracts to offer rebates on its current blockbusters in order to exclude future rivals to Seagen’s products—applies here too. In fact, there are at least five other deals with similar characteristics still pending from this year alone!

The Pfizer-Seagen comparison demonstrates the danger in the FTC’s approach to challenging bundled rebates. Not only is the commission challenging a merger based on potential conduct, before any actual anticompetitive conduct has occurred, but it is doing so in a situation where it is impossible to specify narrowly the circumstances under which that conduct would be likely to occur. In choosing to confront the issue of bundled rebates this way, the FTC has inadvertently challenged an entire industry’s business model.

This wouldn’t necessarily be an issue if there were no plausible efficiency benefits to the business model itself. But while the FTC is adamant that bundled rebates in pharmaceutical markets have no plausible efficiency justifications, the same is not true for the broader biopharma M&A ecosystem. As noted in part one, we have considerable evidence that the current business model, in which biopharmaceutical firms specialize on specific parts of the clinical-development process, is more efficient than the alternatives.

An alternative approach—notably, challenging the practice of bundled rebates in the context of Section 2 of the Sherman Act—removes the risk of chilling beneficial M&A activity, while preserving (assuming the FTC wins) the deterrent effect on future attempts by pharmaceutical firms to use bundled rebates to exclude competitors anticompetitively. Firms could invest in purchasing attractive new drugs, but also know that, if they attempted to sign potentially exclusive rebate contracts after the fact, the FTC would bring (and likely win) a suit.

It’s also notable that, when similar practices have been challenged in other industries (as in LePage’s Inc. v. 3M), as well as in the biopharmaceutical industry itself, the relevant vehicle has been Section 2 of the Sherman Act.

Challenging bundled rebates in a Section 2 case has the additional benefit of allowing the FTC to target its choice of case to areas where a victory will result in maximum consumer benefit, in a way that merger enforcement does not. Challenging bundled rebates in the context of a merger constrains the FTC to considering harm only in the markets directly implicated by that merger.

In this case, for example, even if the FTC’s theory of harm were correct, blocking the deal would grant relief only to patients in two rare-disease markets that together comprise fewer than 15,000 patients annually. While all consumer harm, particularly in health-care markets, is undesirable, other markets may offer much more consumer “bang for the buck.” For example, the private antitrust suit that Regeneron filed against Amgen over the drugs Repatha and Praluent (discussed in part one) could apply to a significant chunk of the estimated 40% of Americans with high cholesterol. Moreover, assuming that these practices are relatively common, there could be much larger markets where more consumer harm is occurring that the FTC is simply choosing not to confront.

There’s one last reason to consider confronting the practice of bundled rebates in the pharmaceutical industry through an alternate forum. Simply put, by placing its focus on possible bundling in a merger-enforcement case, the FTC is ignoring areas where possible antitrust victories could be the most valuable: existing products where rebates could hamper biosimilar competition.

Competition between branded drugs—as would occur when a competitor arises to Horizon’s drugs—is important, but it does not tend to lead to much price competition (although such entry could still increase consumer welfare through increased quality or variety). In contrast, in situations where bundled rebates prevent the entry of biosimilars, the potential for price reductions is much larger. When successful, biosimilar uptake can erode prices by 50%, which can bring enormous benefits to patients. In focusing on acquisitions by large pharmaceutical firms seeking to expand their pipelines of innovative medicines, the FTC is leaving large consumer benefits on the table, while simultaneously increasing the risk that its actions will chill beneficial practices.

Conclusion

The FTC’s suit to block the Amgen-Horizon merger represents a landmark antitrust case. The pharmaceutical industry, previously relatively immune from antitrust suits, is coming under scrutiny for the first time. That increased attention has caused worries both among early-stage venture investors, who often finance the target firms in these deals, and within the pharmaceutical industry itself, which relies on M&A activity to refill its pipelines with molecules for later-stage development.

For its part, the FTC adamantly insists that the practice it is challenging—the use of bundled rebates—harms competition and patients. Who is right? Unfortunately, the answer is not so clear-cut. The facts of the case give both parties much to stand on.

On the one hand, despite some comments that the FTC’s theory of harm is overly novel, it is, in fact, well-grounded in economic theory. Bundled rebates, when viewed as a form of loyalty discount, can absolutely exclude competitors in an environment such as pharmaceutical development.

On the other hand, the pharmaceutical industry is also right to suggest that the M&A ecosystem has come to represent an important source of new medicines for patients and that damaging that ecosystem is likely to harm patients in the long run.

The tension between these views was not inevitable. As currently framed, the FTC’s case generates harm by conflating two separate practices: the broad business model in which pharmaceutical firms purchase drugs synthesized by smaller companies, and the use of bundled rebates. The signal sent to society is unclear: is it bundled rebates that should be viewed skeptically, or M&A more generally?

An alternate enforcement route that relies on pursuing bundled-rebate activity under a Section 2 monopolization case would adequately balance the concerns of the FTC and of industry. Such a route could also ensure that the FTC protects consumers from the possible harms of bundled rebates, while preserving the value of the pharmaceutical M&A ecosystem to generate new treatments.

In a world in which so-called “Big Tech” has dominated antitrust discussions for a decade or more, who would’ve guessed that golf would grab the biggest headlines? The proposed merger of the PGA Tour and LIV Golf has some major headline-grabbing potential: sports, big money, big names, 9/11, human-rights abuses, and cringeworthy public-relations attempts.

Aside from those issues, the PGA-LIV link-up also presents some important issues for antitrust enforcers. 

Less than two years ago, LIV Golf launched as a competitor to the PGA. Less than a year ago, LIV held its first tournament at Trump National Doral Miami.

LIV entered professional golf as a powerhouse. Hall of Fame golfer and entrepreneur Greg Norman was hired as chief executive officer. Three-time Masters winner and Hall of Famer Phil Mickelson signed on to LIV’s roster. According to National Club Golfer, LIV paid out $255 million in prize money and bonuses to players, and the publication estimates that 52 golfers earned at least $1 million last year after joining LIV. The organization also brought innovation to professional golf by introducing team play.

The PGA didn’t just sit on the sidelines, however. After LIV launched, the PGA boosted its payouts. The New York Times reports that golfers competed for $8.2 million in prize money at last year’s Phoenix Open. This year, the pool more than doubled to $20 million. 

Despite the eye-popping payouts to golfers, it’s not clear how much interest LIV has generated from fans. While the data on TV viewership is pretty shaky, LIV’s average audience is around 290,000, versus the PGA Tour’s 2.4 million viewers.

LIV’s entry displayed many of the hallmark benefits of increased competition. Labor gained from boosted pay. Consumers benefited from more opportunities to watch top golfers play.

Perhaps that’s why (or, at least, one of the reasons why—the involvement of the Saudi Public Investment Fund is obviously also a big one) there is so much hand-wringing about the announced merger of PGA and LIV. Antitrust regulators are bound to speculate what will happen if the combination is consummated. Will golfer pay fall back to Earth? Will fans have fewer tournaments to watch? Will team play go away?

Some regulators will have a knee-jerk response that the merger of PGA and LIV would create a monopoly that will make many worse off than if the two organizations competed against each other.

But this is the wrong way to look at it. Before LIV entered less than two years ago, the PGA was a monopoly with which few people (mostly just Phil Mickelson) had a problem. Would rolling back the clock a couple of years really impose irreparable harm? I doubt it.

Instead, it seems just as likely that the merged organization will incorporate the lessons learned from its “LIVed” experience. Payout pools may drop relative to last year, but there is no evidence that those high payout pools were sustainable.

Instead, we will likely end up with payouts between the ridiculously high, one-year LIV numbers and pre-LIV numbers. Similarly, there are likely to be more tournaments than there were before LIV entered. Plus, there’s a good chance that team play will be here to stay. That’s as close to innovation as we get in sports.

Antitrust policymakers and regulators must be careful in crafting their counterfactuals. Too often, they rely on an unrealistic assumption that, but-for a merger, vigorous competition would be the natural state of the world. In reality, however, the world is imperfect.

A merged PGA-LIV organization may be worse than two competing organizations, but may also be much better than the pre-LIV PGA monopoly. Competition is often a matter of degrees, rather than an either-or.

In Susan Crawford’s 2013 book “Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age,” the Harvard Law School professor argued that the U.S. telecommunications industry had become dominated by a few powerful companies, leading to limited competition and negative consequences for consumers, especially for broadband internet.

Crawford’s ire was focused particularly on Comcast, AT&T, and Verizon, as she made the case that these three firms were essentially monopolies that had divided territories and set up roadblocks through mergers, vertical integration, and influence over regulators and franchisors to prevent competition and innovation. In particular, she noted the power Comcast commanded in securing access to live sports, allowing it to effectively prevent cord-cutting and limit competition from other cable companies.

According to Crawford, the consequences of this monopoly power were high prices for service, poor customer service, and limited access to high-speed internet in certain areas, particularly in rural and low-income communities. In effect, she saw no incentives for broadband companies to invest in high-speed and reliable internet. In response, she proposed increased competition and regulation, including the development of fiber-based municipal broadband to foster greater consumer choice, lower prices, and improved access to reliable internet service.

A decade later, the broadband market is far more dynamically competitive than critics like Crawford believed was possible. YouTube TV’s acquisition of rights to NFL Sunday Ticket (along with the massive amount of programming available online) suggests that Comcast did not have the control over important programming like live sports that would have enabled it to prevent cord-cutting or to limit competition. And the rise of 10G broadband likewise suggests there is far more competition in the broadband market than Crawford believed possible, as her “future proof” goal of symmetrical 1-Gb internet will soon be slower than what the market actually provides.

Innovation No. 1: Must-Have Programming Is All Online Now

In “Captive Audience,” Crawford made much of Comcast’s control over must-have programming—particularly live sports:

Brian Roberts knew that Comcast needed to maintain, as long as possible, its power to sell subscribers large bundles of programming that included “must-have” content—particularly live sports. To do that, he needed to make sure that live sports would not be available over the Internet on demand, at attractive prices, without a subscription. The programmers and networks had to be assured that they would make more money selling to cable distributors than directly to online consumers…

Brian Roberts’s favorite sports may be squash, but as a businessman he knows the real value in American television entertainment lies in controlling rights to football, basketball, and baseball games. If there was a guiding ethos to Comcast’s pursuit of NBC Universal, it was to gain control over more sports programming. Live sports is the one thing that people can get almost nowhere else—not on DVD, not online—the only options are pay-TV or a stadium seat.

While she made much of the strength of Comcast’s control over live-sports programming at the time, the world has changed rapidly in the decade since. For instance, NFL Sunday Ticket—the exclusive province of satellite company DirecTV from 1994 until 2022—is now available through YouTube TV, starting with the 2023 season. And while NFL Sunday Ticket has never carried a viewer’s local games, a cable package was never needed to access those, because a terrestrial antenna can pick up the local CBS or Fox station. The National Football League was then, and remains, by far the most popular sports league in the United States, and is certainly high-demand content. As one article put it:

Sports continued to show why they are the most valuable programming on television in 2022, making another strong statement at a time when the entertainment industry faces challenges with attracting viewers to scripted programming.

The NFL and college football continued to draw the most viewers of any form of programming as viewers watched sports in record numbers. Overall, sports accounted for 94 of the 100 most-watched telecasts for the year. That was down just one number from last year, but up from 75 during the election-heavy 2020 docket, and from 92 telecasts in 2019.

The NFL put a record number of telecasts in the top 100 in 2022, with a stunning 82 games cracking the list. That figure is up from 75 in 2021, 72 in 2020 and 78 in 2019.

This was a major hole in her argument from the start. But with the move of NFL Sunday Ticket to YouTube TV—along with the availability of live sports through services like Hulu + Live TV, FuboTV, Sling TV, ESPN+, Paramount+, and Peacock—it’s easier than ever for cable subscribers who are sports fans to cut the cord, especially if they can afford to combine one of those services with offerings like NBA League Pass, MLB.TV, and/or NHL.TV.

For in-region games, which are usually not available through the National Basketball Association’s, Major League Baseball’s, or the National Hockey League’s streaming packages (or over-the-air through an antenna), you must purchase access to a regional sports network (RSN). But even here, cable no longer enjoys monopoly power. For instance, you can access the vast majority of RSNs through DirecTV Stream and FuboTV.

Moreover, the proliferation of high-demand content through other over-the-top (OTT) media services like Netflix, Disney Plus, HBO Max, and Apple TV has also meant that big cable companies no longer have anything like “monopoly power” over programming (if they ever did) with which to “force” bundles on users who demand high-speed internet. These developments alone significantly undercut Crawford’s original thesis that Americans’ desire for high-demand content makes them a “captive audience” for cable providers like Comcast.

Innovation No. 2: 10G and Rapidly Increasing Speeds and Access

Back in 2013, Crawford argued that, because major cable providers had effectively divided up the marketplace into captive audiences, “[t]here is no competitive pressure that would drive them to install next-generation fiber networks to make America globally competitive.” To Crawford, what America really needed was “access to reasonably priced 1-Gb symmetric fiber-to-the-home networks.”

As a policy solution to achieve this goal, she pointed to the “successes” of municipal broadband, but also promoted the building out of fiber infrastructure at the national level. Interestingly, and despite her analysis of the marketplace, just 10 years later, the market has provided internet options even faster than the 1Gb network she touted as the future.

Comcast (also known as Xfinity) is now touting 10G (building off the 5G branding of wireless) as the new technology, referring not to the tenth generation of technology but to 10Gb speeds (along with ultra-low latency and better WiFi) that will likely be available in coming years, over both cable and fiber networks. In fact, Comcast will be able to provide this without having to build out new networks at all, just by switching to DOCSIS 4.0 modems. And it will be available (eventually) to all consumers who currently have access to Xfinity. In fact, Comcast announced in February that:

Today, Comcast accelerates the nation’s largest and fastest multi-gig deployment and announced that its latest Xfinity 10G network upgrade will be launched to 10 million homes and businesses by the end of this month.

These locations now have the foundational network enhancements in place to begin deploying DOCSIS 4.0, setting the stage for the introduction of new symmetrical multi-gigabit Internet options before the end of 2023 that can be delivered across existing networks with less cost.

To date, and ahead of schedule, more than 40 markets across Comcast service areas have implemented network improvements including Atlanta, Boston, Chicago, Denver, Houston, Miami, Philadelphia, Salt Lake City, Seattle, San Francisco, Washington D.C., and others. The full deployment of these technical capabilities will reach more than 50 million homes and businesses by 2025.

To be fair, Crawford did write another book in 2018, “Fiber: The Coming Tech Revolution―and Why America Might Miss It,” in which she pointed to what she saw as the continued successes of municipal broadband vis-a-vis the market in providing increasingly fast speeds. For instance, Chattanooga’s municipal broadband network is working toward offering up to 25Gbps speeds.

But there is an economic reason that municipal broadband often offers higher speeds than private internet service providers: it isn’t subject to the profit-and-loss test of the marketplace. The presence of massive cross-subsidization from electrical utilities that actually have a captive audience is what makes it possible for municipal broadband to offer speeds far beyond what consumers demand, sometimes even at lower prices than private providers. But this is actually an example of predatory entry rather than competition, as these municipalities are ultimately able to pass on costs to the taxpayer or ratepayer.

In sum, far from lacking any incentive to innovate and create high-speed networks, Comcast has upgraded its networks beyond what Crawford could even imagine just 10 years ago. This makes much more sense if the market is dynamically competitive than if consumers were a captive audience. Municipal broadband, by contrast, relies on real captive audiences to subsidize its fiber buildouts: electric-utility customers and taxpayers.

Conclusion

Ten years later, it is clear that cable providers do not have a “captive audience” of consumers who must bundle their exclusive must-have content with high-speed internet. Cutting the cord and accessing content, including live sports, has never been easier. But more than that, the market continues to evolve and innovate, and Comcast itself is upgrading its networks through DOCSIS 4.0 modems to accommodate internet speeds even faster than the municipal fiber networks that Crawford imagined would be state of the art for the foreseeable future just a decade ago.

The Federal Trade Commission (FTC) recently announced that it would seek to block Amgen’s proposed $27.8 billion acquisition of Horizon Therapeutics. The move was the culmination of several years’ worth of increased scrutiny from both Congress and the FTC into antitrust issues in the biopharmaceutical industry. While the FTC’s move didn’t elicit much public comment, it raised considerable alarm in various corners of the biopharmaceutical industry—specifically, concern that it would chill beneficial biopharmaceutical M&A activity.

This piece, which aims to shed light on the FTC’s theory of harm in the case and its consequences for the industry, will be divided into two parts. This first post will discuss the overall biopharmaceutical market and the FTC’s stated theory of harm. In a subsequent post, I will dive more deeply into the economic theories that underpin the case and the risk-benefit tradeoff inherent in the FTC’s decision to challenge the merger.

The Biopharmaceuticals Market and the Importance of M&A

Amgen is one of the world’s largest biopharmaceutical firms, with 2022 sales of $24.8 billion. The firm’s origins date back to the 1980s, but in the decades since, it has transformed from a biotechnology startup focused on manufacturing recombinant proteins for use as drugs into a diversified pharmaceutical leader. Its current sales are mostly generated by a portfolio of nine blockbuster drugs that each earn more than $1 billion in annual sales, the most important of which are:

  • Enbrel (etanercept), which is used to treat a wide variety of autoimmune diseases;
  • Prolia (denosumab), which is used to treat osteoporosis in elderly patients; and
  • Otezla (apremilast), which is also used to treat autoimmune diseases, most notably psoriatic arthritis.

Horizon Therapeutics, on the other hand, is a small biopharmaceutical firm that got its start in a relatively mundane manner: combining multiple generic drugs into a single tablet. Horizon’s strategy shifted over time, however, with the firm seeking to purchase undervalued therapeutics from bankrupt pharmaceutical companies and then re-launch them. This strategy yielded the two drugs that the FTC’s case focuses on:

  • Tepezza (teprotumumab-trbw), which the company acquired in 2017 and which treats the eye disease associated with Graves’ disease, a form of hyperthyroidism; and
  • Krystexxa (pegloticase), which the company acquired in 2015 from a private-equity company and which is used to treat gout that has not responded to other “first-line” medications.

At first glance, the FTC’s decision to challenge this merger might seem relatively uninteresting—certainly not an action that would warrant the alarm that has been generated in the biopharmaceutical community. The industry’s concern makes more sense, however, when one appreciates the crucial role that mergers and acquisitions have come to play in the biopharmaceutical innovation process.

First, it’s worth stating that the kinds of acquisitions that have become an important part of the biopharma R&D ecosystem are not large horizontal mergers between existing biopharmaceutical companies, such as the Bristol-Myers Squibb-Celgene merger of 2019 or the Pfizer-Wyeth merger of 2009. Rather, it is smaller acquisitions—in which large biopharmaceutical firms acquire small biopharmaceutical firms with no approved drugs, or at most one or two recently approved drugs—that are the focus.

Some data help illustrate the scale of this type of activity and its importance to biopharmaceutical innovation. Biopharma is an M&A-intensive industry: in 2021, it engaged in 196 transactions worth $152 billion—more than any other U.S. industry. Moreover, this M&A activity is tightly linked to research and product development in ways that M&A activity in other industries simply isn’t.

For example, in 1997, the contribution of drugs that were “acquired” to the sales of large pharmaceutical firms was a mere 10%. That is, 90% of branded pharmaceutical revenue was sourced from drugs that pharmaceutical firms invented “in house.” Today, that in-house share has fallen to just 37%, with the remaining 63% of revenue originating either from acquisitions of small companies or “partnered” deals with small companies. An analysis from McKinsey concurs, noting that “66 percent of the entire industry’s pipeline revenues were generated from [externally sourced drugs].” Another study performed in the early 2010s showed that, of the roughly 170 new drugs approved by the FDA from 2011 to 2016, about 65% were originally discovered by smaller firms before being bought by larger pharmaceutical firms that continued to develop the drugs and bring them to market. (See also here, here, and here.)

These datapoints come together to tell a very clear story: over a period of about 20 years, the biopharmaceutical R&D model shifted from one in which established large pharmaceutical companies conducted every step in the drug-development value chain to one in which large firms have “outsourced” earlier steps in the R&D chain and concentrated their activities on running the large “Phase 3” trials that are needed to indisputably prove a drug’s efficacy.

Some have attempted to use this shift to paint large pharmaceutical firms as “uninnovative,” but there are many reasons to believe that this specialization is efficient. Not only do economists generally agree that specialization enhances productivity, but there is also early empirical evidence for this conclusion in the context of pharmaceuticals: compounds that are “partnered” (i.e., discovered by a different firm than the one that ultimately brings them to market) have a much higher probability of success than unpartnered compounds.

In short, the importance of M&A to the biopharmaceutical innovation engine is hard to overstate.

The FTC’s Theory of Harm

Given the importance of small-company M&A to biopharmaceutical innovation, why is the FTC challenging the Amgen-Horizon merger? Simply put, the FTC believes that, after the acquisition, Amgen will engage in a business practice known as “bundled rebates” in which it “provides greater rebates on one or more of its blockbuster products to secure favorable formulary placement for other medications in different product markets.” The commission believes this will “block smaller rivals from being able to compete on the merits.”

Specifically, the FTC argues that Amgen will provide discounts to pharmacy benefit managers (PBMs) on its broad set of existing products, like Enbrel, Prolia, and Otezla, in order to secure preferred formulary placement for the new drugs it is acquiring from Horizon, including Tepezza and Krystexxa. This, the argument goes, would exclude any rising competitors to those drugs and thus harm consumers.

Rebates or discounts are a feature of nearly all markets, but they play a special role in the biopharmaceutical industry. In the United States, rebates are negotiated by PBMs, whose function is to define formularies on behalf of health insurers. These formularies are essentially “lists” of which medications an insurance plan will cover, the price the insurer will pay the manufacturer for each drug, and how much of each drug’s price will come out of the patient’s pocket. The “price” that is negotiated comes in the form of the aforementioned rebates off the drug’s “list” price. For example, a PBM and a pharmaceutical manufacturer might settle on a 10% rebate off of a list price of $1,000 a dose, leading to a “net price” of $900 a dose. In essence, the PBM functions as a contracted agent that negotiates prices with pharmaceutical manufacturers on behalf of insurers, and the rebate governs the net price the insurer ends up paying the pharmaceutical company.
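
To make the arithmetic concrete, here is a minimal sketch of the net-price calculation in Python, extended to illustrate the bundling logic at issue. Everything beyond the $1,000/10% example above is hypothetical, chosen only to show how a deep rebate on a blockbuster can make a bundle attractive even when the bundled drug’s net price exceeds a rival’s:

    def net_price(list_price: float, rebate: float) -> float:
        """Net price per dose after a negotiated percentage rebate."""
        return list_price * (1.0 - rebate)

    # The worked example from the text: a 10% rebate off a $1,000-per-dose
    # list price yields a $900 net price.
    print(net_price(1_000, 0.10))  # 900.0

    # Hypothetical bundled-rebate offer: the deeper blockbuster rebate is
    # conditioned on preferred formulary placement for a second drug, so the
    # PBM compares total net cost, not each drug's net price in isolation.
    bundle_total = net_price(1_000, 0.25) + net_price(400, 0.05)      # 750 + 380 = 1130
    standalone_total = net_price(1_000, 0.10) + net_price(350, 0.10)  # 900 + 315 = 1215

    # The bundle wins on total cost even though the bundled drug's net price
    # (380) is higher than the rival's (315) -- the mechanism the FTC alleges
    # can foreclose smaller competitors.
    print(bundle_total, standalone_total)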

The broader debate over the social utility of this rebate system has been ongoing for years. Supporters claim that the system helps to rein in drug spending by forcing pharmaceutical firms to offer rebates to secure formulary placement. Detractors argue that it is responsible for a growing “net-list price bubble,” in which list prices are artificially inflated to provide rebates to PBM customers, without any gain to insurers or patients in the form of lower premiums.

This debate has now spilled into policy discussions, with both regulators and Congress increasing their scrutiny of the rebate process. For example, Congress asked the FTC to investigate the PBM industry and the rebate process back in July 2020, and the FTC responded with both a 2021 report and a 2022 policy statement voicing its concerns about the broader rebate process. While broadly critical of the rebate process, the commission singled out a specific practice that it would investigate even more closely: the selective use of rebates to foreclose competition from alternative drugs. This is the same practice on which the FTC has centered its case against Amgen.

Rebates have also recently emerged as the focus of several private antitrust cases. One of the two most significant was a 2016 complaint in which Pfizer alleged that Johnson & Johnson—the maker of Remicade, a best-selling medication for autoimmune disease—threatened not to pay rebates to PBMs unless they excluded from their formularies Pfizer’s biosimilar of Remicade: Inflectra. According to Pfizer, this limited Inflectra’s market share to just 4%.

Even more relevant to the Horizon merger is a 2022 lawsuit filed by Regeneron against Amgen, accusing Amgen of providing rebates on Otezla and Enbrel to secure preferred formulary status for its own PCSK9-inhibiting drug Repatha over Regeneron’s competing drug Praluent. This claim is essentially identical to the one the FTC makes today, and is referenced by the FTC itself in its filing.

Taken together, the intense regulatory scrutiny and private antitrust litigation surrounding bundled rebates in biopharmaceuticals warrant a detailed discussion of their economic effects.

Conclusion

Against this backdrop, the stage is set for a landmark antitrust case. On one side stands the pharmaceutical industry, long accustomed to a relatively permissive M&A environment that has allowed it to engage in specialized technology acquisition, buying small companies to acquire promising new drugs before pushing them across the finish line through large Phase 3 trials. On the other side stands the FTC, eager to demonstrate that these mergers may not be as benign as once thought.

Looming over both parties is the rise of the rebate process and the controversy it has engendered. In Part II, I will perform a detailed review of the economic theories and data that might be applicable to this landmark case.

Consistent with the neo-Brandeisian penchant for downplaying (some would say ignoring) consumer-welfare concerns, the Federal Trade Commission (FTC) recently touted its interest in “reinvigorating” enforcement of the Robinson-Patman Act (RPA). This would stand sensible antitrust-enforcement policy on its head, by devoting resources to actions that predictably would tend to diminish consumer welfare.

In the hope that FTC leadership might rethink such a disastrous policy pivot, my colleague Satya Marar and I prepared a Mercatus policy brief that highlights the sad history of the RPA: a special-interest protectionist law that was from the start intended to limit competition (and thereby harm consumers) by protecting small businesses from more efficient competitors. Specifically, by prohibiting a seller from charging competing buyers different prices for the same “commodity” (subject to a few complex exceptions), the RPA discourages efficient discount pricing, thereby tending to reduce competition and raise consumer prices.

Sound antitrust scholarship consistently has condemned the RPA. What’s more, in 2007, the bipartisan Antitrust Modernization Commission recommended its repeal, stating that the RPA “appears antithetical to core antitrust principles” and “punishes the very price discounting and innovation in distribution methods that the antitrust laws otherwise encourage.”

For those who want more details on the RPA:

Th[e] [Mercatus] brief charts the legislative, judicial, and enforcement history of the RPA. It critically appraises the potential consequences for consumers and competition of stricter and more zealous RPA enforcement by today’s FTC. The brief also assesses the justifications provided by proponents of renewed RPA enforcement and evaluates suggestions for alternative, pragmatic reforms to address the ability of small businesses and entrepreneurs to compete effectively.

The Mercatus brief concludes:

Net welfare is likely to be maximized by an outright repeal of the RPA, which will prevent ideologically motivated officials from expending public resources in RPA lawsuits that are likely to diminish consumer welfare and make the American economy less competitive. Failing to give due weighting to efficient business practices that benefit consumers is antithetical to the procompetition purpose of the antitrust laws. It is also unfair to consumers—and, in particular, to the vast majority of Americans in poverty, who benefit from the negotiating power of large, vertically integrated entities—and to the majority of entrepreneurs who serve them. Regulatory reform that reduces unnecessary government-imposed costs, not the RPA, is an appropriate means to promote the interests of small businesses in an economically efficient, welfare-promoting manner.

FTC Bureau of Economics Director Aviv Nevo is a distinguished and thoughtful economist. Chair Lina Khan stressed when announcing his FTC appointment that “[h]is insights and expertise will be an asset to our agency and to our work serving the American public.” It is to be hoped that Nevo will explain to the FTC commissioners the economic harm that would attend reinvigorated RPA enforcement, and that they will see the light, bowing to his insights and expertise.

One looks forward to an FTC press release stating that, in the interest of protecting consumer welfare, and in the exercise of its prosecutorial discretion, the FTC will not devote resources to RPA enforcement.

The European Commission’s recently concluded consultation on “the future of the electronic communications sector and its infrastructure” was a curious phenomenon, in which the commission revived the seemingly dead-and-buried idea of a legally mandated “sender pays” network-traffic scheme, despite the fact that the idea remains as unpopular and discredited as it was when last discussed roughly a decade ago.

What’s more, despite public relations efforts to reframe the proposal, it remains obvious that this idea constitutes special pleading from a small group of large incumbent telecommunications operators. Giving prominence to this initiative is not a good look for EU Commissioner Thierry Breton, given that he used to be an executive of one of the companies that would potentially benefit from the scheme.

The commission’s proposal is driven by the belief that large online platforms, which consume a significant chunk of bandwidth, should contribute more to the cost of telecom networks. We recently co-authored a comment for the International Center for Law & Economics (ICLE) in which we firmly objected to its key assumption: that there exists a “fairness” problem that can be addressed via the commission’s proposed means, i.e., a non-market, legally mandated “sender pays” or “network fees” scheme. Our contribution consisted of an issue brief (“Regulatory Myopia and the Fair Share of Network Costs: Learning from Net Neutrality’s Mistakes”) and some direct responses to the questions posed by the commission.

Responding to a query regarding whether major content providers should be obligated to make direct payments to some telecom operators, we agreed with the Body of European Regulators for Electronic Communications (BEREC) and virtually all independent stakeholders that such a proposal would be unjustified. The proposal would not address any purported market failure but would instead jeopardize market dynamics.

To the extent that there are issues of fairness in platforms’ contributions to telecom infrastructure, they are the result of the earlier regulatory intervention to adopt net neutrality. The so-called “fair share” proposal, however, is not well-suited to counteract any potential distortive effects of net-neutrality rules. And rather than call for relaxing the rules that caused the alleged distortions, supporters of the proposal engage in rent seeking, hoping to extract much more than they could have under market conditions (e.g., without net-neutrality rules).

More generally, protectionist interventions that impose financial obligations on successful players are ill-suited to addressing Europe’s gap in competitiveness and industrial capacity.

The proposal is at odds with both the legal obligations of and the economic rationale for net neutrality. By imposing a fee on service providers that transmit more than a certain threshold of data, the proposal discriminates among online players and among online services and content (“large” services, but not small or medium-sized providers). As for the economic rationale, internet service providers (ISPs) are assumed under EU net-neutrality rules to have insurmountable bargaining power; the “fair share” proposal instead presumes they are small, helpless entities, powerless before Big Tech.

From an economic perspective, given that internet traffic is ultimately generated by consumers, the alternative of substituting direct payments with a payment into an “EU/national digital contribution or fund” would not change the welfare implications.

As we noted previously, there is little reason to believe that there are any fairness issues that could be addressed by either “direct payments” or by a “fund.” Both versions of the proposal are misguided attempts to provide state-mandated welfare transfers to legacy incumbents who would like to be more profitable than they already are.

The “fairness” demanded by the small number of incumbent telecom operators in question is really just special pleading from a group accustomed to extracting monopoly rents. The justification amounts to nothing more than acknowledging that there is a pot of money (that is, major content producers’ profits) available for the taking, and that some of those from whom the money could be taken are currently unpopular.

It is essential not to lose sight of the fact that the digital market is an evolving landscape. Introducing additional financial obligations on successful players, in the guise of “protecting” the market, will only undermine its inherent dynamism and competitiveness. It’s vital to recognize that consumer demand is what drives internet traffic. Shifting the cost burden onto content producers or establishing a digital contribution fund therefore wouldn’t alter the fundamental economics.

In my last roundup, I puzzled over the Federal Trade Commission’s (FTC) suit to block Amgen’s acquisition of Horizon Therapeutics. The deal involved no product overlaps whatsoever (i.e., no horizontal competition), a target firm acknowledged to have no competitors for the orphan drugs at issue, and nobody poised to enter into competition either.

I won’t recapitulate the details of my confusion here, but I will point to a new piece by Bill MacLeod (a past chair of the American Bar Association’s Antitrust Section and a former FTC bureau director) and David Evans, in which they raise an issue I didn’t cover: “The Federal Trade Commission may have filed the first merger complaint in a generation that could be dismissed for failure to state a claim.” Which would not look good.  

Using UDAP the Right Way

Acknowledging that I carp a lot, and that it’s not all about the fish, here’s what looks like a bona fide consumer-protection case and a win for the FTC and consumers alike. On May 25, the commission announced that:

A federal court sided with the Federal Trade Commission, ruling that James D. Noland, Jr. illegally owned and operated two pyramid schemes—Success By Health (SBH) and VOZ Travel—in violation of the FTC Act and that Noland violated a previous federal court order barring him from pyramid schemes and from misrepresenting multilevel marketing participants’ income potential.

I don’t know all the details, but on a quick look at the matter (and the prior case), it appears they went after cons, not legitimate businesses. And that there were indeed serial violations of the law, including violations of an order dating to 2002. There are frauds and cons out there, and the FTC’s UDAP authority is supposed to address many of them. And perhaps should do so more often.   

Data Is Forever, Apparently

Something new from the cabinet of curiosities: a complaint about the end times, or the end of time, or something. In a complaint filed on the last day of May, one of the commission’s allegations against Amazon is that:

Alexa’s default settings still save children’s (and adults’) voice recordings and transcripts forever, even when a child no longer uses his Alexa profile and it has been inactive for years.

I’ve been around a while, but forever seems like a really long time. Exactly how many recordings have been saved forever? And how do they know? More plausibly, with respect to time’s arrow, they say that:

Amazon’s privacy disclosures assert that it designed Alexa with privacy in mind, that Amazon will delete users’ voice and geolocation data (and children’s voice data) upon request, and that Amazon carefully limits access to voice data. But until September 2019, Amazon retained children’s voice recordings and transcripts indefinitely unless a parent actively deleted them.

An indefinitely long time is not an infinite length of time.

There’s more, of course. The complaint alleges that representations about parents’ ability to have information deleted on request were not always and fully honored; that is, some sensitive information was maintained, in some form, notwithstanding deletion requests that should have applied to that information. And more broadly, it’s alleged that sensitive information was maintained longer than is “reasonably necessary,” independent of the question of whether there was a deletion request. 

Maybe. I don’t know all the ins and outs of the matter, even as alleged (and couldn’t possibly just by reading the relatively brief complaint). One thing that’s interesting is the theory of harm. So far as I can tell, there’s no allegation of a breach, much less one that led to concrete harms to certain kids or their families. And I don’t see any allegation that the information was improperly sold (or given) to third parties, much less that such disclosures led to further harm. Rather, it’s the possession of personal information—in some form—longer than is “reasonably necessary,” and in some cases, allegedly inconsistent with “representations that [Amazon’s] Alexa and Echo devices ‘are designed to protect your privacy’ and ‘provide transparency and control’ (emphasis in original)” that is deemed to be harmful. 

There’s a suggestion that a false (or misleading) assurance was made and—critically, under Section 5—that the assurance was material to consumers (parents), who were “substantially harmed” by the degree to which the firm allegedly failed to live up to those assurances. Perhaps, although in that case I find myself wanting much more texture than the complaint provides.

If a data tree stands in the forest, and somebody asked that some leaves or branches connected to their kids be pruned, but not all such leaves or branches were pruned, or a regulator thinks that certain leaves persist longer than they should, but nobody breaches the . . . ok, this version of the tree-falling-in-the-forest story is getting complicated. But I’m wondering, among other things, whether there’s a “reasonably necessary” standard that’s been set and, if so, where? And my annoyingly numerical inner child is wondering how one measures and computes damages.

Taking Credit for Nothing in Particular

Same cabinet, different curiosity: on May 24, the FTC seemed to advertise a win in a merger matter: 

In response to the announcement that Boston Scientific Corporation and non-vascular stent manufacturer M.I. Tech Co., Ltd. have terminated their $230 million purchase agreement, Federal Trade Commission Bureau of Competition Director Holly Vedova issued this statement:

“I am pleased that Boston Scientific and M.I. Tech have abandoned their proposed transaction in response to investigations by FTC staff and our overseas enforcement partners. The FTC will not hesitate to take action in enforcing the antitrust laws to protect patients and doctors. I would like to thank the entire FTC team for their excellent work on this matter.”

Ok, but why? By all means, the FTC is charged to enforce the antitrust laws and, pace certain nontrivial disagreements about what those laws say, should not hesitate to do so. I’m certainly not arguing that the staff didn’t do excellent work. Many truly fine and experienced staff remain at the FTC (an alarming number of departures notwithstanding), and I’ve no reason to assume that staff in the Bureau of Competition did anything but excellent work here.

And I’m not arguing that the FTC was wrong to open the investigation or that they would have been wrong to file a complaint seeking to block the merger, had they done so. I have no idea one way or the other. But I don’t see anything in the press release about a complaint, much less a final decision. True, there’s work preliminary to opening an investigation, so there’s that, but I don’t see any allegation that the merger was likely to harm competition and consumers (“doctors and patients”) or that the commission had decided that it was (or had decided to authorize issuance of a complaint alleging that it was).

So . . . leaving aside the question of whether risk of antitrust liability is what drove the parties to abandon the merger, I wonder whether the announcement concerns a good result or an unfortunate one. If it’s a win for the FTC, was it a win for competition and consumers? We’ve not yet seen a policy statement suggesting that all mergers are anticompetitive, have we? Or is that in the new merger guidelines that will drop whenever they drop?

Erin Go Blech

There are goings-on at the U.S. Justice Department’s Antitrust Division, but this time, for another agency, it’s hands-across-the-water on the Ireland/EU Meta fine. My ICLE colleague Kristian Stout wrote about it here. My two quick takes: 

  1. Oy, he’s right to worry; and
  2. Are penalties supposed to bear some relationship to actual harms, or are they supposed to be arbitrary exercises of international taxation?

Over the ocean and into the pockets. 

An Interesting Query

Over on that Muskiest of platforms (mea culpa/s’lach lanu/my bad), my ICLE colleague Brian Albrecht tweets an inquiry:

It’s a fair question, but I don’t have much useful to say in response. I was at the commission through last August, but cannot share any nonpublic information I might have acquired during my time there. I can, however, point readers to the announcement of the inquiry (also termed a “study”), which contains links to the model 6(b) orders sent to manufacturers, distributors, and retailers (three of each, named by the FTC in the announcement).

And I can reply to my colleague’s question with another question: Do you think that any economists were harmed, or even mildly inconvenienced, in the design of that study? Not counting Marta Wosinska, who didn’t resign her post as director of the FTC’s Bureau of Economics until several months later and, according to rumors reported in Politico, over issues related to a different inquiry entirely.

Announced with the sort of breathless press release one might expect for the launch of a new product like Waystar Royco’s Living+, the Federal Communications Commission (FCC) has gone into full-blown spin mode over its latest broadband map.

This is, to be clear, the map that the National Telecommunications and Information Administration (NTIA) will use to allocate $42.5 billion to states from NTIA’s Broadband Equity, Access, and Deployment (BEAD) program. Specific allocations are expected to be announced by June 30.

According to FCC Chair Jessica Rosenworcel, the new map is “light years better” than the last round. A light year is 5.88 trillion miles, or enough to circle the earth more than 236 million times, so that must be quite an improvement. But then the FCC’s release proceeds to walk that claim back, with the assertion that the latest map is merely “another step forward” in an “iterative effort” to develop accurate broadband maps.
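
For the skeptical, the arithmetic checks out. A quick back-of-the-envelope sketch in Python, assuming Earth’s roughly 24,901-mile equatorial circumference:

    LIGHT_YEAR_MILES = 5.88e12          # one light year, in miles
    EARTH_CIRCUMFERENCE_MILES = 24_901  # approximate equatorial circumference

    # Roughly 236 million trips around the Earth per light year.
    print(LIGHT_YEAR_MILES / EARTH_CIRCUMFERENCE_MILES)  # ~2.36e8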

To be fair, the new map is a substantial improvement. According to Fierce Telecom, the latest map attempts to identify every household and small business in the country that should have access to high-speed internet service. While this requires granular data down to the level of individual street addresses, earlier versions were limited to U.S. Census block information. The new location-based map has identified more than 114 million locations where fixed broadband could be installed, while prior maps had information for 8.1 million Census blocks. 

The biggest attention-grabbing factoid from the FCC is that “[m]ore than 8.3 million U.S. homes and businesses lack access to high-speed broadband.” This figure is at odds with Census surveys indicating that roughly 3 million households don’t have at-home Internet access. Perhaps “and businesses” is doing a lot of the heavy lifting in the new claim.

Telecompetitor reports that the 8.3 million number is, according to an FCC spokesperson, based on the NTIA’s definition of “unserved.” Under this definition, connections with speeds less than 25/3 Mbps are considered to be “unserved” by broadband. In addition, the NTIA considers locations with connections of greater than 25/3 Mbps to be “unserved” if the service is available only from a fixed wireless provider that uses unlicensed spectrum.

The Wireless Internet Service Providers Association (WISPA) has a more optimistic take on the map’s access estimates, arguing that:

[T]he FCC’s new broadband map tells the success story of the vibrant and growing ISP broadband industry — one working 24/7/365 to almost halve the number of unserved locations since 2020.  Down from nearly 14 million unserved to 8 million today.

In addition to the map’s tally of unserved locations, tech journalist Mike Conlow has calculated that there are 3.6 million homes that are underserved (i.e., with speeds of less than 100/20 Mbps). His Substack account provides downloadable spreadsheets that are worth checking out.
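
Putting the two thresholds together, the definitions described above reduce to a simple classification rule. Here is a minimal sketch in Python (the unlicensed-fixed-wireless carve-out follows the FCC spokesperson’s description reported by Telecompetitor; the function and parameter names are my own):

    def classify_location(down_mbps: float, up_mbps: float,
                          only_unlicensed_fixed_wireless: bool = False) -> str:
        """Classify a location under the NTIA definitions described above."""
        # Locations served only by fixed wireless over unlicensed spectrum
        # count as unserved regardless of speed.
        if only_unlicensed_fixed_wireless:
            return "unserved"
        if down_mbps < 25 or up_mbps < 3:
            return "unserved"
        if down_mbps < 100 or up_mbps < 20:
            return "underserved"
        return "served"

    print(classify_location(10, 1))          # unserved
    print(classify_location(50, 10))         # underserved
    print(classify_location(200, 35, True))  # unserved (unlicensed fixed wireless)
    print(classify_location(300, 30))        # served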

PCMag suggests one noteworthy change from the last maps is a 50% downgrade in Starlink’s reported speeds. This is a well-known issue. Last year, the FCC rejected Starlink’s application to receive nearly $900 million in broadband funding, citing doubts that the company could provide the grant’s required speeds of 100/20 Mbps.

Writing in Broadband Breakfast, Tom Reid concludes that, despite the improvements, the new maps inflate the availability and speed of many locations:

Broadband improvements have been constrained for decades by inaccurate maps, yet the Federal Communications Commission continues to accept dramatically exaggerated availability and capacity claims from internet service providers. The cumbersome challenge process requires consumers and units of government to prove a negative—a logical fallacy.

Similarly, Joe Valandra, CEO of the Native American-owned firm Tribal Ready, notes that tribal data historically has been excluded or misinterpreted in broadband maps. He urges tribal governments to gather broadband-coverage data for the state mapping process to support grants under the BEAD program.

Based on the short period of time between the latest map’s release and the expected timing of BEAD grants to the states, it’s likely that the allocation of the BEAD funds has already been—or will soon be—established.

Conclusion

It is, of course, always difficult to read the tea leaves, but there are some important things to watch out for as the BEAD process moves forward. There will be complaints in various directions: missing locations, locations that don’t exist, incorrect speed data. Nevertheless, the results we have seen thus far are largely in line with what my colleagues and I have been writing for years: about 5-7% of U.S. households are unserved.
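
As a rough cross-check using the map’s own figures (8.3 million unserved locations out of the more than 114 million identified), the implied unserved share comes out near the top of that range:

    unserved_locations = 8.3e6  # FCC: homes and businesses lacking high-speed broadband
    total_locations = 114e6     # FCC: locations where fixed broadband could be installed

    # Roughly 7% of mapped locations are unserved. Locations include businesses,
    # so this is only a loose proxy for the household share.
    print(unserved_locations / total_locations)  # ~0.073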

On the one hand, public policy has been guided by a reasonable assumption that a small but significant share of the population would benefit from improved (or any) internet access. On the other hand, the latest map reinforces the point we’ve made consistently: much of the story around broadband is about take-up, rather than access, and centers on that last hard core of nonadopters. Even when service is available, some households will not adopt at any price.

Going forward, the next big challenge will be to make sure the huge wash of BEAD funding isn’t dissipated by waste, fraud, and abuse. Since these are block grants to the states, it will be very easy to lose sight of how the money is spent across the country. Spending that money well will be critical to closing the digital divide.