
Politicians have recently called for price controls to address the high costs of pharmaceuticals. Price controls are government-mandated limits on prices or government-required price discounts. On the campaign trail, Hillary Clinton has called for price controls for lower-income Medicare patients, while Donald Trump has recently joined Clinton, Bernie Sanders, and President Obama in calling for more government intervention in the Medicare Part D program. Before embarking upon additional price controls for the drug industry, policymakers and presidential candidates would do well to understand the impacts of, and problems arising from, existing controls.

Unbeknownst to many, a vast array of price controls is already in place in the pharmaceutical market. Over 40 percent of outpatient drug spending flows through public programs that use price controls. In order to sell drugs to consumers covered by these public programs, manufacturers must agree to offer certain rebates or discounts on drug prices. The calculations are generally based on the Average Manufacturer Price (AMP, the average price wholesalers pay manufacturers for drugs sold to retail pharmacies) or the Best Price (the lowest price at which the manufacturer offers the drug to any purchaser, net of all rebates and discounts). The most significant public programs using some form of price control are described below.

  1. Medicaid

The Medicaid program provides health insurance for low-income and medically needy individuals. The legally required rebate depends on the specific category of drug; brand-name manufacturers, for example, must sell drugs at the lesser of AMP less 23.1% or the best price offered to any purchaser.
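To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical numbers) of the brand-drug price rule just described. It ignores the statute's inflation-based additional rebates and other adjustments:

```python
def medicaid_brand_price(amp: float, best_price: float) -> float:
    """Sketch of the brand-drug Medicaid price floor: the lesser of
    AMP less the 23.1% basic rebate or the manufacturer's best price.
    The actual statutory formula adds inflation-based rebates and
    other adjustments not modeled here."""
    return min(amp * (1 - 0.231), best_price)

# Hypothetical numbers: a $100 AMP and an $80 best price yield a
# $76.90 Medicaid price, since 23.1% off AMP beats the best price.
print(medicaid_brand_price(100.0, 80.0))  # ~76.9
```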

The Affordable Care Act significantly expanded Medicaid eligibility so that in 2014, the program covered approximately 64.9 million individuals, or 20 percent of the U.S. population. State Medicaid data indicates that manufacturers paid an enormous sum — in excess of $16.7 billion — in Medicaid rebates in 2012.

  2. 340B Program

The “340B Program,” created by Congress in 1992, requires drug manufacturers to provide outpatient drugs at significantly reduced prices to 340B-eligible entities—entities that serve a high proportion of low-income or uninsured patients. As with Medicaid, the 340B discount must be at least 23.1 percent off AMP. However, the statutory formula calculates different discounts for different products and is estimated to produce discounts averaging 45 percent. Surprisingly, the formula can even yield a negative 340B selling price for a drug, in which case manufacturers are instructed to set the price at a penny.
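A simplified sketch of how the 340B ceiling and the penny-pricing fallback interact (the unit rebate amount here is a stand-in for the full Medicaid rebate calculation, inflation penalties included; all numbers hypothetical):

```python
def ceiling_340b(amp: float, unit_rebate_amount: float) -> float:
    """Illustrative 340B ceiling price: AMP minus the Medicaid unit
    rebate amount, floored at a penny when inflation penalties push
    the formula negative ("penny pricing")."""
    return max(amp - unit_rebate_amount, 0.01)

# Hypothetical numbers:
print(ceiling_340b(100.0, 45.0))   # 55.0  -- a 45% discount
print(ceiling_340b(100.0, 120.0))  # 0.01  -- a penny-priced drug
```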

The Affordable Care Act broadened the definition of qualified buyers to include many additional types of hospitals. As a result, both the number of 340B-eligible hospitals and the money spent on 340B drugs tripled between 2005 and 2014. By 2014, there were over 14,000 hospitals and affiliated sites in the 340B program, representing about one-third of all U.S. hospitals.

The 340B program has a glaring flaw that punishes the pharmaceutical industry without any offsetting benefit for low-income patients. The 340B statute does NOT require that providers dispense 340B drugs only to needy patients. Providers may also sell drugs purchased at the steep 340B discount to non-qualified patients and pocket the difference between the discounted 340B price and the reimbursement paid by those patients’ private insurers, a practice that amounts to merely shifting profits from pharmaceutical companies to other health care providers. About half of 340B entities generate significant revenues from private insurer reimbursements that exceed 340B prices.

  3. Departments of Defense and Veterans Affairs Drug Programs

In order to sell drugs through the Medicaid program, drug manufacturers must also provide drugs to four government agencies—the VA, Department of Defense, Public Health Service and Coast Guard—at statutorily imposed discounts. The required discounted price is the lesser of 24% off AMP or the lowest price manufacturers charge their most-favored nonfederal customers under comparable terms. Studies indicate that, because of additional contracts that generate pricing concessions from specific vendors, VA and DOD pricing for brand pharmaceuticals was approximately 41-42% of the average wholesale price.

  4. Medicare Part D

An optional Medicare prescription drug benefit (Medicare Part D) was enacted as part of the Medicare Modernization Act of 2003 and took effect in 2006, offering coverage to many of the nation’s retirees and disabled persons. Unlike Medicaid and the 340B program, Part D imposes no statutory rebate level on covered prescription drugs. Instead, private Medicare Part D plans, acting on behalf of the Medicare program, negotiate prices with pharmaceutical manufacturers and may obtain price concessions in the form of rebates. Manufacturers are willing to offer significant rebates and discounts in order to provide drugs to the millions of covered participants; the rebates often amount to as much as a 20-30 percent discount on brand medicines. CMS reported that manufacturers paid in excess of $10.3 billion in Part D rebates in 2012.

The Medicare Part D program does include direct price controls on drugs sold in the coverage gap. The coverage gap (or “donut hole”) is a spending level in which enrollees are responsible for a larger share of their total drug costs. For 2016, the coverage gap begins when the individual and the plan have spent $3,310 on covered drugs and ends when $7,515 has been spent. Medicare Part D requires brand drug manufacturers to offer 50 percent discounts on drugs sold during the coverage gap. These required discounts will cost drug manufacturers approximately $41 billion between 2012 and 2021.
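A stylized sketch of the 2016 coverage-gap discount, using the thresholds above (real benefit administration also tracks enrollees' out-of-pocket spending separately, which this toy version ignores):

```python
def gap_discount_2016(total_drug_spend: float, brand_price: float) -> float:
    """Toy model of the 2016 Part D coverage-gap discount: brand
    manufacturers cover 50% of a brand drug's price while the
    enrollee's total drug spending sits inside the gap."""
    GAP_START, GAP_END = 3310.0, 7515.0
    if GAP_START <= total_drug_spend < GAP_END:
        return 0.50 * brand_price
    return 0.0

# A $200 brand prescription filled at $5,000 of total spending
# obligates the manufacturer to a $100 discount (hypothetical numbers).
print(gap_discount_2016(5000.0, 200.0))  # 100.0
```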

While existing price controls do produce lower prices for some consumers, they may also result in increased prices for others, and in the long term may drive up prices for all. Many of the required rebates under Medicaid, the 340B program, and the VA and DOD programs are based on drugs’ AMP. Calculating rebates from average drug prices gives manufacturers an incentive to charge higher prices to wholesalers and pharmacies in order to offset discounts. Moreover, with at least 40% of drugs sold under price controls, and some programs even requiring drugs to be sold for a penny, manufacturers are forced to sell many drugs at significant discounts. This creates incentives to charge higher prices to other, non-covered patients to offset the discounts. Further price controls will only amplify these incentives and create inefficient market imbalances.

 

On January 12, 2016, the California state legislature will hold a hearing on AB 463: the Pharmaceutical Cost Transparency Act of 2016. The proposed bill would require drug manufacturers to disclose sensitive information about each drug priced above a certain level. The required disclosures include various production costs: the costs of materials and manufacturing, research and development, clinical trials, any patents or licenses acquired in the development process, and marketing and advertising. Manufacturers would also be required to disclose a record of any pricing changes for the drug, the total profit attributable to the drug, the manufacturer’s aid to prescription assistance programs, and the costs of projects or drugs that fail during development. Similar bills have been proposed in other states.

The stated goal of the proposed Act is to ‘make pharmaceutical pricing as transparent as the pricing in other sectors of the healthcare industry.’ However, by focusing almost exclusively on cost disclosure, the bill seemingly ignores the fact that market price is determined by both supply and demand. Although development and production costs are certainly important, pricing also reflects therapeutic value, market size, available substitutes, remaining patent life, and many other factors.

Moreover, the bill does not clarify how drug manufacturers are to account for and disclose one of their most significant costs: the cost of failed drugs that never make it to market. Data suggest that only around 10 percent of drugs that begin clinical trials are eventually approved by the FDA. Drug companies depend on the profits from these “hits” to stay in business; they must use the profits from successful drugs to subsidize the significant losses from the 90 percent that fail. AB 463 enables manufacturers to disclose the costs of failures, but it is unclear whether they may count the total losses from the 90 percent of drugs that fail, or only the losses from failed drugs developed in conjunction with the drug in question. Moreover, even though profits from successful drugs are necessary to subsidize failures, AB 463 is silent on whether the losses from failures can be included in profit calculations.

It’s also worth pointing out that any evaluation of drug manufacturers’ profits should recognize the basic risk-return tradeoff. In order to willingly incur risk—and a 90 percent failure rate of drugs in development is a significant risk—investors and companies demand returns greater than those on less risky endeavors. That is, if investors or companies can make a 5% return on a safe, predictable investment with little variation in returns, why would they ever engage in a risky endeavor (especially one with a 90% failure rate) unless they earn a substantially higher return? The market resolves these issues by compensating risky endeavors with a higher expected return. Thus, we should expect companies engaged in the risky business of drug development to earn higher profits than firms in more conservative lines of business.
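The arithmetic behind that intuition is simple. Under the stylized assumptions in the text (a 10 percent success rate and total loss on failures), the following sketch shows the return a “hit” must generate just to match a safe benchmark:

```python
def required_hit_return(p_success: float, safe_return: float,
                        risk_premium: float = 0.0) -> float:
    """Return (as a multiple of invested capital) that a successful
    drug must earn for the expected return to match a safe benchmark
    plus a risk premium, assuming failures lose everything:
        p * r + (1 - p) * (-1) = safe_return + risk_premium
    """
    return (safe_return + risk_premium + 1 - p_success) / p_success

# Merely matching a 5% safe return with a 90% failure rate already
# requires a 950% return on each success...
print(required_hit_return(0.10, 0.05))        # ~9.5
# ...and risk-averse investors demand a premium on top of that.
print(required_hit_return(0.10, 0.05, 0.10))  # ~10.5
```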

It will also prove difficult, if not impossible, for drug manufacturers to disclose information about even the “hits,” because many of the costs that manufacturers incur are difficult to attribute to a specific drug. Much pre-clinical research serves to generate dozens or hundreds of possible drug candidates; how should these very expensive research costs be attributed? How should companies allocate the costs of meeting regulatory requirements, which are rarely incurred independently for each drug? And the overhead costs of operating a business with thousands of employees are likewise impossible to allocate to a specific drug. By ignoring these shared costs, AB 463 does little to illuminate the full costs borne by drug manufacturers.

Instead of providing useful information to make drug pricing more transparent, AB 463 will impose extensive legal and regulatory costs on businesses. The additional disclosure directly increases costs for manufacturers as they collect, prepare, and present the required data. Manufacturers will also incur significant costs as they consult with lawyers and regulators to ensure that they are meeting the disclosure requirements. These costs will ultimately be passed on to consumers in the form of higher drug prices.

Finally, disclosure of such competitively sensitive information as that required under AB 463 risks harming competition if it gets into the wrong hands. If the confidentiality provisions prove unclear or inadequate, AB 463 may permit the broader disclosure of sensitive information to competitors. This would, in turn, facilitate collusion, raise prices, and harm the very consumers AB 463 is designed to protect.

In sum, the incomplete disclosure required under AB 463 will provide little transparency to the public. Those resources would be better spent fostering innovation and developing new treatments that lower total health care costs in the long run.

A number of blockbuster mergers have received (often negative) attention from media and competition authorities in recent months. From the recently challenged Staples-Office Depot merger to the abandoned Comcast-Time Warner merger to the heavily scrutinized Aetna-Humana merger (among many others), there has been a wave of potential mega-mergers throughout the economy—many of them met with regulatory resistance. We’ve discussed several of these mergers at TOTM (see, e.g., here, here, here and here).

Many reporters, analysts, and even competition authorities have adopted various degrees of the usual stance that big is bad, and bigger is even badder. But worse yet, once this presumption applies, agencies have been skeptical of claimed efficiencies, placing a heightened burden on the merging parties to prove them and often ignoring them altogether. And, of course (and perhaps even worse still), there is the perennial problem of (often questionable) market definition — which tanked the Sysco/US Foods merger and which undergirds the FTC’s challenge of the Staples/Office Depot merger.

All of these issues are at play in the proposed acquisition of British aluminum can manufacturer Rexam PLC by American can manufacturer Ball Corp., which has likewise drawn the attention of competition authorities around the world — including those in Brazil, the European Union, and the United States.

But the Ball/Rexam merger has met with some important regulatory successes. Just recently the members of CADE, Brazil’s competition authority, unanimously approved the merger with limited divestitures. The most recent reports also indicate that the EU will likely approve it, as well. It’s now largely down to the FTC, which should approve the merger and not kill it or over-burden it with required divestitures on the basis of questionable antitrust economics.

The proposed merger raises a number of interesting issues in the surprisingly complex beverage container market. But this merger merits regulatory approval.

The International Center for Law & Economics recently released a research paper entitled, The Ball-Rexam Merger: The Case for a Competitive Can Market. The white paper offers an in-depth assessment of the economics of the beverage packaging industry; the place of the Ball-Rexam merger within this remarkably complex, global market; and the likely competitive effects of the deal.

The upshot is that the proposed merger is unlikely to have anticompetitive effects, and any competitive concerns that do arise can be readily addressed by a few targeted divestitures.

The bottom line

The production and distribution of aluminum cans is a surprisingly dynamic industry, characterized by evolving technology, shifting demand, complex bargaining dynamics, and significant changes in the costs of production and distribution. Despite the superficial appearance that the proposed merger will increase concentration in aluminum can manufacturing, we conclude that a proper understanding of the marketplace dynamics suggests that the merger is unlikely to have actual anticompetitive effects.

All told, and as we summarize in our Executive Summary, we found at least seven specific reasons for this conclusion:

  1. Because the appropriately defined product market includes not only stand-alone can manufacturers, but also vertically integrated beverage companies, as well as plastic and glass packaging manufacturers, the actual increase in concentration from the merger will be substantially less than suggested by the change in the number of nationwide aluminum can manufacturers (see the HHI sketch following this list).
  2. Moreover, in nearly all of the relevant geographic markets (which are much smaller than the typically nationwide markets from which concentration numbers are derived), the merger will not affect market concentration at all.
  3. While beverage packaging isn’t a typical, rapidly evolving, high-technology market, technological change is occurring. Coupled with shifting consumer demand (often driven by powerful beverage company marketing efforts), and considerable (and increasing) buyer power, historical beverage packaging market shares may have little predictive value going forward.
  4. The key importance of transportation costs and the effects of current input prices suggest that expanding demand can be effectively met only by expanding the geographic scope of production and by economizing on aluminum supply costs. These, in turn, suggest that increasing overall market concentration is consistent with increased, rather than decreased, competitiveness.
  5. The markets in which Ball and Rexam operate are dominated by a few large customers, who are themselves direct competitors in the upstream marketplace. These companies have shown a remarkable willingness and ability to invest in competing packaging supply capacity and to exert their substantial buyer power to discipline prices.
  6. For this same reason, complaints leveled against the proposed merger by these beverage giants — which are as much competitors as they are customers of the merging companies — should be viewed with skepticism.
  7. Finally, the merger should generate significant managerial and overhead efficiencies, and the merged firm’s expanded geographic footprint should allow it to service larger geographic areas for its multinational customers, thus lowering transaction costs and increasing its value to these customers.
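As a toy illustration of point 1 (the shares below are purely hypothetical, not the parties' actual figures), the same merger produces a much smaller change in the Herfindahl-Hirschman Index once the market is defined to include vertically integrated and substitute-packaging capacity:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    in percentage points (a pure monopoly scores 10,000)."""
    return sum(s * s for s in shares)

# Hypothetical shares, for illustration only. Narrow market: four
# stand-alone aluminum can makers, two of which merge.
narrow_pre, narrow_post = [35, 30, 20, 15], [65, 20, 15]

# Broader market: the same firms plus vertically integrated beverage
# companies and glass/plastic packaging capacity.
broad_pre  = [25, 21, 14, 10, 12, 10, 8]
broad_post = [46, 14, 10, 12, 10, 8]

print(hhi(narrow_post) - hhi(narrow_pre))  # delta-HHI = 2100
print(hhi(broad_post) - hhi(broad_pre))    # delta-HHI = 1050
```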

Distinguishing Ardagh: The interchangeability of aluminum and glass

An important potential sticking point for the FTC’s review of the merger is its recent decision to challenge the Ardagh-Saint Gobain merger. The cases are superficially similar, in that they both involve beverage packaging. But Ardagh should not stand as a model for the Commission’s treatment of Ball/Rexam. The FTC made a number of mistakes in Ardagh (including market definition and the treatment of efficiencies — the latter of which brought out a strenuous dissent from Commissioner Wright). But even on its own (questionable) terms, Ardagh shouldn’t mean trouble for Ball/Rexam.

As we noted in our December 1st letter to the FTC on the Ball/Rexam merger, and as we discuss in detail in the paper, the situation in the aluminum can market is quite different than the (alleged) market for “(1) the manufacture and sale of glass containers to Brewers; and (2) the manufacture and sale of glass containers to Distillers” at issue in Ardagh.

Importantly, the FTC found (almost certainly incorrectly, at least for the brewers) that other container types (e.g., plastic bottles and aluminum cans) were not part of the relevant product market in Ardagh. But in the markets in which aluminum cans are a primary form of packaging (most notably, soda and beer), our research indicates that glass, plastic, and aluminum are most definitely substitutes.

The Big Four beverage companies (Coca-Cola, PepsiCo, Anheuser-Busch InBev, and MillerCoors), which collectively make up 80% of the U.S. market for Ball and Rexam, are all vertically integrated to some degree, and provide much of their own supply of containers (a situation significantly different than the distillers in Ardagh). These companies exert powerful price discipline on the aluminum packaging market by, among other things, increasing (or threatening to increase) their own container manufacturing capacity, sponsoring new entry, and shifting production (and, via marketing, consumer demand) to competing packaging types.

For soda, Ardagh is obviously inapposite, as soda packaging wasn’t at issue there. But the FTC’s conclusion in Ardagh that aluminum cans (which in fact make up 56% of the beer packaging market) don’t compete with glass bottles for beer packaging is also suspect.

For aluminum can manufacturers Ball and Rexam, aluminum can’t be excluded from the market (obviously), and much of the beer in the U.S. that is packaged in aluminum is quite clearly also packaged in glass. The FTC claimed in Ardagh that glass and aluminum are consumed in distinct situations, so they don’t exert price pressure on each other. But that ignores the considerable ability of beer manufacturers to influence consumption choices, as well as the reality that consumer preferences for each type of container (whether driven by beer company marketing efforts or not) are converging, with cost considerations dominating other factors.

In fact, consumers consume beer in both packaging types largely interchangeably (with a few limited exceptions — e.g., poolside drinking demands aluminum or plastic), and beer manufacturers readily switch between the two types of packaging as the relative production costs shift.

Craft brewers, to take one important example, are rapidly switching to aluminum from glass, despite a supposed stigma surrounding canned beers. Some craft brewers (particularly the larger ones) package at least some of their beers in both types of containers, selling some brands in glass and some in cans, while for many craft brewers it’s one or the other. Yet there’s no indication that craft beer consumption has fallen off because consumers won’t drink beer from cans in some situations — and obviously the prospect of this outcome hasn’t stopped craft brewers from abandoning bottles entirely in favor of more economical cans, nor has it induced them, as a general rule, to offer both types of packaging.

A very short time ago it might have seemed that aluminum wasn’t in the same market as glass for craft beer packaging. But, as recent trends have borne out, that differentiation wasn’t primarily a function of consumer preference (either at the brewer or end-consumer level). Rather, it was a function of bottling/canning costs (until recently the machinery required for canning was prohibitively expensive), materials costs (at various times glass has been cheaper than aluminum, depending on volume), and transportation costs (which cut against glass; the relative attractiveness of different packaging materials is importantly a function of these variable costs). To be sure, consumer preference isn’t irrelevant, but the ease with which brewers have shifted consumer preferences suggests that it isn’t a strong constraint.

Transportation costs are key

Transportation costs, in fact, are a key part of the story — and of the conclusion that the Ball/Rexam merger is unlikely to have anticompetitive effects. First of all, transporting empty cans (or bottles, for that matter) is tremendously inefficient — which means that the relevant geographic markets for assessing the competitive effects of the Ball/Rexam merger are essentially the largely non-overlapping 200-mile circles around the companies’ manufacturing facilities. Because there are very few markets in which the two companies both have plants, the merger doesn’t change the extent of competition in the vast majority of relevant geographic markets.
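A toy version of that geographic screen (coordinates are illustrative only; the real analysis turns on actual plant locations and shipping data): two plants can compete for the same customers only if their delivery circles overlap.

```python
import math

def miles_between(a, b):
    """Great-circle distance in miles between two (lat, lon) points,
    via the haversine formula."""
    (lat1, lon1), (lat2, lon2) = a, b
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 3958.8 * math.asin(math.sqrt(h))

def plants_compete(plant_a, plant_b, radius_miles=200.0):
    """Two plants can serve a common customer only if their shipping
    circles intersect, i.e. the plants sit less than two radii apart."""
    return miles_between(plant_a, plant_b) < 2 * radius_miles

# Hypothetical plant sites: Chicago/Milwaukee circles overlap;
# Chicago/Denver circles do not.
print(plants_compete((41.9, -87.6), (43.0, -87.9)))   # True  (~80 miles)
print(plants_compete((41.9, -87.6), (39.7, -104.9)))  # False (~900 miles)
```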

But transportation costs are also relevant to the interchangeability of packaging materials. Glass is more expensive to transport than aluminum, and this is true not just for empty bottles, but for full ones, of course. So, among other things, by switching to cans (even if it entails up-front cost), smaller breweries can expand their geographic reach, potentially expanding sales enough to more than cover switching costs. The merger would further lower the costs of cans (and thus of geographic expansion) by enabling beverage companies to transact with a single company across a wider geographic range.

The reality is that the most important factor in packaging choice is cost, and that the packaging alternatives are functionally interchangeable. As a result, and given that the direct consumers of beverage packaging are beverage companies rather than end-consumers, relatively small cost changes readily spur changes in packaging choices. While some switching costs might impede these shifts, they are easily overcome. For large beverage companies that already use multiple types and sizes of packaging for the same product, the costs are trivial: They already have packaging designs, marketing materials, distribution facilities and the like in place. For smaller companies, a shift can be more difficult, but innovations in labeling, mobile canning/bottling facilities, outsourced distribution and the like significantly reduce these costs.

“There’s a great future in plastics”

All of this is even more true for plastic — even in the beer market. In fact, in 2010, 10% of the beer consumed in Europe was sold in plastic bottles, as was 15% of all beer consumed in South Korea. We weren’t able to find reliable numbers for the U.S., but particularly for cheaper beers, U.S. brewers are increasingly moving to plastic. And plastic bottles are the norm at stadiums and arenas. Whatever the exact numbers, clearly plastic holds a small fraction of the beer container market compared to glass and aluminum. But that number is just as clearly growing, and as cost considerations impel them (and technology enables them), giant, powerful brewers like AB InBev and MillerCoors are certainly willing and able to push consumers toward plastic.

Meanwhile, soda companies like Coca-Cola and Pepsi have successfully moved their markets so that today a majority of packaged soda is sold in plastic containers. There’s no evidence that this shift came about as a result of end-consumer demand, nor that the shift to plastic was delayed by a lack of demand elasticity; rather, it was primarily a function of these companies’ ability to realize bigger profits on sales in plastic containers (not least because they own their own plastic packaging production facilities).

And while it’s not at issue in Ball/Rexam because spirits are rarely sold in aluminum packaging, the FTC’s conclusion in Ardagh that

[n]on-glass packaging materials, such as plastic containers, are not in this relevant product market because not enough spirits customers would switch to non-glass packaging materials to make a SSNIP in glass containers to spirits customers unprofitable for a hypothetical monopolist

is highly suspect — which suggests the Commission may have gotten it wrong in other ways, too. For example, as one report notes:

But the most noteworthy inroads against glass have been made in distilled liquor. In terms of total units, plastic containers, almost all of them polyethylene terephthalate (PET), have surpassed glass and now hold a 56% share, which is projected to rise to 69% by 2017.

True, most of this must be tiny-volume airplane bottles, but by no means all of it is, and it’s clear that the cost advantages of plastic are driving a shift in distilled liquor packaging, as well. Some high-end brands are even moving to plastic. Whatever resistance may have existed in the past because of glass’s “image” (and this is true for beer, too) is breaking down: Don’t forget that even high-quality wines are now often sold with screw-tops or even in boxes — something that was once thought impossible.
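For readers unfamiliar with the SSNIP machinery quoted from Ardagh above, the standard critical-loss arithmetic shows what is at stake (the margin below is hypothetical):

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Critical loss: the fraction of unit sales a hypothetical
    monopolist can afford to lose before a price increase of size
    `ssnip` becomes unprofitable, given a contribution margin
    `margin` (both expressed as fractions of price)."""
    return ssnip / (ssnip + margin)

# With a 5% SSNIP and a hypothetical 40% margin, losing more than
# ~11% of sales makes the increase unprofitable -- so if enough
# buyers would shift to PET plastic, a "glass only" market is
# drawn too narrowly.
print(round(critical_loss(0.05, 0.40), 3))  # 0.111
```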

The overall point is that the beverage packaging market faced by can makers like Ball and Rexam is remarkably complex, and, crucially, the presence of powerful, vertically integrated customers means that past or current demand by end-users is a poor indicator of what the market will look like in the future as input costs and other considerations faced by these companies shift. Right now, for example, over 50% of the world’s soda is packaged in plastic bottles, and this share is set to increase: The global plastic packaging market (not limited to just beverages) is expected to grow at a CAGR of 5.2% between 2014 and 2020, while aluminum packaging is expected to grow at just 2.9%.
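Compounding those growth rates out (a quick back-of-the-envelope check) shows how fast the gap widens:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value forward at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# Indexing both packaging markets to 100 in 2014:
print(round(project(100, 0.052, 6), 1))  # plastic:  ~135.5 by 2020
print(round(project(100, 0.029, 6), 1))  # aluminum: ~118.7 by 2020
```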

A note on efficiencies

As noted above, the proposed Ball/Rexam merger also holds out the promise of substantial efficiencies (estimated at $300 million by the merging parties, due mainly to decreased transportation costs). There is a risk, however, that the FTC may effectively disregard those efficiencies, as it did in Ardagh (and in St. Luke’s before it), by saddling them with a higher burden of proof than it requires of its own prima facie claims. If the goal of antitrust law is to promote consumer welfare, competition authorities can’t ignore efficiencies in merger analysis.

In his Ardagh dissent, Commissioner Wright noted that:

Even when the same burden of proof is applied to anticompetitive effects and efficiencies, of course, reasonable minds can and often do differ when identifying and quantifying cognizable efficiencies as appears to have occurred in this case.  My own analysis of cognizable efficiencies in this matter indicates they are significant.   In my view, a critical issue highlighted by this case is whether, when, and to what extent the Commission will credit efficiencies generally, as well as whether the burden faced by the parties in establishing that proffered efficiencies are cognizable under the Merger Guidelines is higher than the burden of proof facing the agencies in establishing anticompetitive effects. After reviewing the record evidence on both anticompetitive effects and efficiencies in this case, my own view is that it would be impossible to come to the conclusions about each set forth in the Complaint and by the Commission — and particularly the conclusion that cognizable efficiencies are nearly zero — without applying asymmetric burdens.

The Commission shouldn’t make the same mistake here. In fact, here, where can manufacturers are squeezed between powerful companies both upstream (e.g., Alcoa) and downstream (e.g., AB InBev), and where transportation costs limit the opportunities for expanding the customer base of any particular plant, the ability to capitalize on economies of scale and geographic scope is essential to independent manufacturers’ abilities to efficiently meet rising demand.

Read our complete assessment of the merger’s effect here.

Today the International Center for Law & Economics (ICLE) submitted an amicus brief to the Supreme Court of the United States supporting Apple’s petition for certiorari in its e-books antitrust case. ICLE’s brief was signed by sixteen distinguished scholars of law, economics and public policy, including an Economics Nobel Laureate, a former FTC Commissioner, ten PhD economists and ten professors of law (see the complete list, below).

Background

Earlier this year a divided panel of the Second Circuit ruled that Apple “orchestrated a conspiracy among [five major book] publishers to raise ebook prices… in violation of § 1 of the Sherman Act.” Significantly, the court ruled that Apple’s conduct constituted a per se unlawful horizontal price-fixing conspiracy, meaning that the procompetitive benefits of Apple’s entry into the e-books market were irrelevant to the liability determination.

Apple filed a petition for certiorari with the Supreme Court seeking review of the ruling on the question of

Whether vertical conduct by a disruptive market entrant, aimed at securing suppliers for a new retail platform, should be condemned as per se illegal under Section 1 of the Sherman Act, rather than analyzed under the rule of reason, because such vertical activity also had the alleged effect of facilitating horizontal collusion among the suppliers.

Summary of Amicus Brief

The Second Circuit’s ruling is in direct conflict with the Supreme Court’s 2007 Leegin decision, and creates a circuit split with the Third Circuit based on that court’s Toledo Mack ruling. ICLE’s brief urges the Court to review the case in order to resolve the significant uncertainty created by the Second Circuit’s ruling, particularly for the multi-sided platform companies that epitomize the “New Economy.”

As ICLE’s brief discusses, the Second Circuit committed several important errors in its ruling:

First, as the Supreme Court held in Leegin, condemnation under the per se rule is appropriate “only for conduct that would always or almost always tend to restrict competition” and “only after courts have had considerable experience with the type of restraint at issue.” Neither is true in this case. Businesses often employ one or more forms of vertical restraints to make entry viable, and the Court has blessed such conduct, categorically holding in Leegin that “[v]ertical price restraints are to be judged according to the rule of reason.”

Furthermore, the conduct at issue in this case — the use of “Most-Favored Nation Clauses” in Apple’s contracts with the publishers and its adoption of the so-called “agency model” for e-book pricing — has never been reviewed by the courts in a setting like this one, let alone found to “always or almost always tend to restrict competition.” There is no support in the case law or economic literature for the proposition that agency models or MFNs used to facilitate entry by new competitors in platform markets like this one are anticompetitive.

Second, the negative consequences of the court’s ruling will be particularly acute for modern, high-technology sectors of the economy, where entrepreneurs planning to deploy new business models will now face exactly the sort of artificial deterrents that the Court condemned in Trinko: “Mistaken inferences and the resulting false condemnations are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” Absent review by the Supreme Court to correct the Second Circuit’s error, the result will be less-vigorous competition and a reduction in consumer welfare.

This case involves vertical conduct essentially indistinguishable from conduct that the Supreme Court has held to be subject to the rule of reason. But under the Second Circuit’s approach, the adoption of these sorts of efficient vertical restraints could be challenged as a per se unlawful effort to “facilitate” horizontal price fixing, significantly deterring their use. The lower court thus ignored the Supreme Court’s admonishment not to apply the antitrust laws in a way that makes the use of a particular business model “more attractive based on the per se rule” rather than on “real market conditions.”

Third, the court based its decision that per se review was appropriate largely on the fact that e-book prices increased following Apple’s entry into the market. But, contrary to the court’s suggestion, it has long been settled that such price increases do not make conduct per se unlawful. In fact, the Supreme Court has held that the per se rule is inappropriate where, as here, “prices can be increased in the course of promoting procompetitive effects.”  

Competition occurs on many dimensions other than just price; higher prices alone don’t necessarily suggest decreased competition or anticompetitive effects. Instead, higher prices may accompany welfare-enhancing competition on the merits, resulting in greater investment in product quality, reputation, innovation or distribution mechanisms.

The Second Circuit presumed that Amazon’s e-book prices before Apple’s entry were competitive, and thus that the price increases were anticompetitive. But there is no support in the record for that presumption, and it is not compelled by economic reasoning. In fact, it is at least as likely that the change in Amazon’s prices reflected the fact that Amazon’s business model pre-entry resulted in artificially low prices, and that the price increases following Apple’s entry were the product of a more competitive market.

Previous commentary on the case

For my previous writing and commentary on the case, see:

  • “The Second Circuit’s Apple e-books decision: Debating the merits and the meaning,” American Bar Association debate with Fiona Scott-Morton, DOJ Chief Economist during the Apple trial, and Mark Ryan, the DOJ’s lead litigator in the case, recording here
  • Why I think the Apple e-books antitrust decision will (or at least should) be overturned, Truth on the Market, here
  • Why I think the government will have a tough time winning the Apple e-books antitrust case, Truth on the Market, here
  • The procompetitive story that could undermine the DOJ’s e-books antitrust case against Apple, Truth on the Market, here
  • How Apple can defeat the DOJ’s e-book antitrust suit, Forbes, here
  • The US e-books case against Apple: The procompetitive story, special issue of Concurrences on “E-books and the Boundaries of Antitrust,” here
  • Amazon vs. Macmillan: It’s all about control, Truth on the Market, here

Other TOTM authors have also weighed in. See, e.g.:

  • The Second Circuit Misapplies the Per Se Rule in U.S. v. Apple, Alden Abbott, here
  • The Apple E-Book Kerfuffle Meets Alfred Marshall’s Principles of Economics, Josh Wright, here
  • Apple and Amazon E-Book Most Favored Nation Clauses, Josh Wright, here

Amicus Signatories

  • Babette E. Boliek, Associate Professor of Law, Pepperdine University School of Law
  • Henry N. Butler, Dean and Professor of Law, George Mason University School of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, School of Management, University of Texas-Dallas
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Scott E. Masten, Professor of Business Economics & Public Policy, Stephen M. Ross School of Business, The University of Michigan
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Thomas D. Morgan, Professor Emeritus, George Washington University Law School
  • David S. Olson, Associate Professor of Law, Boston College Law School
  • Joanna Shepherd, Professor of Law, Emory University School of Law
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics,  The George L. Argyros School of Business and Economics and Professor of Economics and Law, Dale E. Fowler School of Law, Chapman University
  • Michael E. Sykuta, Associate Professor, Division of Applied Social Sciences, University of Missouri-Columbia
  • Alex Tabarrok, Bartley J. Madden Chair in Economics at the Mercatus Center and Professor of Economics, George Mason University
  • David J. Teece, Thomas W. Tusher Professor in Global Business and Director, Center for Global Strategy and Governance, Haas School of Business, University of California Berkeley
  • Alexander Volokh, Associate Professor of Law, Emory University School of Law
  • Joshua D. Wright, Professor of Law, George Mason University School of Law

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped-up vigilance, particularly of health insurance mergers, is required to ensure that competition in health insurance markets continues, and that insurance companies don’t undermine the realization of the ACA’s objectives in the provider market through anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the health care markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what counts as an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the medical loss ratio (MLR) rule requiring insurance companies to spend at least 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.
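A stylized sketch of the MLR squeeze (the actual rules use multi-year averages and credibility adjustments that this toy version ignores):

```python
def mlr_rebate(premiums: float, claims_and_quality: float,
               mlr_floor: float = 0.80) -> float:
    """Stylized ACA medical loss ratio rebate: an insurer spending
    less than the floor (80% in the individual and small-group
    markets) on claims and quality improvement must rebate the
    shortfall to policyholders."""
    return max(0.0, mlr_floor * premiums - claims_and_quality)

# An insurer collecting $100M in premiums but spending only $75M on
# care owes roughly a $5M rebate (hypothetical numbers), squeezing
# margins as described.
print(mlr_rebate(100e6, 75e6))  # 5000000.0
```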

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both health care providers and health insurance companies.

Further, some of the ACA’s effects have decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government gives higher payments for Medicare services delivered by a hospital versus an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if consolidation is correlated with sub-optimal outcomes, it is the consolidation causing those outcomes, rather than something else (like the ACA) causing both the sub-optimal outcomes and the consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad” may be undermining both of these objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better quality care and cost savings passed on to consumers. Increased entry in most of the markets in which the merging companies compete (due at least in part to the ACA), along with integrated health networks entering or threatening to enter insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.

Conclusion

In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

The Heritage Foundation continues to do path-breaking work on the burden overregulation imposes on the American economy, and to promote comprehensive reform measures to reduce regulatory costs.  Overregulation, unfortunately, is a global problem, and one that is related to the problem of anticompetitive market distortions (ACMDs) – government-supported cronyist restrictions that weaken the competitive process, undermine free trade, slow economic growth, and harm consumers.  Shanker Singham and I have written about the importance of estimating the effects of and tackling ACMDs if international trade liberalization measures are to be successful in promoting economic growth and efficiency.

The key role of tackling ACMDs in spurring economic growth is highlighted by the highly publicized Greek economic crisis. The Heritage Foundation recently assessed the issues of fiscal profligacy and over-taxation that need to be addressed by Greece. While those issues are of central importance, Greece will not be able to fulfill its economic potential without also undertaking substantial regulatory reforms and eliminating ACMDs. In that regard, a 2014 OECD report on competition-distorting rules and provisions in Greece concluded that the elimination of barriers to competition would lead to increased productivity, stronger economic growth, and job creation. That report, which focused on regulatory restrictions in just four sectors of the Greek economy (food processing, retail trade, building materials, and tourism), made 329 specific recommendations to mitigate harm to competition. It estimated that the benefit to the Greek economy of implementing those reforms would be around EUR 5.2 billion – the equivalent of 2.5% of GDP – due to increased purchasing power for consumers and efficiency gains for companies. It also stressed that implementing those recommendations would have an even wider impact over time. Extended to all other sectors of the Greek economy (which are also plagued by overregulation and competitive distortions), the welfare gains from regulatory reform would be far larger. The OECD’s Competition Assessment Toolkit provides a useful framework that Greece and other reform-minded nations could use to identify harmful regulatory restrictions.

Unfortunately, in Greece and elsewhere, merely identifying the sources of bad regulation is not enough – political will is needed to actually dismantle harmful regulatory barriers and cronyist rules.  As Shanker Singham pointed out yesterday in commenting on the prospects for Greek regulatory reform, “[t]here is enormous wealth locked away in the Greek economy, just as there is in every country, but distortions destroy it.  The Greek competition agency has done excellent work in promoting a more competitive market, but its political masters merely pay lip service to the concept. . . .  The Greeks have offered promises of reform, but very little acceptance of the major structural changes that are needed.”  The United States is not immune to this problem – consider the case of the Export-Import Bank, whose inefficient credit distortionary policies proved impervious to reform, as the Heritage Foundation explained.

What, then, can be done to reduce the burden of overregulation and ACMDs in Greece, the United States, and other countries? Consistent with Justice Louis Brandeis’s observation that “sunshine is the best disinfectant,” shining a public spotlight on the problem can, over time, help build public support for dismantling or reforming welfare-inimical restrictions. In that regard, the Heritage Foundation’s Index of Economic Freedom takes into account “regulatory efficiency” and, in particular, “the overall burden of regulation as well as the efficiency of government in the regulatory process” in producing annual ordinal rankings of every nation’s degree of economic freedom. Public concern has to translate into action to be effective, of course, and thus the Heritage Foundation has promulgated a list of legislative reforms that could help rein in federal regulatory excesses. Although there is no “silver bullet,” the Heritage Foundation will continue to publicize regulatory overreach and ACMDs, and propose practical solutions to dismantle these harmful distortions. This is a long-term fight (incentives for government to overregulate and engage in cronyism are not easily curbed), but well worth the candle.

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[when] a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added.)

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.:

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of a merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. Generally, however (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest the need for a more thorough investigation), but they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is: how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.
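For readers who want to see just how mechanical the structural presumption is, here is a minimal sketch of the screen described in § 5.3 of the 2010 Horizontal Merger Guidelines. The HHI thresholds (a post-merger HHI above 2,500 and an increase of more than 200 points) come from the Guidelines themselves; the market shares and the code are purely hypothetical illustrations, not anything drawn from the opinions discussed here.

```python
# A minimal sketch of the structural screen in the 2010 Horizontal
# Merger Guidelines (sec. 5.3). All shares below are hypothetical.

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

def guidelines_presumption(pre_merger_shares, i, j):
    """Screen a merger of the firms at indices i and j.

    The Guidelines treat a market as highly concentrated above 2,500 HHI
    and presume enhanced market power when the merger also raises the
    HHI by more than 200 points.
    """
    pre = hhi(pre_merger_shares)
    post_shares = [s for k, s in enumerate(pre_merger_shares) if k not in (i, j)]
    post_shares.append(pre_merger_shares[i] + pre_merger_shares[j])
    post = hhi(post_shares)
    return post > 2500 and (post - pre) > 200

# A hypothetical four-to-three merger of the two smallest firms:
# pre-merger HHI = 2,750; post-merger HHI = 3,350; increase = 600.
print(guidelines_presumption([35, 30, 20, 15], 2, 3))  # True: presumption triggered
```

Note what the sketch does not contain: any evidence about entry, repositioning, coordination, or actual competitive effects. Everything beyond that arithmetic requires the particularized evidence Josh describes.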

Presumably the ABA’s upcoming program will explore exactly that question, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

Moderator:

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP

Speakers:

  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

Recently, Commissioner Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB) et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations, regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if the merging stations’ combined share of the audience or of advertising is below any level that could conceivably give rise to an inference of market power.

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8 to 7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Merger Guidelines, which looks at competitive effects rather than just counting competitors.
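To see how blunt a pure counting rule is, compare it with even the crudest share-based screen. The toy sketch below is ours, purely for illustration: the function names, the shares, and the 30 percent threshold are hypothetical; only the eight-voices condition tracks the FCC’s rule.

```python
# The FCC duopoly rule turns solely on counting independently owned
# "voices"; a share-based screen asks whether the deal could matter.
# Shares and the 30% threshold are hypothetical, for illustration only.

def duopoly_rule_blocks(independent_stations_remaining):
    """FCC rule: a combination is barred unless at least eight
    independently owned stations would remain in the market."""
    return independent_stations_remaining < 8

def share_screen_flags(share_a, share_b, threshold=30.0):
    """A crude effects-oriented screen: flag the deal only if the
    combined audience or advertising share is plausibly significant."""
    return share_a + share_b > threshold

# An "8 to 7" combination of two fringe stations with 3% shares each:
print(duopoly_rule_blocks(7))    # True: blocked by the counting rule
print(share_screen_flags(3, 3))  # False: no plausible market-power inference
```

The counting rule blocks the deal no matter how trivial the merging stations’ shares; an effects-based screen would not even open a file.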

Not only did the FCC fail to analyze the marketplace to understand how much competition there is among local broadcasters, cable, and online video, but it also applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri to use the $3.5 million in cost savings it generated to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with JSAs have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if JSAs harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where that Spanish-language programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Order is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.

And:

[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.

Also:

The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently in Judge Silberman’s concurrence/dissent in the 2010 Verizon v. FCC Open Internet Order case:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be interesting to see what the DC Circuit does with these arguments, given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in that case as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean)
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Last year, Microsoft’s new CEO, Satya Nadella, seemed to break with the company’s longstanding “complain instead of compete” strategy to acknowledge that:

We’re going to innovate with a challenger mindset…. We’re not coming at this as some incumbent.

Among the first items on his agenda? Treating competing platforms like opportunities for innovation and expansion rather than obstacles to be torn down by any means possible:

We are absolutely committed to making our applications run what most people describe as cross platform…. There is no holding back of anything.

Earlier this week, at its Build Developer Conference, Microsoft announced its most significant initiative yet to bring about this reality: code built into its Windows 10 OS that will enable Android and iOS developers to port apps into the Windows ecosystem more easily.

To make this possible… Windows phones “will include an Android subsystem” meant to play nice with the Java and C++ code developers have already crafted to run on a rival’s operating system…. iOS developers can compile their Objective C code right from Microsoft’s Visual Studio, and turn it into a full-fledged Windows 10 app.

Microsoft also announced that its new browser, rebranded as “Edge,” will run Chrome and Firefox extensions, and that its Office suite would enable a range of third-party services to integrate with Office on Windows, iOS, Android and Mac.

Consumers, developers and Microsoft itself should all benefit from the increased competition that these moves are certain to facilitate.

Most obviously, more consumers may be willing to switch to phones and tablets with the Windows 10 operating system if they can continue to enjoy the apps and extensions they’ve come to rely on when using Google and Apple products. As one commenter said of the move:

I left Windows phone due to the lack of apps. I love the OS though, so if this means all my favorite apps will be on the platform I’ll jump back onto the WP bandwagon in a heartbeat.

And developers should be more willing to invest when they can expect additional revenue from yet another platform running their apps and extensions, with minimal additional development required.

It’s win-win-win. Except perhaps for Microsoft’s lingering regulatory strategy to hobble Google.

That strategy is built primarily on antitrust claims, most recently rooted in arguments that consumers, developers and competitors alike are harmed by Google’s conduct around Android which, it is alleged, makes it difficult for OS makers (like Cyanogen) and app developers (like Microsoft Bing) to compete.

But Microsoft’s interoperability announcements (along with a host of other rapidly evolving market characteristics) actually serve to undermine the antitrust arguments that Microsoft, through groups like FairSearch and ICOMP, has largely been responsible for pushing in the EU against Google/Android.

The reality is that, with innovations like the one Microsoft announced this week, Microsoft, Google and Apple (and Samsung, Nokia, Tizen, Cyanogen…) are competing more vigorously on several fronts. Such competition is evidence of a vibrant marketplace that is simply not in need of antitrust intervention.

The supreme irony in this is that such a move represents a (further) nail in the coffin of the supposed “applications barrier to entry” that was central to the US DOJ’s antitrust suit against Microsoft and that factors into the contemporary Android antitrust arguments against Google.

Frankly, the argument was never very convincing. Absent unjustified and anticompetitive efforts to prop up such a barrier, the “applications barrier to entry” is just a synonym for “big.” Admittedly, the DC Court of Appeals in Microsoft was careful — far more careful than the district court — to locate specific, narrow conduct beyond the mere existence of the alleged barrier that it believed amounted to anticompetitive monopoly maintenance. But central to the imposition of liability was the finding that some of Microsoft’s conduct deterred application developers from effectively accessing other platforms, without procompetitive justification.

With the implementation of initiatives like the one Microsoft has now undertaken in Windows 10, however, it appears that such concerns regarding Google and mobile app developers are unsupportable.

Of greatest significance to the current Android-related accusations against Google, the appeals court in Microsoft also reversed the district court’s finding of liability based on tying, noting in particular that:

If OS vendors without market power also sell their software bundled with a browser, the natural inference is that sale of the items as a bundle serves consumer demand and that unbundled sale would not.

Of course this is exactly what Microsoft Windows Phone (which decidedly does not have market power) does, suggesting that the bundling of mobile OS’s with proprietary apps is procompetitive.

Similarly, in reviewing the eventual consent decree in Microsoft, the appeals court upheld the conditions that allowed the integration of OS and browser code, and rejected the plaintiff’s assertion that a prohibition on such technological commingling was required by law.

The appeals court praised the district court’s recognition that an appropriate remedy “must place paramount significance upon addressing the exclusionary effect of the commingling, rather than the mere conduct which gives rise to the effect,” as well as the district court’s acknowledgement that “it is not a proper task for the Court to undertake to redesign products.”  Said the appeals court, “addressing the applications barrier to entry in a manner likely to harm consumers is not self-evidently an appropriate way to remedy an antitrust violation.”

Today, claims that the integration of Google Mobile Services (GMS) into Google’s version of the Android OS is anticompetitive are misplaced for the same reason:

But making Android competitive with its tightly controlled competitors [e.g., Apple iOS and Windows Phone] requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

In fact, some commenters have even suggested that, by effectively making the OS more “open,” Microsoft’s new Windows 10 initiative might undermine the Windows experience in exactly this fashion:

As a Windows Phone developer, I think this could easily turn into a horrible idea…. [I]t might break the whole Windows user experience Microsoft has been building in the past few years. Modern UI design is a different approach from both Android and iOS. We risk having a very unhomogenic [sic] store with lots of apps using different design patterns, and Modern UI is in my opinion, one of the strongest points of Windows Phone.

But just because Microsoft may be willing to take this risk doesn’t mean that any sensible conception of competition law and economics should require Google (or anyone else) to do so, as well.

Most significantly, Microsoft’s recent announcement is further evidence that both technological and contractual innovations can (potentially — the initiative is too new to know its effect) transform competition, undermine static market definitions and weaken theories of anticompetitive harm.

When apps and their functionality are routinely built into some OS’s or set as defaults; when mobile apps are also available for the desktop and are seamlessly integrated to permit identical functions to be performed on multiple platforms; and when new form factors like the Apple MacBook Air and Microsoft Surface blur the lines between mobile and desktop, traditional, static anticompetitive theories are out the window (no pun intended).

Of course, it’s always been possible for new entrants to overcome network effects and scale impediments by a range of means. Microsoft itself has in the past offered to pay app developers to write for its mobile platform. Similarly, it offers inducements to attract users to its Bing search engine and it has devised several creative mechanisms to overcome its claimed scale inferiority in search.

A further irony (and market complication) is that now some of these apps — the ones with network effects of their own — threaten in turn to challenge the reigning mobile operating systems, exactly as Netscape was purported to threaten Microsoft’s OS (and lead to its anticompetitive conduct) back in the day. Facebook, for example, now offers not only its core social media function, but also search, messaging, video calls, mobile payments, photo editing and sharing, and other functionality that compete with many of the core functions built into mobile OS’s.

But the desire by apps like Facebook to expand their networks by being on multiple platforms, and the desire by these platforms to offer popular apps in order to attract users, ensure that Facebook is ubiquitous, even without any antitrust intervention. As Timothy Bresnahan, Joe Orsini and Pai-Ling Yin demonstrate:

(1) The distribution of app attractiveness to consumers is skewed, with a small minority of apps drawing the vast majority of consumer demand. (2) Apps which are highly demanded on one platform tend also to be highly demanded on the other platform. (3) These highly demanded apps have a strong tendency to multihome, writing for both platforms. As a result, the presence or absence of apps offers little reason for consumers to choose a platform. A consumer can choose either platform and have access to the most attractive apps.

Of course, even before Microsoft’s announcement, cross-platform app development was common, and third-party platforms like Xamarin facilitated cross-platform development. As Daniel O’Connor noted last year:

Even if one ecosystem has a majority of the market share, software developers will release versions for different operating systems if it is cheap/easy enough to do so…. As [Torsten] Körber documents [here], building mobile applications is much easier and cheaper than building PC software. Therefore, it is more common for programmers to write programs for multiple OSes…. 73 percent of apps developers design apps for at least two different mobile OSes, while 62 percent support 3 or more.

Whether Microsoft’s interoperability efforts prove to be “perfect” or not (and some commenters are skeptical), they seem destined to at least further decrease the cost of cross-platform development, thus reducing any “application barrier to entry” that might impede Microsoft’s ability to compete with its much larger rivals.

Moreover, one of the most interesting things about the announcement is that it will enable Android and iOS apps to run not only on Windows phones, but also on Windows computers. Some 1.3 billion PCs run Windows. Forget Windows’ tiny share of mobile phone OS’s; that massive potential PC market (of which Microsoft still has 91 percent) presents an enormous ready-made market for mobile app developers that won’t be ignored.

It also points up the increasing absurdity of compartmentalizing these markets for antitrust purposes. As the relevant distinctions between mobile and desktop markets break down, the idea of Google (or any other company) “leveraging its dominance” in one market to monopolize a “neighboring” or “related” market is increasingly unsustainable. As I wrote earlier this week:

Mobile and social media have transformed search, too…. This revolution has migrated to the computer, which has itself become “app-ified.” Now there are desktop apps and browser extensions that take users directly to Google competitors such as Kayak, eBay and Amazon, or that pull and present information from these sites.

In the end, intentionally or not, Microsoft is (again) undermining its own case. And it is doing so by innovating and competing — those Schumpeterian concepts that were always destined to undermine antitrust cases in the high-tech sector.

If we’re lucky, Microsoft’s new initiatives are the leading edge of a sea change for Microsoft — a different and welcome mindset built on competing in the marketplace rather than at regulators’ doors.

Last week the International Center for Law & Economics, joined by TechFreedom, filed comments with the Federal Aviation Administration (FAA) in its Operation and Certification of Small Unmanned Aircraft Systems (“UAS” — i.e., drones) proceeding to establish rules for the operation of small drones in the National Airspace System.

We believe that the FAA has failed to appropriately weigh the costs and benefits, as well as the First Amendment implications, of its proposed rules.

The FAA’s proposed drone rules fail to satisfy (or even undertake) adequate cost/benefit analysis

FAA regulations are subject to Executive Order 12866, which, among other things, requires that agencies:

  • “consider incentives for innovation”;
  • “propose or adopt a regulation only upon a reasoned determination that the benefits of the intended regulation justify its costs”;
  • “base [their] decisions on the best reasonably obtainable scientific, technical, economic, and other information”; and
  • “tailor [their] regulations to impose the least burden on society.”

The FAA’s proposed drone rules fail to meet these requirements.

An important, and fundamental, problem is that the proposed rules often seem to import “scientific, technical, economic, and other information” regarding traditional manned aircraft, rather than such knowledge specifically applicable to drones and their uses — what FTC Commissioner Maureen Ohlhausen has dubbed “The Procrustean Problem with Prescriptive Regulation.”

As such, not only do the rules often fail to make sense as a practical matter, but they also simply adapt existing standards, rules and understandings promulgated for manned aircraft to regulate drones — insufficiently tailoring the rules to “impose the least burden on society.”

In some cases the rules would effectively ban obviously valuable uses outright, disregarding the rules’ effect on innovation (to say nothing of their effect on current uses of drones) without adequately defending such prohibitions as necessary to protect public safety.

Importantly, the proposed rules would effectively prohibit the use of commercial drones for long-distance services (like package delivery and scouting large agricultural plots) and for uses in populated areas — undermining what may well be drones’ most economically valuable uses.

As our comments note:

By prohibiting UAS operation over people who are not directly involved in the drone’s operation, the rules dramatically limit the geographic scope in which UAS may operate, essentially limiting commercial drone operations to unpopulated or extremely sparsely populated areas. While that may be sufficient for important agricultural and forestry uses, for example, it effectively precludes all possible uses in more urban areas, including journalism, broadcasting, surveying, package delivery and the like. Even in nonurban areas, such a restriction imposes potentially insurmountable costs.

Mandating that operators not fly over other individuals not involved in the UAS operation is, in fact, the nail in the coffin of drone deliveries, an industry that is likely to offer a significant fraction of this technology’s potential economic benefit. Imposing such a blanket ban thus improperly ignores the important “incentives for innovation” suggested by Executive Order 12866 without apparent corresponding benefit.

The FAA’s proposed drone rules fail under First Amendment scrutiny

The FAA’s failure to tailor the rules according to an appropriate analysis of their costs and benefits also causes them to violate the First Amendment. Without proper tailoring based on the unique technological characteristics of drones and a careful assessment of their likely uses, the rules are considerably more broad than the Supreme Court’s “time, place and manner” standard would allow.

Several of the rules constitute a de facto ban on most — indeed, nearly all — of the potential uses of drones that most clearly involve the collection of information and/or the expression of speech protected by the First Amendment. As we note in our comments:

While the FAA’s proposed rules appear to be content-neutral, and will thus avoid the most-exacting Constitutional scrutiny, the FAA will nevertheless have a difficult time demonstrating that some of them are narrowly drawn and adequately tailored time, place, and manner restrictions.

Indeed, many of the rules likely amount to a prior restraint on protected commercial and non-commercial activity, both for obvious existing applications like news gathering and for currently unanticipated future uses.

Our friends Eli Dourado, Adam Thierer and Ryan Hagemann at Mercatus also filed comments in the proceeding, raising similar and analogous concerns:

As far as possible, we advocate an environment of “permissionless innovation” to reap the greatest benefit from our airspace. The FAA’s rules do not foster this environment. In addition, we believe the FAA has fallen short of its obligations under Executive Order 12866 to provide thorough benefit-cost analysis.

The full Mercatus comments, available here, are also recommended reading.

Read the full ICLE/TechFreedom comments here.