Archives For economics

On January 12, 2016, the California state legislature will hold a hearing on AB 463, the Pharmaceutical Cost Transparency Act of 2016. The proposed bill would require drug manufacturers to disclose sensitive information about each drug priced above a certain level. The required disclosures cover various production costs: the costs of materials and manufacturing, research and development, clinical trials, any patents or licenses acquired in the development process, and marketing and advertising. Manufacturers would also be required to disclose a record of any pricing changes for the drug, the total profit attributable to the drug, the manufacturer’s aid to prescription assistance programs, and the costs of projects or drugs that fail during development. Similar bills have been proposed in other states.

The stated goal of the proposed Act is to ‘make pharmaceutical pricing as transparent as the pricing in other sectors of the healthcare industry.’ However, by focusing almost exclusively on cost disclosure, the bill seemingly ignores the fact that market price is determined by both supply and demand factors. Although development and production costs are certainly important, pricing also reflects therapeutic value, market size, available substitutes, patent life, and many other considerations.

Moreover, the bill does not clarify how drug manufacturers are to account for and disclose one of the most significant costs to pharmaceutical manufacturers: the cost of failed drugs that never make it to market. Data suggests that only around 10 percent of drugs that begin clinical trials are eventually approved by the FDA. Drug companies depend on the profits from these “hits” in order to stay in business; companies must use the profits from successful drugs to subsidize the significant losses from the 90 percent of drugs that fail. AB 463 allows manufacturers to disclose the costs of failures, but it is unclear whether they may include the total losses from the 90 percent of drugs that fail, or only the losses from failed drugs developed in conjunction with the drug in question. Moreover, even though profits from successful drugs are necessary to subsidize failures, AB 463 is silent on whether the losses from failures can be included in profit calculations.

It’s also worth pointing out that any evaluation of drug manufacturers’ profits should recognize the basic risk-return tradeoff. In order to willingly incur risk—and a 90 percent failure rate of drugs in development is a significant risk—investors and companies demand profits or returns greater than the return on less risky endeavors. That is, if investors or companies can make a 5% return on a safe, predictable investment that has little variation in returns, why would they ever engage in a risky endeavor (especially one with a 90% failure rate) unless they can earn a substantially higher return? The market resolves these issues by compensating risky endeavors with a higher expected return. Thus, we should expect companies engaged in the risky business of drug development to earn higher profits than firms engaged in more conservative lines of business.
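The arithmetic behind this risk-return argument is easy to make concrete. The 90 percent failure rate and the 5% safe return come from the discussion above; the assumption that a failed drug loses the entire investment is hypothetical, chosen only to illustrate the point:

```python
# Break-even return a successful drug must earn so that the *expected*
# return on drug development matches a safe 5% alternative.
# The 90% failure rate and 5% safe return are cited in the text; the
# total-loss assumption on failures is hypothetical.

p_fail = 0.90          # failure rate of drugs in development
r_safe = 0.05          # return on the safe, predictable investment
r_fail = -1.00         # assumed: a failed drug loses the whole investment

# Expected return = p_success * r_success + p_fail * r_fail.
# Set it equal to r_safe and solve for r_success:
p_success = 1 - p_fail
r_success = (r_safe - p_fail * r_fail) / p_success

print(f"Required return on each successful drug: {r_success:.0%}")
```

Even a risk-neutral investor would need roughly a 950% return on each hit just to match the safe 5% alternative; a risk-averse investor would demand more still. That is the sense in which high profits on successful drugs are compensation for risk, not evidence of excess.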

It will also prove difficult, if not impossible, for drug manufacturers to disclose meaningful information about even the “hits,” because many of the costs that manufacturers incur are difficult to attribute to a specific drug. Much pre-clinical research is aimed at generating dozens or hundreds of possible drug candidates; how should these very expensive research costs be attributed? How should companies allocate the costs of meeting regulatory requirements, which are rarely incurred independently for each drug? And the overhead costs of operating a business with thousands of employees are likewise impossible to allocate to a specific drug. By ignoring these shared costs, AB 463 does little to illuminate the full costs to drug manufacturers.
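To see why any per-drug cost figure is so sensitive to accounting choices, consider a toy allocation of a shared research-and-overhead pool to a single drug. All of the figures and allocation bases below are hypothetical:

```python
# Toy illustration: the same shared R&D/overhead pool allocated to one
# drug under three common (and equally defensible) allocation bases.
# All figures are hypothetical.

shared_pool = 500.0        # shared pre-clinical research + overhead ($M)
drug_direct_cost = 100.0   # costs directly traceable to the drug ($M)

# Three plausible bases for the drug's share of the shared pool:
bases = {
    "per candidate screened (1 of 100)": 1 / 100,
    "per approved drug (1 of 10 in portfolio)": 1 / 10,
    "by revenue share (drug earns 40% of sales)": 0.40,
}

for name, share in bases.items():
    reported_cost = drug_direct_cost + shared_pool * share
    print(f"{name}: reported cost = ${reported_cost:.0f}M")
```

The “cost of the drug” a manufacturer would report ranges from $105M to $300M depending on an essentially arbitrary allocation choice, which is exactly why a mandated per-drug cost disclosure illuminates so little.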

Instead of providing useful information to make drug pricing more transparent, AB 463 will impose extensive legal and regulatory costs on businesses. The additional disclosure directly increases costs for manufacturers as they collect, prepare, and present the required data. Manufacturers will also incur significant costs as they consult with lawyers and regulators to ensure that they are meeting the disclosure requirements. These costs will ultimately be passed on to consumers in the form of higher drug prices.

Finally, disclosure of such competitively-sensitive information as that required under AB 463 risks harming competition if it gets into the wrong hands. If the confidentiality provisions prove unclear or inadequate, AB 463 may permit the broader disclosure of sensitive information to competitors. This will, in turn, facilitate collusion, raise prices, and harm the very consumers AB 463 is designed to protect.

In sum, the incomplete disclosure required under AB 463 will provide little transparency to the public. The resources could be better used to foster innovation and develop new treatments that lower total health care costs in the long run.

This blurb published yesterday by Competition Policy International nicely illustrates the problem with the growing focus on unilateral conduct investigations by the European Commission (EC) and other leading competition agencies:

EU: Qualcomm to face antitrust complaint on predatory pricing

Dec 03, 2015

The European Union is preparing an antitrust complaint against Qualcomm Inc. over suspected predatory pricing tactics that could hobble smaller rivals, according to three people familiar with the probe.

Regulators are in the final stages of preparing a so-called statement of objections, based on a complaint by a unit of Nvidia Corp., that asked the EU to act against predatory pricing for mobile-phone chips, the people said. Qualcomm designs chipsets that power most of the world’s smartphones, licensing its technology across the industry.

Qualcomm would add to a growing list of U.S. technology companies to face EU antitrust action, following probes into Google, Microsoft Corp. and Intel Corp. A statement of objections may lead to fines, capped at 10 percent of yearly global revenue, which can be avoided if a company agrees to make changes to business behavior.

Regulators are less advanced with another probe into whether the company grants payments, rebates or other financial incentives to customers in return for buying Qualcomm chipsets. Another case that focused on complaints that the company was charging excessive royalties on patents was dropped in 2009.

“Predatory pricing” complaints by competitors of successful innovators are typically aimed at hobbling efficient rivals and reducing aggressive competition.  If and when successful, such rent-seeking complaints attenuate competitive vigor (thereby disincentivizing innovation) and tend to raise prices to consumers – a result inimical to antitrust’s overarching goal, consumer welfare promotion.  Although I admittedly am not privy to the facts at issue in the Qualcomm predatory pricing investigation, Nvidia is not a firm that fits the model of a rival being decimated by economic predation (given its overall success and its rapid growth and high profitability in smartchip markets).  In this competitive and dynamic industry, the likelihood that Qualcomm could recoup short-term losses from predation through sustainable monopoly pricing following Nvidia’s exit from the market would seem to be infinitesimally small or non-existent (even assuming pricing below average variable cost or average avoidable cost could be shown).  Thus, there is good reason to doubt the wisdom of the EC’s apparent decision to issue a statement of objections to Qualcomm regarding predatory pricing for mobile phone chips.
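The recoupment point can be made concrete with a back-of-the-envelope present-value check. All figures below are hypothetical, chosen only to show why predation is irrational when rival exit is unlikely and any monopoly window is short:

```python
# Predation is only rational if post-exit monopoly profits can repay the
# losses incurred while pricing below cost. A simple expected-NPV check,
# with entirely hypothetical figures:

predation_loss = 400.0   # cumulative losses while pricing below cost ($M)
monopoly_profit = 150.0  # extra annual profit if the rival exits ($M/yr)
p_recoup = 0.2           # chance rivals actually exit and stay out
discount = 0.10          # annual discount rate
years = 5                # years before entry/innovation erodes the monopoly

# Present value of the monopoly-profit stream, weighted by the
# probability that recoupment ever materializes:
pv = sum(monopoly_profit / (1 + discount) ** t for t in range(1, years + 1))
expected_recoupment = p_recoup * pv

print(f"Expected recoupment: ${expected_recoupment:.0f}M "
      f"vs. predation losses of ${predation_loss:.0f}M")
```

Under these assumptions expected recoupment (roughly $114M) falls far short of the $400M sacrificed up front. In a dynamic chip market, where exit is improbable and any monopoly would be quickly contested, the rational-predation story collapses.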

The investigation of (presumably loyalty) payments and rebates to buyers of Qualcomm chipsets also is unlikely to enhance consumer welfare.  As a general matter, such financial incentives lower costs to loyal customers, and may promote efficiencies such as guaranteed purchase volumes under favorable terms.  Although theoretically loyalty payments might be structured to effectuate anticompetitive exclusion of competitors under very special circumstances, as a general matter such payments – which like alleged “predatory” pricing typically benefit consumers – should not be a high priority for investigation by competition agencies.  This conclusion applies in spades to chipset markets, which are characterized by vigorous competition among successful firms.  Rebate schemes in dynamic markets of this sort are almost certainly a symptom of creative, welfare-enhancing competitive vigor, rather than inefficient exclusionary behavior.

A pattern of investigating price reductions and discounting plans in highly dynamic and innovative industries, exemplified by the EC’s Qualcomm investigations summarized above, is troubling in at least two respects.

First, it creates regulatory disincentives to aggressive welfare-enhancing competition aimed at capturing the customer’s favor.  Companies like Qualcomm, after being suitably chastised, may well “take the cue” and decide to avoid future trouble by “playing nice” and avoiding innovative discounting, to the detriment of future consumers and industry efficiency.

Second, the dedication of enforcement resources to investigating discounting practices by successful firms that (based on first principles and industry conditions) are highly likely to be procompetitive points to a severe misallocation of resources by the responsible competition agencies.  Such agencies should seek to optimize the use of their scarce resources by allocating them to the highest-valued targets in welfare terms, such as anticompetitive government restraints on competition and hard-core cartel conduct.  Spending any resources on chasing down what is almost certainly efficient unilateral pricing conduct not only sends a bad signal to industry (see point one), it suggests that agency priorities are badly misplaced.  (Admittedly, a problem faced by the EC and many other competition authorities is that they are required to respond to third party complaints, but the nature of that response and the resources allocated could be better calibrated to the likely merit of such complaints.  Whether the law should be changed to grant such competition authorities broad prosecutorial discretion to ignore clearly non-meritorious complaints (such as the wide discretion enjoyed by U.S. antitrust enforcers) is beyond the scope of this commentary, and merits separate treatment.)

A proper application of decision theory and its error cost approach could help the EC and other competition enforcers avoid the problem of inefficiently chasing down procompetitive unilateral conduct.  Such an approach would focus intensively on highly welfare inimical conduct that lacks credible efficiencies (thus minimizing false positives in enforcement) that can be pursued with a relatively low expenditure of administrative costs (given the lack of credible efficiency justifications that need to be evaluated).  As indicated above, a substantial allocation of resources to hard core cartel conduct, bid rigging, and anticompetitive government-imposed market distortions (including poorly designed regulations and state aids) would be consistent with such an approach.  Relatedly, investigating single firm conduct, which is central to spurring a dynamic competitive process and is often misdiagnosed as anticompetitive (thereby imposing false positive costs), should be deemphasized.  (Obviously, even under a decision-theoretic framework, certain agency resources would continue to be devoted to mandatory merger reviews and other core legally required agency functions.)

Today the International Center for Law & Economics (ICLE) submitted an amicus brief to the Supreme Court of the United States supporting Apple’s petition for certiorari in its e-books antitrust case. ICLE’s brief was signed by sixteen distinguished scholars of law, economics and public policy, including an Economics Nobel Laureate, a former FTC Commissioner, ten PhD economists and ten professors of law (see the complete list, below).

Background

Earlier this year a divided panel of the Second Circuit ruled that Apple “orchestrated a conspiracy among [five major book] publishers to raise ebook prices… in violation of § 1 of the Sherman Act.” Significantly, the court ruled that Apple’s conduct constituted a per se unlawful horizontal price-fixing conspiracy, meaning that the procompetitive benefits of Apple’s entry into the e-books market were irrelevant to the liability determination.

Apple filed a petition for certiorari with the Supreme Court seeking review of the ruling on the question of

Whether vertical conduct by a disruptive market entrant, aimed at securing suppliers for a new retail platform, should be condemned as per se illegal under Section 1 of the Sherman Act, rather than analyzed under the rule of reason, because such vertical activity also had the alleged effect of facilitating horizontal collusion among the suppliers.

Summary of Amicus Brief

The Second Circuit’s ruling is in direct conflict with the Supreme Court’s 2007 Leegin decision, and creates a circuit split with the Third Circuit based on that court’s Toledo Mack ruling. ICLE’s brief urges the Court to review the case in order to resolve the significant uncertainty created by the Second Circuit’s ruling, particularly for the multi-sided platform companies that epitomize the “New Economy.”

As ICLE’s brief discusses, the Second Circuit committed several important errors in its ruling:

First, as the Supreme Court held in Leegin, condemnation under the per se rule is appropriate “only for conduct that would always or almost always tend to restrict competition” and “only after courts have had considerable experience with the type of restraint at issue.” Neither is true in this case. Businesses often employ one or more forms of vertical restraints to make entry viable, and the Court has blessed such conduct, categorically holding in Leegin that “[v]ertical price restraints are to be judged according to the rule of reason.”

Furthermore, the practices at issue in this case — the use of “Most-Favored Nation Clauses” in Apple’s contracts with the publishers and its adoption of the so-called “agency model” for e-book pricing — have never been reviewed by the courts in a setting like this one, let alone found to “always or almost always tend to restrict competition.” There is no support in the case law or economic literature for the proposition that agency models or MFNs used to facilitate entry by new competitors in platform markets like this one are anticompetitive.

Second, the negative consequences of the court’s ruling will be particularly acute for modern, high-technology sectors of the economy, where entrepreneurs planning to deploy new business models will now face exactly the sort of artificial deterrents that the Court condemned in Trinko: “Mistaken inferences and the resulting false condemnations are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” Absent review by the Supreme Court to correct the Second Circuit’s error, the result will be less-vigorous competition and a reduction in consumer welfare.

This case involves vertical conduct essentially indistinguishable from conduct that the Supreme Court has held to be subject to the rule of reason. But under the Second Circuit’s approach, the adoption of these sorts of efficient vertical restraints could be challenged as a per se unlawful effort to “facilitate” horizontal price fixing, significantly deterring their use. The lower court thus ignored the Supreme Court’s admonishment not to apply the antitrust laws in a way that makes the use of a particular business model “more attractive based on the per se rule” rather than on “real market conditions.”

Third, the court based its decision that per se review was appropriate largely on the fact that e-book prices increased following Apple’s entry into the market. But, contrary to the court’s suggestion, it has long been settled that such price increases do not make conduct per se unlawful. In fact, the Supreme Court has held that the per se rule is inappropriate where, as here, “prices can be increased in the course of promoting procompetitive effects.”  

Competition occurs on many dimensions other than just price; higher prices alone don’t necessarily suggest decreased competition or anticompetitive effects. Instead, higher prices may accompany welfare-enhancing competition on the merits, resulting in greater investment in product quality, reputation, innovation or distribution mechanisms.

The Second Circuit presumed that Amazon’s e-book prices before Apple’s entry were competitive, and thus that the price increases were anticompetitive. But there is no support in the record for that presumption, and it is not compelled by economic reasoning. In fact, it is at least as likely that the change in Amazon’s prices reflected the fact that Amazon’s business model pre-entry resulted in artificially low prices, and that the price increases following Apple’s entry were the product of a more competitive market.

Previous commentary on the case

For my previous writing and commentary on the case, see:

  • “The Second Circuit’s Apple e-books decision: Debating the merits and the meaning,” American Bar Association debate with Fiona Scott-Morton, DOJ Chief Economist during the Apple trial, and Mark Ryan, the DOJ’s lead litigator in the case, recording here
  • Why I think the Apple e-books antitrust decision will (or at least should) be overturned, Truth on the Market, here
  • Why I think the government will have a tough time winning the Apple e-books antitrust case, Truth on the Market, here
  • The procompetitive story that could undermine the DOJ’s e-books antitrust case against Apple, Truth on the Market, here
  • How Apple can defeat the DOJ’s e-book antitrust suit, Forbes, here
  • The US e-books case against Apple: The procompetitive story, special issue of Concurrences on “E-books and the Boundaries of Antitrust,” here
  • Amazon vs. Macmillan: It’s all about control, Truth on the Market, here

Other TOTM authors have also weighed in. See, e.g.:

  • The Second Circuit Misapplies the Per Se Rule in U.S. v. Apple, Alden Abbott, here
  • The Apple E-Book Kerfuffle Meets Alfred Marshall’s Principles of Economics, Josh Wright, here
  • Apple and Amazon E-Book Most Favored Nation Clauses, Josh Wright, here

Amicus Signatories

  • Babette E. Boliek, Associate Professor of Law, Pepperdine University School of Law
  • Henry N. Butler, Dean and Professor of Law, George Mason University School of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, School of Management, University of Texas-Dallas
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Scott E. Masten, Professor of Business Economics & Public Policy, Stephen M. Ross School of Business, The University of Michigan
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Thomas D. Morgan, Professor Emeritus, George Washington University Law School
  • David S. Olson, Associate Professor of Law, Boston College Law School
  • Joanna Shepherd, Professor of Law, Emory University School of Law
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics,  The George L. Argyros School of Business and Economics and Professor of Economics and Law, Dale E. Fowler School of Law, Chapman University
  • Michael E. Sykuta, Associate Professor, Division of Applied Social Sciences, University of Missouri-Columbia
  • Alex Tabarrok, Bartley J. Madden Chair in Economics at the Mercatus Center and Professor of Economics, George Mason University
  • David J. Teece, Thomas W. Tusher Professor in Global Business and Director, Center for Global Strategy and Governance, Haas School of Business, University of California Berkeley
  • Alexander Volokh, Associate Professor of Law, Emory University School of Law
  • Joshua D. Wright, Professor of Law, George Mason University School of Law

I received word today that Douglass North passed away yesterday at the age of 95 (obit here). Professor North shared the Nobel Prize in Economics with Robert Fogel in 1993 for his work in economic history on the role of institutions in shaping economic development and performance.

Doug was one of my first professors in graduate school at Washington University. Many of us in our first year crammed into Doug’s economic history class for fear that he might retire before we got the chance to study under him. Little did we expect that he would continue teaching into his 80s. The text for our class was the pre-publication manuscript of his book, Institutions, Institutional Change and Economic Performance. Doug’s course offered an interesting juxtaposition to the traditional neoclassical microeconomics course for first-year PhD students. His work challenged the simplifying assumptions of the neoclassical system and shed a whole new light on understanding economic history, development and performance. I still remember that day in October 1993 when the department was abuzz with the announcement that Doug had received the Nobel Prize. It was affirming and inspiring.

As I started work on my dissertation, I had hoped to incorporate a historical component on the early development of crude oil futures trading in the 1930s so I could get Doug involved on my committee. Unfortunately, there was not enough information still available to provide any analysis (there was one news reference to a new crude futures exchange, but nothing more, and the historical records of the NY Mercantile Exchange had been lost in a fire), and I had to focus solely on the deregulatory period of the late 1970s and early 1980s. I remember joking at one of our economic history workshops that I wasn’t sure if it counted as economic history since it happened during Doug’s lifetime.

Doug was one of the founding conspirators for the International Society for New Institutional Economics (now the Society for Institutional & Organizational Economics) in 1997, along with Ronald Coase and Oliver Williamson. Although the three had strong differences of opinions concerning certain aspects of their respective theoretical approaches, they understood the generally complementary nature of their work and its importance not just for the economic profession, but for understanding how societies and organizations perform and evolve and the role institutions play in that process.

The opportunity to work around these individuals, particularly with North and Coase, strongly shaped and influenced my understanding not only of economics, but of why a broader perspective of economics is so important for understanding the world around us. That experience profoundly affected my own research interests and my teaching of economics. Some of Doug’s papers continue to play an important role in courses I teach on economic policy. Students, especially international students, continue to be inspired by his explanation of the roles of institutions, how they affect markets and societies, and the forces that lead to institutional change.

As we prepare to celebrate Thanksgiving in the States, Doug’s passing is a reminder of how much I have to be thankful for over my career. I’m grateful for having had the opportunity to know and to work with Doug. I’m grateful that we had an opportunity to bring him to Mizzou in 2003 for our CORI Seminar series, at which he spoke on Understanding the Process of Economic Change (the title of his next book at the time). And I’m especially thankful for the influence he had on my understanding of economics and that his ideas will continue to shape economic thinking and economic policy for years to come.

Last June, in Michigan v. EPA, the Supreme Court commendably recognized cost-benefit analysis as critical to any reasoned evaluation of regulatory proposals by federal agencies.  (For more on the merits and limitations of this holding, see my June 29 blog.)  The White House (Office of Management and Budget) office that evaluates proposed federal regulations, the Office of Information and Regulatory Affairs (OIRA), does not, however, currently assess independent agencies’ regulations (the Heritage Foundation has argued that independent agencies should be subjected to Executive Branch regulatory review).  This is most unfortunate, because the economic impact of independent agencies’ regulations (such as those promulgated by the Federal Communications Commission and the Consumer Financial Protection Bureau, among many other “independent” entities) is enormous.

Recent research lends strong support to the case for OIRA review of independent agency regulations.  As former OIRA Administrator Susan Dudley (currently Director of the George Washington University Regulatory Studies Center) explained in recent testimony before the Senate Homeland Security and Government Affairs Committee, independent agencies have done an extremely poor job in evaluating the economic effects of their regulatory initiatives:

“The Administrative Conference of the United States recommended in 2013 that independent regulatory agencies adopt more transparent and rigorous regulatory analyses practices for major rules.  OIRA observed in its most recent regulatory report to Congress that “the independent agencies still continue to struggle in providing monetized estimates of benefits and costs of regulation.”  According to available government data, more than 40 percent of the rules developed by independent agencies over the last 10 years provided no information on either the costs or the benefits expected from their implementation.”

This poor record provides strong justification for legislative proposals such as the Independent Agency Regulatory Analysis Act of 2015 (S. 1607), which explicitly authorizes presidents to require independent regulatory agencies to comply with regulatory analysis requirements.  It also lends further support to congressional proposals (such as the REINS Act, which passed the House in August 2015) that would require congressional approval of new “major” regulations promulgated by federal agencies, including independent agencies.  For a more extensive discussion of the costs of overregulation and needed regulatory reforms, see the Heritage Foundation’s memorandum “Red Tape Rising: Six Years of Escalating Regulation Under Obama.”

There is also a substantial constitutional argument that pursuant to the U.S. Constitution’s Executive Vesting Clause (Article II, Section 1, Clause 1) and Take Care Clause (Article II, Section 3), the President could direct that OIRA review independent agencies’ regulatory proposals, but an assessment of that interesting proposition is beyond the scope of this commentary.

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market aren’t undermined by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the health care markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what is an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the Medical Loss Ratio requirement that insurance companies spend 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both health care providers and health insurance companies.

Further, some of the ACA’s effects have decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government pays more for Medicare services delivered by a hospital than for the same services delivered by an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if it is correlated with sub-optimal outcomes, it is consolidation causing those outcomes, rather than something else (like the ACA) that is causing both the sub-optimal outcomes as well as consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad” may undermine both of those objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better quality care and cost savings passed on to consumers. Increased entry, at least in part due to the ACA in most of the markets in which the merging companies will compete, along with integrated health networks themselves entering and threatening entry into insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.

Conclusion

In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

Last week, FCC General Counsel Jonathan Sallet pulled back the curtain on the FCC staff’s analysis behind its decision to block Comcast’s acquisition of Time Warner Cable. As the FCC staff sets out on its reported Rainbow Tour to reassure regulated companies that it’s not “hostile to the industries it regulates,” Sallet’s remarks suggest it will have an uphill climb. Unfortunately, the staff’s analysis appears to have been unduly speculative, disconnected from critical market realities, and decidedly biased — not characteristics in a regulator that tend to offer much reassurance.

Merger analysis is inherently speculative, but, as courts have repeatedly had occasion to find, the FCC has a penchant for stretching speculation beyond the breaking point, adopting theories of harm that are vaguely possible, even if unlikely and inconsistent with past practice, and poorly supported by empirical evidence. The FCC’s approach here seems to fit this description.

The FCC’s fundamental theory of anticompetitive harm

To begin with, as he must, Sallet acknowledged that there was no direct competitive overlap in the areas served by Comcast and Time Warner Cable, and no consumer would have seen the number of providers available to her changed by the deal.

But the FCC staff viewed this critical fact as “not outcome determinative.” Instead, Sallet explained that the staff’s opposition was based primarily on a concern that the deal might enable Comcast to harm “nascent” OVD competitors in order to protect its video (MVPD) business:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively, especially through the use of new business models.

The justification for the concern boiled down to an assumption that the addition of TWC’s subscriber base would be sufficient to render an otherwise too-costly anticompetitive campaign against OVDs worthwhile:

Without the merger, a company taking action against OVDs for the benefit of the Pay TV system as a whole would incur costs but gain additional sales – or protect existing sales — only within its footprint. But the combined entity, having a larger footprint, would internalize more of the external “benefits” provided to other industry members.

The FCC theorized that, by acquiring a larger footprint, Comcast would gain enough bargaining power and leverage, as well as the means to profit from an exclusionary strategy, leading it to employ a range of harmful tactics — such as impairing the quality/speed of OVD streams, imposing data caps, limiting OVD access to TV-connected devices, imposing higher interconnection fees, and saddling OVDs with higher programming costs. It’s difficult to see how such conduct would be permitted under the FCC’s Open Internet Order/Title II regime, but, nevertheless, the staff apparently believed that Comcast would possess a powerful “toolkit” with which to harm OVDs post-transaction.

Comcast’s share of the MVPD market wouldn’t have changed enough to justify the FCC’s purported fears

First, the analysis turned on what Comcast could and would do if it were larger. But Comcast was already the largest ISP and MVPD (now second largest MVPD, post AT&T/DIRECTV) in the nation, and presumably it has approximately the same incentives and ability to disadvantage OVDs today.

In fact, there’s no reason to believe that the growth of Comcast’s MVPD business would cause any material change in its incentives with respect to OVDs. Whatever nefarious incentives the merger allegedly would have created by increasing Comcast’s share of the MVPD market (which is where the purported benefits in the FCC staff’s anticompetitive story would be realized), those incentives would be proportional to the size of increase in Comcast’s national MVPD market share — which, here, would be about eight percentage points: from 22% to under 30% of the national market.

It’s difficult to believe that Comcast would gain the wherewithal to engage in this costly strategy by adding such a relatively small fraction of the MVPD market (which would still leave other MVPDs serving fully 70% of the market to reap the purported benefits instead of Comcast), but wouldn’t have it at its current size – and there’s no evidence that it has ever employed such strategies with its current market share.

It bears highlighting that the D.C. Circuit has already twice rejected FCC efforts to impose a 30% market cap on MVPDs, based on the Commission’s inability to demonstrate that a greater-than-30% share would create competitive problems, especially given the highly dynamic nature of the MVPD market. In vacating the FCC’s most recent effort to do so in 2009, the D.C. Circuit was resolute in its condemnation of the agency, noting:

In sum, the Commission has failed to demonstrate that allowing a cable operator to serve more than 30% of all [MVPD] subscribers would threaten to reduce either competition or diversity in programming.

The extent of competition and the amount of available programming (including original programming distributed by OVDs themselves) has increased substantially since 2009; this makes the FCC’s competitive claims even less sustainable today.

It’s damning enough to the FCC’s case that there is no marketplace evidence of such conduct or its anticompetitive effects in today’s market. But it’s truly impossible to square the FCC’s assertions about Comcast’s anticompetitive incentives with the fact that, over the past decade, Comcast has made massive investments in broadband, steadily increased broadband speeds, and freely licensed its programming, among other things that have served to enhance OVDs’ long-term viability and growth. Chalk it up to the threat of regulatory intervention or corporate incompetence if you can’t believe that competition alone could be responsible for this largesse, but, whatever the reason, the FCC staff’s fears appear completely unfounded in a marketplace not significantly different than the landscape that would have existed post-merger.

OVDs aren’t vulnerable, and don’t need the FCC’s “help”

After describing the “new entrants” in the market — such unfamiliar and powerless players as Dish, Sony, HBO, and CBS — Sallet claimed that the staff was principally animated by the understanding that

Entrants are particularly vulnerable when competition is nascent. Thus, staff was particularly concerned that this transaction could damage competition in the video distribution industry.

Sallet’s description of OVDs makes them sound like struggling entrepreneurs working in garages. But, in fact, OVDs have radically reshaped the media business and wield enormous clout in the marketplace.

Netflix, for example, describes itself as “the world’s leading Internet television network with over 65 million members in over 50 countries.” New services like Sony Vue and Sling TV are affiliated with giant, well-established media conglomerates. And whatever new offerings emerge from the FCC-approved AT&T/DIRECTV merger will be as well-positioned as any in the market.

In fact, we already know that the concerns of the FCC are off-base because they are of a piece with the misguided assumptions that underlie the Chairman’s recent NPRM to rewrite the MVPD rules to “protect” just these sorts of companies. But the OVDs themselves — the ones with real money and their competitive futures on the line — don’t see the world the way the FCC does, and they’ve resolutely rejected the Chairman’s proposal. Notably, the proposed rules would “protect” these services from exactly the sort of conduct that Sallet claims would have been a consequence of the Comcast-TWC merger.

If they don’t want or need broad protection from such “harms” in the form of revised industry-wide rules, there is surely no justification for the FCC to throttle a merger based on speculation that the same conduct could conceivably arise in the future.

The realities of the broadband market post-merger wouldn’t have supported the FCC’s argument, either

While a larger Comcast might be in a position to realize more of the benefits from the exclusionary strategy Sallet described, it would also incur more of the costs — likely in direct proportion to the increased size of its subscriber base.

Think of it this way: To the extent that an MVPD can possibly constrain an OVD’s scope of distribution for programming, doing so also necessarily makes the MVPD’s own broadband offering less attractive, forcing it to incur a cost that would increase in proportion to the size of the distributor’s broadband market. In this case, as noted, Comcast would have gained MVPD subscribers — but it would have also gained broadband subscribers. In a world where cable is consistently losing video subscribers (as Sallet acknowledged), and where broadband offers higher margins and faster growth, it makes no economic sense that Comcast would have valued the trade-off the way the FCC claims it would have.

Moreover, in light of the existing conditions imposed on Comcast under the Comcast/NBCU merger order from 2011 (which last for a few more years) and the restrictions adopted in the Open Internet Order, Comcast’s ability to engage in the sort of exclusionary conduct described by Sallet would be severely limited, if not non-existent. Nor, of course, is there any guarantee that former or would-be OVD subscribers would choose to subscribe to, or pay more for, any MVPD in lieu of OVDs. Meanwhile, many of the relevant substitutes in the MVPD market (like AT&T and Verizon FiOS) also offer broadband services – thereby increasing the costs that would be incurred in the broadband market even more, as many subscribers would shift not only their MVPD, but also their broadband service, in response to Comcast degrading OVDs.

And speaking of the Open Internet Order — wasn’t that supposed to prevent ISPs like Comcast from acting on their alleged incentives to impede the quality of, or access to, edge providers like OVDs? Why is merger enforcement necessary to accomplish the same thing once Title II and the rest of the Open Internet Order are in place? And if the argument is that the Open Internet Order might be defeated, aside from the completely speculative nature of such a claim, why wouldn’t a merger condition that imposed the same constraints on Comcast – as was done in the Comcast/NBCU merger order by imposing the former net neutrality rules on Comcast – be perfectly sufficient?

While the FCC staff analysis accepted as true (again, contrary to current marketplace evidence) that a bigger Comcast would have more incentive to harm OVDs post-merger, it rejected arguments that there could be countervailing benefits to OVDs and others from this same increase in scale. Thus, things like incremental broadband investments and speed increases, a larger Wi-Fi network, and greater business services market competition – things that Comcast is already doing and would have done on a greater and more-accelerated scale in the acquired territories post-transaction – were deemed insufficient to outweigh the expected costs of the staff’s entirely speculative anticompetitive theory.

In reality, however, not only OVDs, but consumers – and especially TWC subscribers – would have benefited from the merger through access to Comcast’s faster broadband speeds, its new investments, and its superior video offerings on the X1 platform, among other things. Many low-income families would have benefited from expansion of Comcast’s Internet Essentials program, and many businesses would have benefited from the addition of a more effective competitor to the incumbent providers that currently dominate the business services market. Yet these and other verifiable benefits were given short shrift in the agency’s analysis because they “were viewed by staff as incapable of outweighing the potential harms.”

The assumptions underlying the FCC staff’s analysis of the broadband market are arbitrary and unsupportable

Sallet’s claim that the combined firm would have 60% of all high-speed broadband subscribers in the U.S. necessarily assumes a national broadband market measured at 25 Mbps or higher, which is a red herring.

The FCC has not explained why 25 Mbps is a meaningful benchmark for antitrust analysis. The FCC itself endorsed a 10 Mbps baseline for its Connect America fund last December, noting that over 70% of current broadband users subscribe to speeds less than 25 Mbps, even in areas where faster speeds are available. And streaming online video, the most oft-cited reason for needing high bandwidth, doesn’t require 25 Mbps: Netflix says that 5 Mbps is all that’s required for an HD stream, and the same goes for Amazon (3.5 Mbps) and Hulu (1.5 Mbps).

What’s more, by choosing an arbitrary, faster speed to define the scope of the broadband market (in an effort to assert the non-competitiveness of the market, and thereby justify its broadband regulations), the agency has – without proper analysis or grounding, in my view – unjustifiably shrunk the size of the relevant market. But, as it happens, doing so also shrinks the size of the increase in “national market share” that the merger would have brought about.

Recall that the staff’s theory was premised on the idea that the merger would give Comcast control over enough of the broadband market that it could unilaterally impose costs on OVDs sufficient to impair their ability to reach or sustain minimum viable scale. But Comcast would have added only one percent of this invented “market” as a result of the merger. It strains credulity to assert that there could be any transaction-specific harm from an increase in market share equivalent to a rounding error.

In any case, basing its rejection of the merger on a manufactured 25 Mbps relevant market creates perverse incentives and will likely do far more to harm OVDs than realization of even the staff’s worst fears about the merger ever could have.

The FCC says it wants higher speeds, and it wants firms to invest in faster broadband. But here Comcast did just that, and then was punished for it. Rather than acknowledging Comcast’s ongoing broadband investments as strong indication that the FCC staff’s analysis might be on the wrong track, the FCC leadership simply sidestepped that inconvenient truth by redefining the market.

The lesson is that if you make your product too good, you’ll end up with an impermissibly high share of the market you create and be punished for it. This can’t possibly promote the public interest.

Furthermore, the staff’s analysis of competitive effects even in this ersatz market isn’t likely supportable. As noted, most subscribers access OVDs on connections that deliver content at speeds well below the invented 25 Mbps benchmark, and they pay the same prices for OVD subscriptions as subscribers who receive their content at 25 Mbps. Confronted with the choice to consume content at 25 Mbps or 10 Mbps (or less), the majority of consumers voluntarily opt for slower speeds — and they purchase service from Netflix and other OVDs in droves, nonetheless.

The upshot? Contrary to the implications on which the staff’s analysis rests, if Comcast were to somehow “degrade” OVD content on the 25 Mbps networks so that it was delivered with characteristics of video content delivered over a 10-Mbps network, real-world, observed consumer preferences suggest it wouldn’t harm OVDs’ access to consumers at all. This is especially true given that OVDs often have a global focus and reach (again, Netflix has 65 million subscribers in over 50 countries), making any claims that Comcast could successfully foreclose them from the relevant market even more suspect.

At the same time, while the staff apparently viewed the broadband alternatives as “limited,” the reality is that Comcast and other broadband providers are surrounded by capable competitors, including, among others, AT&T, Verizon, CenturyLink, Google Fiber, many advanced VDSL and fiber-based Internet service providers, and high-speed mobile wireless providers. The FCC understated the complex impact of this robust, dynamic, and ever-increasing competition, and its analysis entirely ignored rapidly growing mobile wireless broadband competition.

Finally, as noted, Sallet claimed that the staff determined that merger conditions would be insufficient to remedy its concerns, without any further explanation. Yet the Commission identified similar concerns about OVDs in both the Comcast/NBCUniversal and AT&T/DIRECTV transactions, and adopted remedies to address those concerns. We know the agency is capable of drafting behavioral conditions, and we know they have teeth, as demonstrated by prior FCC enforcement actions. It’s hard to understand why similar, adequate conditions could not have been fashioned for this transaction.

In the end, while I appreciate Sallet’s attempt to explain the FCC’s decision to reject the Comcast/TWC merger, based on the foregoing I’m not sure that Comcast could have made any argument or showing that would have dissuaded the FCC from challenging the merger. Comcast presented a strong economic analysis answering the staff’s concerns discussed above, all to no avail. It’s difficult to escape the conclusion that this was a politically-driven result, and not one rigorously based on the facts or marketplace reality.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. But even in competition policy, the Commission frequently uses economics — but it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business and social norms caution skepticism, as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Imagine

totmauthor —  27 August 2015

by Michael Baye, Bert Elwert Professor of Business at the Kelley School of Business, Indiana University, and former Director of the Bureau of Economics, FTC

Imagine a world where competition and consumer protection authorities base their final decisions on scientific evidence of potential harm. Imagine a world where well-intentioned policymakers do not use “possibility theorems” to rationalize decisions that are, in reality, based on idiosyncratic biases or beliefs. Imagine a world where “harm” is measured using a scientific yardstick that accounts for the economic benefits and costs of attempting to remedy potentially harmful business practices.

Many economists—conservatives and liberals alike—have the luxury of pondering this world in the safe confines of ivory towers; they publish in journals read by a like-minded audience that also relies on the scientific method.

Congratulations and thanks, Josh, for superbly articulating these messages in the more relevant—but more hostile—world outside of the ivory tower.

To those of you who might disagree with a few (or all) of Josh’s decisions, I challenge you to examine honestly whether your views on a particular matter are based on objective (scientific) evidence, or on your personal, subjective beliefs. Evidence-based policymaking can be discomforting: It sometimes induces those with philosophical biases in favor of intervention to make laissez-faire decisions, and it sometimes induces people with a bias for non-intervention to make decisions to intervene.

by Berin Szoka, President, TechFreedom

Josh Wright will doubtless be remembered for transforming how the FTC polices competition. Between finally defining Unfair Methods of Competition (UMC), and his twelve dissents and multiple speeches about competition matters, he re-grounded competition policy in the error-cost framework: weighing not only costs against benefits, but also the likelihood of getting it wrong against the likelihood of getting it right.

Yet Wright may be remembered as much for what he started as what he finished: reforming the Commission’s Unfair and Deceptive Acts and Practices (UDAP) work. His consumer protection work is relatively slender: four dissents on high tech matters plus four relatively brief concurrences and one dissent on more traditional advertising substantiation cases. But together, these offer all the building blocks of an economic, error-cost-based approach to consumer protection. All that remains is for another FTC Commissioner to pick up where Wright left off.

Apple: Unfairness & Cost-Benefit Analysis

In January 2014, Wright issued a blistering, 17-page dissent from the Commission’s decision to bring, and settle, an enforcement action against Apple regarding the design of its app store. Wright dissented, not necessarily from the conclusion, but from the methodology by which the Commission arrived there. In essence, he argued for an error-cost approach to unfairness:

The Commission, under the rubric of “unfair acts and practices,” substitutes its own judgment for a private firm’s decisions as to how to design its product to satisfy as many users as possible, and requires a company to revamp an otherwise indisputably legitimate business practice. Given the apparent benefits to some consumers and to competition from Apple’s allegedly unfair practices, I believe the Commission should have conducted a much more robust analysis to determine whether the injury to this small group of consumers justifies the finding of unfairness and the imposition of a remedy.

…. although Apple’s allegedly unfair act or practice has harmed some consumers, I do not believe the Commission has demonstrated the injury is substantial. More importantly, any injury to consumers flowing from Apple’s choice of disclosure and billing practices is outweighed considerably by the benefits to competition and to consumers that flow from the same practice.

The majority insisted that the burden on consumers or Apple from its remedy “is de minimis,” and therefore “it was unnecessary for the Commission to undertake a study of how consumers react to different disclosures before issuing its complaint against Apple, as Commissioner Wright suggests.”

Wright responded: “Apple has apparently determined that most consumers do not want to experience excessive disclosures or to be inconvenienced by having to enter their passwords every time they make a purchase.” In essence, he argued that the FTC should not presume to know better than Apple how to manage the subtle trade-offs between convenience and usability.

Wright was channeling Hayek’s famous quip: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” The last thing the FTC should be doing is designing digital products — even by hovering over Apple’s shoulder.

The Data Broker Report

Wright next took the Commission to task for the lack of economic analysis in its May 2014 report, “Data Brokers: A Call for Transparency and Accountability.” In just four footnotes, Wright extended his analysis from Apple. For example:

Footnote 85: Commissioner Wright agrees that Congress should consider legislation that would provide for consumer access to the information collected by data brokers. However, he does not believe that at this time there is enough evidence that the benefits to consumers of requiring data brokers to provide them with the ability to opt out of the sharing of all consumer information for marketing purposes outweighs the costs of imposing such a restriction. Finally… he believes that the Commission should engage in a rigorous study of consumer preferences sufficient to establish that consumers would likely benefit from such a portal prior to making such a recommendation.

Footnote 88: Commissioner Wright believes that in enacting statutes such as the Fair Credit Reporting Act, Congress undertook efforts to balance [costs and benefits]. In the instant case, Commissioner Wright is wary of extending FCRA-like coverage to other uses and categories of information without first performing a more robust balancing of the benefits and costs associated with imposing these requirements.

The Internet of Things Report

This January, in a 4-page dissent from the FTC’s staff report on “The Internet of Things: Privacy and Security in a Connected World,” Wright lamented that the report neither represented serious economic analysis of the issues discussed nor synthesized the FTC’s workshop on the topic:

A record that consists of a one-day workshop, its accompanying public comments, and the staff’s impressions of those proceedings, however well-intended, is neither likely to result in a representative sample of viewpoints nor to generate information sufficient to support legislative or policy recommendations.

His attack on the report’s methodology was blistering:

The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs. Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace…. I support the well-established Commission view that companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of “security by design” and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.

Ouch.

Nomi: Deception & Materiality Analysis

In April, Wright turned his analytical artillery from unfairness to deception, long the less controversial half of UDAP. In a five-page dissent, Wright accused the Commission of essentially dispensing with the core limiting principle of the 1983 Deception Policy Statement: materiality. As Wright explained:

The materiality inquiry is critical because the Commission’s construct of “deception” uses materiality as an evidentiary proxy for consumer injury…. Deception causes consumer harm because it influences consumer behavior — that is, the deceptive statement is one that is not merely misleading in the abstract but one that causes consumers to make choices to their detriment that they would not have otherwise made. This essential link between materiality and consumer injury ensures the Commission’s deception authority is employed to deter only conduct that is likely to harm consumers and does not chill business conduct that makes consumers better off.

As in Apple, Wright did not argue that there might not be a role for the FTC; merely that the FTC had failed to justify bringing, let alone settling, an enforcement action without establishing that the key promise at issue — to provide in-store opt-out — was material.

The Chamber Speech: A Call for Economic Analysis

In May, Wright gave a speech to the Chamber of Commerce on “How to Regulate the Internet of Things Without Harming its Future: Some Do’s and Don’ts”:

Perhaps it is because I am an economist who likes to deal with hard data, but when it comes to data and privacy regulation, the tendency to rely upon anecdote to motivate policy is a serious problem. Instead of developing a proper factual record that documents cognizable and actual harms, regulators can sometimes be tempted merely to explore anecdotal and other hypothetical examples and end up just offering speculations about the possibility of harm.

And on privacy in particular:

What I have seen instead is what appears to be a generalized apprehension about the collection and use of data — whether or not the data is actually personally identifiable or sensitive — along with a corresponding, and arguably crippling, fear about the possible misuse of such data.  …. Any sensible approach to regulating the collection and use of data will take into account the risk of abuses that will harm consumers. But those risks must be weighed with as much precision as possible, as is the case with potential consumer benefits, in order to guide sensible policy for data collection and use. The appropriate calibration, of course, turns on our best estimates of how policy changes will actually impact consumers on the margin….

Wright concedes that the “vast majority of work that the Consumer Protection Bureau performs simply does not require significant economic analysis because they involve business practices that create substantial risk of consumer harm but little or nothing in the way of consumer benefits.” Yet he notes that the Internet has made the need for cost-benefit analysis far more acute, at least where conduct is ambiguous as to its effects on consumers, as in Apple, to avoid “squelching innovation and depriving consumers of these benefits.”

The Wrightian Reform Agenda for UDAP Enforcement

Wright left all the building blocks his successor will need to bring “Wrightian” reform to how the Bureau of Consumer Protection works:

  1. Wright’s successor should work to require economic analysis for consent decrees, as Wright proposed in his last major address as a Commissioner. BE might not need to issue a statement at all in run-of-the-mill deception cases, but it should certainly have to say something about unfairness cases.
  2. The FTC needs to systematically assess its enforcement process to understand the incentives causing companies to settle UDAP cases nearly every time — resulting in what Chairman Ramirez and Commissioner Brill frequently call the FTC’s “common law of consent decrees.”
  3. As Wright says in his Nomi dissent, “While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint.” This point should be uncontroversial, yet the Commission has never addressed it. Wright’s successor (and the FTC) should, at a minimum, propose a standard for settling cases.
  4. Just as Josh succeeded in getting the FTC to issue a UMC policy statement, his successor should re-assess the FTC’s two UDAP policy statements. Wright’s successor needs to make the case for finally codifying the DPS — and ensuring that the FTC stops bypassing materiality, as in Nomi.
  5. The Commission should develop a rigorous methodology for each of the required elements of unfairness and deception to justify bringing cases (or making report recommendations). This will be a great deal harder than merely attacking the lack of such methodology in dissents.
  6. The FTC has, in recent years, increasingly used reports to make de facto policy — by inventing what Wright calls, in his Chamber speech, “slogans and catchphrases” like “privacy by design,” and then using them as boilerplate requirements for consent decrees; by pressuring companies into adopting the FTC’s best practices; by calling for legislation; and so on. At a minimum, these reports must be grounded in careful economic analysis.
  7. The Commission should apply far greater rigor in setting standards for substantiating claims about health benefits. In two dissents, Genelink et al and HCG Platinum, Wright demolished arguments for a clear, bright line requiring two randomized clinical trials, and made the case for “a more flexible substantiation requirement” instead.

Conclusion: Big Shoes to Fill

It’s a testament to Wright’s analytical clarity that he managed to say so much about consumer protection in so few words. That his UDAP work has received so little attention, relative to his competition work, says just as much about the far greater need for someone to do for consumer protection what Wright did for competition enforcement and policy at the FTC.

Wright’s successor, if she’s going to finish what Wright started, will need something approaching Wright’s sheer intellect, his deep internalization of the error-costs approach, and his knack for brokering bipartisan compromise around major issues — plus the kind of passion for UDAP matters Wright had for competition matters. And, of course, that person needs to be able to continue his legacy on competition matters…

Compared to the difficulty of finding that person, actually implementing these reforms may be the easy part.