
On April 17, the Federal Trade Commission (FTC) voted three-to-two to enter into a consent agreement in In the Matter of Cardinal Health, Inc., requiring Cardinal Health to disgorge funds as part of the settlement in this monopolization case.  As dissenting Commissioners Josh Wright and Maureen Ohlhausen ably explained, the FTC wrongly required the disgorgement of funds in this case.  The settlement reflects an overzealous application of antitrust enforcement to unilateral conduct that may well be efficient.  It also manifests a highly inappropriate application of antitrust monetary relief that stands to increase private uncertainty, to the detriment of economic welfare.

The basic facts and allegations in this matter, drawn from the FTC’s statement accompanying the settlement, are as follows.  Through separate acquisitions in 2003 and 2004, Cardinal Health became the largest operator of radiopharmacies in the United States and the sole radiopharmacy operator in the 25 relevant markets addressed by this settlement.  Radiopharmacies distribute and sell radiopharmaceuticals, which are drugs containing radioactive isotopes, used by hospitals and clinics to diagnose and treat diseases.  Notably, radiopharmacies typically derive at least 60% of their revenues from the sale of heart perfusion agents (“HPAs”), a type of radiopharmaceutical that healthcare providers use to conduct heart stress tests.  A practical consequence is that radiopharmacies cannot operate a financially viable and competitive business without access to an HPA.  Between 2003 and 2008, Cardinal allegedly employed various tactics to induce the only two manufacturers of HPAs in the United States, Bristol-Myers Squibb (BMS) and GE Amersham, to withhold HPA distribution rights from would-be radiopharmacy market entrants, in violation of Section 2 of the Sherman Act.  Through these tactics, Cardinal allegedly maintained exclusive dealing rights, denied its customers the benefits of competition, and profited from the monopoly prices it charged for all radiopharmaceuticals, including HPAs, in the relevant markets.  Importantly, according to the FTC, there was no efficiency benefit or legitimate business justification for Cardinal simultaneously maintaining exclusive distribution rights to the only two HPAs then available in the relevant markets.

This settlement raises two types of problems.

First, this was a single-firm conduct exclusive dealing case involving (at best) questionable anticompetitive effects.  As Josh Wright (citing the economics literature) pointed out in his dissent, “there are numerous plausible efficiency justifications for such [exclusive dealing] restraints.”  (Moreover, as Josh Wright and I stressed in an article on tying and exclusive dealing, “[e]xisting empirical evidence of the impact of exclusive dealing is scarce but generally favors the view that exclusive dealing is output‐enhancing”, suggesting that a (rebuttable) presumption of legality would be appropriate in this area.)  Indeed, in this case, Commissioner Wright explained that “[t]he tactics the Commission challenges could have been output-enhancing” in various markets.  Furthermore, Commissioner Wright emphasized that the data analysis showing that Cardinal charged higher prices in monopoly markets was “very fragile.  The data show that the impact of a second competitor on Cardinal’s prices is small, borderline statistically significant, and not robust to minor changes in specification.”  Commissioner Ohlhausen’s dissent reinforced Commissioner Wright’s critique of the majority’s exclusive dealing theory.  As she put it:

“[E]ven if the Commission could establish that Cardinal achieved some type of de facto exclusivity with both Bristol-Myers Squibb and General Electric Co. during the relevant time period (and that is less than clear), it is entirely unclear that such exclusivity – rather than, for example, insufficient demand for more than one radiopharmacy – caused the lack of entry within each of the relevant markets. That alternative explanation seems especially likely in the six relevant markets in which ‘Cardinal remains the sole or dominant radiopharmacy,’ notwithstanding the fact that whatever exclusivity Cardinal may have achieved admittedly expired in early 2008.  The complaint provides no basis for the assertion that Cardinal’s conduct during the 2003-2008 period has caused the lack of entry in those six markets during the past seven years.”

Furthermore, Commissioner Ohlhausen underscored Commissioner Wright’s critique of the empirical evidence in this case:  “[T]he evidence of anticompetitive effects in the relevant markets at issue is significantly lacking.  It is largely based on non-market-specific documentary evidence. The market-specific empirical evidence we do have implies very small (i.e. low single-digit) and often statistically insignificant price increases or no price increases at all.”

Second, the FTC’s requirement that Cardinal Health disgorge $26.8 million into a fund for allegedly injured consumers is unmeritorious and inappropriately chills potentially procompetitive behavior.  Commissioner Ohlhausen focused on how this case ran afoul of the FTC’s 2003 Policy Statement on Monetary Equitable Remedies in Competition Cases (Policy Statement) (withdrawn by the FTC in 2012, over Commissioner Ohlhausen’s dissent), which reserves disgorgement for cases in which the underlying violation is clear and there is a reasonable basis for calculating the amount of a remedial payment.  As Ohlhausen explained, this case violates those principles because (1) it does not involve a clear violation of the antitrust laws (see above) and, given the lack of anticompetitive effects evidence (see above), (2) there is no reasonable basis for calculating the disgorgement amount (indeed, there is “the real possibility of no ill-gotten gains for Cardinal”).  Furthermore:

“The lack of guidance from the Commission on the use of its disgorgement authority [following withdrawal of the Policy Statement] makes any such use inherently unpredictable and thus unfair. . . .  The Commission therefore ought to reinstate the Policy Statement – either in its original form or in some modified form that the current Commissioners can agree on – or provide some additional guidance on when it plans to seek the extraordinary remedy of disgorgement in antitrust cases.”

In his critique of disgorgement, Commissioner Wright deployed law and economics analysis (and, in particular, optimal deterrence theory).  He explained that regulators should be primarily concerned with over-deterrence in single-firm conduct cases such as this one, which raise the possibility of private treble damage actions.  Wright stressed:

“I would . . . pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single-firm conduct, only if the monopolist’s conduct violates the Sherman Act and has no plausible efficiency justification. . . .  This case does not belong in that category. Declining to pursue disgorgement in most cases involving vertical restraints has the virtue of taking the remedy off the table – and thus reducing the risk of over-deterrence – in the cases that present the most difficulty in distinguishing between anticompetitive conduct that harms consumers and procompetitive conduct that benefits them, such as the present case.”

Commissioner Wright also shared Commissioner Ohlhausen’s concern about the lack of meaningful FTC guidance regarding when and whether it will seek disgorgement, and agreed with her that the FTC should reinstate the Policy Statement or provide new specific guidance in this area.  (See my 2012 ABA Antitrust Source article for a fuller critique of the antitrust error costs, chilling effects, and harmful international ramifications associated with the withdrawal of the Policy Statement.)

In sum, one may hope that in the future the FTC:  (1) will be more attentive to the potential efficiencies of exclusive dealing; (2) will proceed far more cautiously before proposing an enforcement action in the exclusive dealing area; (3) will avoid applying disgorgement in exclusive dealing cases; and (4) will promulgate a new disgorgement policy statement that reserves disgorgement for unequivocally illegal antitrust offenses in which economic harm can readily be calculated with a high degree of certainty.

The FCC’s proposed “Open Internet Order,” which would impose heavy-handed “common carrier” regulation of Internet service providers (the Order is being appealed in federal court and there are good arguments for striking it down) in order to promote “net neutrality,” is fundamentally misconceived.  If upheld, it will slow innovation, impose substantial costs, and harm consumers (see Heritage Foundation commentaries on FCC Internet regulation here, here, here, and here).  What’s more, it is not needed to protect consumers and competition from potential future abuse by Internet firms.  As I explain in a Heritage Foundation Legal Memorandum published yesterday, should the Open Internet Order be struck down, the U.S. Federal Trade Commission (FTC) has ample authority under Section 5 of the Federal Trade Commission Act (FTC Act) to challenge any harmful conduct by entities involved in Internet broadband services markets when such conduct undermines competition or harms consumers.

Section 5 of the FTC Act authorizes the FTC to prevent persons, partnerships, or corporations from engaging in “unfair methods of competition” or “unfair or deceptive acts or practices” in or affecting commerce.  This gives it ample authority to challenge Internet abuses raising antitrust (unfair methods) and consumer protection (unfair acts or practices) issues.

On the antitrust side, in evaluating individual business restraints under a “rule of reason,” the FTC relies on objective fact-specific analyses of the actual economic and consumer protection implications of a particular restraint.  Thus, FTC evaluations of broadband industry restrictions are likely to be more objective and predictable than highly subjective “public interest” assessments by the FCC, leading to reduced error and lower planning costs for purveyors of broadband and related services.  Appropriate antitrust evaluation should accord broad leeway to most broadband contracts.  As FTC Commissioner Josh Wright put it in testifying before Congress, “fundamental observation and market experience [demonstrate] that the business practices at the heart of the net neutrality debate are generally procompetitive.”  This suggests application of a rule of reason that will fully weigh efficiencies but not shy away from challenging broadband-related contractual arrangements that undermine the competitive process.

On the consumer protection side, the FTC can attack statements made by businesses that mislead and thereby impose harm on consumers (including business purchasers) who are acting reasonably.  It can also challenge practices that, though not literally false or deceptive, impose substantial harm on consumers (including business purchasers) that they cannot reasonably avoid, assuming the harm is greater than any countervailing benefits.  These are carefully designed and cabined sources of authority that require the FTC to determine the presence of actual consumer harm before acting.  Application of the FTC’s unfairness and deception powers therefore lacks the uncertainty associated with the FCC’s uncabined and vague “public interest” standard of evaluation.  As in the case of antitrust, the existence of greater clarity and a well-defined analytic methodology suggests that reliance on FTC rather than FCC enforcement in this area is preferable from a policy standpoint.

Finally, arguments for relying on FTC Internet policing are based on experience as well – the FTC is no Internet policy novice.  It closely monitors Internet activity and, over the years, it has developed substantial expertise in Internet topics through research, hearings, and enforcement actions.

Most recently, for example, the FTC sued AT&T in federal court for allegedly slowing wireless customers’ Internet speeds even though the customers had subscribed to “unlimited” data usage plans.  The FTC asserted that in offering renewals to unlimited-plan customers, AT&T did not adequately inform them of a new policy to “throttle” (drastically reduce the speed of) customer data service once a certain monthly data usage cap was met. This direct harm from throttling came on top of the high fees that dissatisfied customers would face for early termination of their service.  The FTC characterized this behavior as both “unfair” and “deceptive.”  Moreover, the Commission claimed that throttling-related speed reductions and data restrictions were not determined by real-time network congestion and thus did not even qualify as reasonable network management activity.  This case illustrates that the FTC is perfectly capable of challenging potential “network neutrality” violations that harm consumer welfare (since “throttled” customers are provided service that is inferior to the service afforded customers on “tiered” service plans), and thus that FCC involvement is unwarranted.

In sum, if a court strikes down the latest FCC effort to regulate the Internet, the FTC has ample authority to address competition and consumer protection problems in the area of broadband, including questions related to net neutrality.  The FTC’s highly structured, analytic, fact-based approach to these issues is superior to FCC net neutrality regulation based on vague and unfocused notions of the public interest.  If a court does not act, Congress might wish to consider legislation to prohibit FCC Internet regulation and leave oversight of potential competitive and consumer abuses to the FTC.

In a recent post, I presented an overview of the International Competition Network’s (ICN’s) recent Annual Conference in Sydney, Australia.  Today I briefly summarize and critique a key product approved by the Conference, a new chapter 6 of the ICN’s Workbook on Unilateral Conduct, devoted to tying and bundling.  (My analysis is based on a hard copy final version of the chapter, which shortly will be posted online at internationalcompetitionnetwork.org.)

Chapter 6 is the latest installment in the ICN’s continuing effort to present an overview of how different types of single-firm conduct might be assessed by competition authorities, taking into account potential efficiencies as well as potential theories of competitive harm.  In particular, chapter 6 defines tying and bundling; focuses primarily on theories of exclusionary anticompetitive effects; lays out potential evaluative criteria (for example, when tying is efficient, it is likely to be employed by a dominant firm’s significant competitors); and discusses the characteristics of tying/bundling.  It then turns to theories of anticompetitive leveraging, foreclosure, and price discrimination (avoiding taking a position as to whether price discrimination is a basis for condemning tying), and discusses how possible and actual anticompetitive effects might be observed.  The chapter next addresses justifications and defenses for tying and bundling, including reduced manufacturing and distribution costs; reduced customer transaction and search costs; improved product performance or convenience; and quality and safety assurance.  The chapter then proclaims that “[t]he burden of demonstrating the likelihood and magnitude of actual or potential efficiencies generally is placed on an accused infringer”; states that “agencies must examine whether those claimed efficiencies actually arise from the tying arrangement, and whether there are ways to achieve the claimed efficiencies through less restrictive means”; and implicitly lends support to rule of reason balancing, noting that, “[i]n many jurisdictions if the party imposing the tie can establish that its claimed efficiencies would outweigh the anticompetitive effects then the conduct would not be deemed an infringement.”  The chapter ends with a normative suggestion:  “When the harm is likely materially greater than the efficiencies, the practice should be condemned. When the harm and the efficiencies both seem likely to be at the same rough magnitude, the general principle of non-interference in the market place may suggest that the practice not be condemned.”

Overall, chapter 6 presents a generally helpful discussion of tying and bundling, avoiding the misguided condemnations of these frequently efficient practices that characterized antitrust enforcement prior to the incorporation of modern economic analysis.  This good chapter, however, could be enhanced by drawing upon sources that explore the actual effects of tying, such as a literature review explaining that there is very little empirical support for the proposition that tying or bundling is actually anticompetitive.  Chapter 6 could also benefit by setting forth a broader set of efficiency explanations for these practices, and by addressing the fact that using tying or bundling to gain market share at rivals’ expense need not imply consumer harm (the literature review noted above also addresses these points).  If chapter 6 is revised, it should discuss these issues, and also include footnote and bibliographic references to the extensive law and economics literature on bundling and tying.

More generally, chapter 6, and the entire Workbook, could benefit by evincing greater recognition of the limits of antitrust enforcement, in particular, the inevitability of error costs in enforcement (especially since welfare-enhancing unilateral practices may well be misunderstood by enforcers), and the general desirability of avoiding false positives that discourage aggressive but efficiency-enhancing unilateral conduct.  In this regard, chapter 6 could be improved by taking a page from the discussion of error costs in the U.S. Justice Department’s 2008 Report on Single Firm Conduct (withdrawn in 2009 by the Obama Administration).  The 2008 Report also stated, with regard to tying, “that when actual or probable harm to competition is shown, tying should be illegal only when (1) it has no procompetitive benefits, or (2) if there are procompetitive benefits, the tie produces harms substantially disproportionate to those benefits.”  As the 2008 Report further explained, the disproportionality test would make a good “default” standard for those forms of unilateral conduct that lack specific tests of illegality.  Moving toward a default disproportionality standard, however, is a long-term project, which requires rethinking of unilateral conduct enforcement policy in the United States and most other jurisdictions.

The ICN’s 14th Annual Conference, held in Sydney, Australia, from April 28th through May 1st, as usual, provided a forum for highlighting the work of ICN working groups on cartels, mergers, unilateral conduct, agency effectiveness, and advocacy.  The Conference approved multiple working group products, including a guidance document on investigative process that reflects key investigative tools and procedural fairness principles; a new chapter for the ICN Anti-Cartel Enforcement Manual on the relationship between competition agencies and public procurement bodies; a practical guide to international cooperation in mergers; a workbook chapter on tying and bundling (more on this in a future Truth on the Market commentary); and a report on developing an effective competition culture.  Efforts to promote greater openness and procedural due process in competition agency investigations (a U.S. Government priority) – and to reduce transaction costs and unnecessary burdens in merger reviews – continue to make slow but steady progress.  The host Australian agency’s “special project,” a report based on a survey of how agencies treat vertical restraints in online commerce, fortunately was descriptive, not normative, and hopefully will not prompt follow-up initiatives.  (There is no sound reason to believe that vertical restraints of any kind should be given high enforcement priority.)

Most significant from a consumer welfare standpoint, however, were the signs that competition advocacy is being given a higher profile within the ICN.  Competition advocacy seeks to dismantle existing government regulations that harm the competitive process, and to prevent the creation of new ones, such as rules that create barriers to entry or other inefficiencies that have a disparate impact on differently situated firms.  The harm stemming from such distortions (described as “anticompetitive market distortions” or “ACMDs” in the recent literature) swamps the effects of purely private restraints, and merits the highest priority from public officials who seek to promote consumer welfare.  In the plenary event on the Conference’s closing day (moderated by former UK Office of Fair Trading head John Fingleton), the leaders of the competition agencies of France, Mexico, and Singapore, joined by an Italian Competition Commissioner, addressed the theme of “credible advocacy,” specifically, means by which competition agencies can highlight the harm from government impediments to competition.  Representatives of the World Bank and OECD participated in the Sydney Conference discussions of competition advocacy, reflecting a growing interest in this topic by international economic institutions.  The newly approved ICN report on developing a competition culture pointed the way toward promoting greater public acceptance of procompetitive policies – a prerequisite for the broad-scale dismantling of existing (and blocking of newly proposed) ACMDs.

Notably, in a follow-up breakout session on advocacy toward policymakers, former Mexican competition chief (and head of the ICN Executive Steering Committee) Eduardo Perez Motta cited the example of his agency’s convincing the Mexican Commerce Ministry not to adopt new non-tariff barriers that would have effectively blocked steel imports – a result that would have imposed major harm on both Mexican businesses that utilize steel inputs and many ultimate consumers.  (The proposed steel restraint, a prime example of an ACMD, represented a manifestation of crony capitalism – a growing problem in industrialized economies, including the United States.)  This example vividly demonstrates that competition agencies may occasionally prove successful in the fight to curb ACMDs (and crony capitalism in general), if they have sufficient political influence and are given the correct tools to spot and highlight for the public the costs of such harmful government restraints.

A powerful way to build public support against ACMDs is to highlight their costs.  Scholars from Babson College (Shanker Singham and Srinivasa Rangan), Northeastern University (Robert Bradley), and I have developed a metric that seeks to estimate the negative effects of ACMDs on national productivity.  Our paper, which presents quantitative estimates on how various institutional factors affect productivity, draws upon existing indices of economic liberty, including the World Economic Forum Global Competitiveness Index, the Fraser Index, and the Heritage Foundation Index of Economic Freedom.  We will present this paper at a World Bank-OECD Conference on Competition Policy, Shared Prosperity and Inclusive Growth, to be held next month at World Bank Headquarters in Washington, D.C.  (Hopefully this will lead to annual joint World Bank-OECD conferences exploring this topic.)  Stay tuned for additional information on ongoing efforts by the ICN and other international economic institutions to bolster competition advocacy – and for more details on my co-authored paper.

Recently, FCC Commissioner Ajit Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB) et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations, regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if the merging stations’ combined share of the audience or of advertising is below the level that could conceivably give rise to any inference of market power.

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8-to-7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Merger Guidelines, which look at competitive effects rather than just counting competitors.

Not only did the FCC fail to analyze the marketplace to understand how much competition there is between local broadcasters, cable, and online video, but, on top of that, the FCC applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri to use $3.5 million in cost savings to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with JSAs have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if JSAs harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where that Spanish-language programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Order is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.

And:

[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.

Also:

The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently, in Judge Silberman’s concurrence/dissent in Verizon v. FCC, the case reviewing the 2010 Open Internet Order:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be interesting to see what the DC Circuit does with these arguments, given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in that case as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean)
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Last year, Microsoft’s new CEO, Satya Nadella, seemed to break with the company’s longstanding “complain instead of compete” strategy to acknowledge that:

We’re going to innovate with a challenger mindset…. We’re not coming at this as some incumbent.

Among the first items on his agenda? Treating competing platforms like opportunities for innovation and expansion rather than obstacles to be torn down by any means possible:

We are absolutely committed to making our applications run what most people describe as cross platform…. There is no holding back of anything.

Earlier this week, at its Build Developer Conference, Microsoft announced its most significant initiative yet to bring about this reality: code built into its Windows 10 OS that will enable Android and iOS developers to port apps into the Windows ecosystem more easily.

To make this possible… Windows phones “will include an Android subsystem” meant to play nice with the Java and C++ code developers have already crafted to run on a rival’s operating system…. iOS developers can compile their Objective C code right from Microsoft’s Visual Studio, and turn it into a full-fledged Windows 10 app.

Microsoft also announced that its new browser, rebranded as “Edge,” will run Chrome and Firefox extensions, and that its Office suite would enable a range of third-party services to integrate with Office on Windows, iOS, Android and Mac.

Consumers, developers and Microsoft itself should all benefit from the increased competition that these moves are certain to facilitate.

Most obviously, more consumers may be willing to switch to phones and tablets with the Windows 10 operating system if they can continue to enjoy the apps and extensions they’ve come to rely on when using Google and Apple products. As one commenter said of the move:

I left Windows phone due to the lack of apps. I love the OS though, so if this means all my favorite apps will be on the platform I’ll jump back onto the WP bandwagon in a heartbeat.

And developers should invest more in development when they can expect additional revenue from yet another platform running their apps and extensions, with minimal additional development required.

It’s win-win-win. Except perhaps for Microsoft’s lingering regulatory strategy to hobble Google.

That strategy is built primarily on antitrust claims, most recently rooted in arguments that consumers, developers and competitors alike are harmed by Google’s conduct around Android, which, it is alleged, makes it difficult for OS makers (like Cyanogen) and app developers (like Microsoft Bing) to compete.

But Microsoft’s interoperability announcements (along with a host of other rapidly evolving market characteristics) actually serve to undermine the antitrust arguments that Microsoft, through groups like FairSearch and ICOMP, has largely been responsible for pushing in the EU against Google/Android.

The reality is that, with innovations like the one Microsoft announced this week, Microsoft, Google and Apple (and Samsung, Nokia, Tizen, Cyanogen…) are competing more vigorously on several fronts. Such competition is evidence of a vibrant marketplace that is simply not in need of antitrust intervention.

The supreme irony in this is that such a move represents a (further) nail in the coffin of the supposed “applications barrier to entry” that was central to the US DOJ’s antitrust suit against Microsoft and that factors into the contemporary Android antitrust arguments against Google.

Frankly, the argument was never very convincing. Absent unjustified and anticompetitive efforts to prop up such a barrier, the “applications barrier to entry” is just a synonym for “big.” Admittedly, the DC Court of Appeals in Microsoft was careful — far more careful than the district court — to locate specific, narrow conduct beyond the mere existence of the alleged barrier that it believed amounted to anticompetitive monopoly maintenance. But central to the imposition of liability was the finding that some of Microsoft’s conduct deterred application developers from effectively accessing other platforms, without procompetitive justification.

With the implementation of initiatives like the one Microsoft has now undertaken in Windows 10, however, it appears that such concerns regarding Google and mobile app developers are unsupportable.

Of greatest significance to the current Android-related accusations against Google, the appeals court in Microsoft also reversed the district court’s finding of liability based on tying, noting in particular that:

If OS vendors without market power also sell their software bundled with a browser, the natural inference is that sale of the items as a bundle serves consumer demand and that unbundled sale would not.

Of course this is exactly what Microsoft Windows Phone (which decidedly does not have market power) does, suggesting that the bundling of mobile OS’s with proprietary apps is procompetitive.

Similarly, in reviewing the eventual consent decree in Microsoft, the appeals court upheld the conditions that allowed the integration of OS and browser code, and rejected the plaintiff’s assertion that a prohibition on such technological commingling was required by law.

The appeals court praised the district court’s recognition that an appropriate remedy “must place paramount significance upon addressing the exclusionary effect of the commingling, rather than the mere conduct which gives rise to the effect,” as well as the district court’s acknowledgement that “it is not a proper task for the Court to undertake to redesign products.”  Said the appeals court, “addressing the applications barrier to entry in a manner likely to harm consumers is not self-evidently an appropriate way to remedy an antitrust violation.”

Today, claims that the integration of Google Mobile Services (GMS) into Google’s version of the Android OS is anticompetitive are misplaced for the same reason:

But making Android competitive with its tightly controlled competitors [e.g., Apple iOS and Windows Phone] requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

In fact, some commenters have even suggested that, by effectively making the OS more “open,” Microsoft’s new Windows 10 initiative might undermine the Windows experience in exactly this fashion:

As a Windows Phone developer, I think this could easily turn into a horrible idea…. [I]t might break the whole Windows user experience Microsoft has been building in the past few years. Modern UI design is a different approach from both Android and iOS. We risk having a very unhomogenic [sic] store with lots of apps using different design patterns, and Modern UI is in my opinion, one of the strongest points of Windows Phone.

But just because Microsoft may be willing to take this risk doesn’t mean that any sensible conception of competition law and economics should require Google (or anyone else) to do so, as well.

Most significantly, Microsoft’s recent announcement is further evidence that both technological and contractual innovations can (potentially — the initiative is too new to know its effect) transform competition, undermine static market definitions and weaken theories of anticompetitive harm.

When apps and their functionality are routinely built into some OS’s or set as defaults; when mobile apps are also available for the desktop and are seamlessly integrated to permit identical functions to be performed on multiple platforms; and when new form factors like Apple MacBook Air and Microsoft Surface blur the lines between mobile and desktop, traditional, static anticompetitive theories are out the window (no pun intended).

Of course, it’s always been possible for new entrants to overcome network effects and scale impediments by a range of means. Microsoft itself has in the past offered to pay app developers to write for its mobile platform. Similarly, it offers inducements to attract users to its Bing search engine and it has devised several creative mechanisms to overcome its claimed scale inferiority in search.

A further irony (and market complication) is that now some of these apps — the ones with network effects of their own — threaten in turn to challenge the reigning mobile operating systems, exactly as Netscape was purported to threaten Microsoft’s OS (and lead to its anticompetitive conduct) back in the day. Facebook, for example, now offers not only its core social media function, but also search, messaging, video calls, mobile payments, photo editing and sharing, and other functionality that compete with many of the core functions built into mobile OS’s.

But the desire by apps like Facebook to expand their networks by being on multiple platforms, and the desire by these platforms to offer popular apps in order to attract users, ensure that Facebook is ubiquitous, even without any antitrust intervention. As Timothy Bresnahan, Joe Orsini and Pai-Ling Yin demonstrate:

(1) The distribution of app attractiveness to consumers is skewed, with a small minority of apps drawing the vast majority of consumer demand. (2) Apps which are highly demanded on one platform tend also to be highly demanded on the other platform. (3) These highly demanded apps have a strong tendency to multihome, writing for both platforms. As a result, the presence or absence of apps offers little reason for consumers to choose a platform. A consumer can choose either platform and have access to the most attractive apps.

Of course, even before Microsoft’s announcement, cross-platform app development was common, and third-party platforms like Xamarin facilitated cross-platform development. As Daniel O’Connor noted last year:

Even if one ecosystem has a majority of the market share, software developers will release versions for different operating systems if it is cheap/easy enough to do so…. As [Torsten] Körber documents [here], building mobile applications is much easier and cheaper than building PC software. Therefore, it is more common for programmers to write programs for multiple OSes…. 73 percent of apps developers design apps for at least two different mobiles OSes, while 62 percent support 3 or more.

Whether Microsoft’s interoperability efforts prove to be “perfect” or not (and some commenters are skeptical), they seem destined to at least further decrease the cost of cross-platform development, thus reducing any “application barrier to entry” that might impede Microsoft’s ability to compete with its much larger rivals.

Moreover, one of the most interesting things about the announcement is that it will enable Android and iOS apps to run not only on Windows phones, but also on Windows computers. Some 1.3 billion PCs run Windows. Forget Windows’ tiny share of mobile phone OS’s; that massive potential PC market (of which Microsoft still has 91 percent) presents an enormous ready-made market for mobile app developers that won’t be ignored.

It also points up the increasing absurdity of compartmentalizing these markets for antitrust purposes. As the relevant distinctions between mobile and desktop markets break down, the idea of Google (or any other company) “leveraging its dominance” in one market to monopolize a “neighboring” or “related” market is increasingly unsustainable. As I wrote earlier this week:

Mobile and social media have transformed search, too…. This revolution has migrated to the computer, which has itself become “app-ified.” Now there are desktop apps and browser extensions that take users directly to Google competitors such as Kayak, eBay and Amazon, or that pull and present information from these sites.

In the end, intentionally or not, Microsoft is (again) undermining its own case. And it is doing so by innovating and competing — those Schumpeterian concepts that were always destined to undermine antitrust cases in the high-tech sector.

If we’re lucky, Microsoft’s new initiatives are the leading edge of a sea change for Microsoft — a different and welcome mindset built on competing in the marketplace rather than at regulators’ doors.

Last week, the FTC announced its complaint and consent decree with Nomi Technologies for failing to allow consumers to opt out of cell phone tracking while shopping in retail stores. Whatever one thinks about Nomi itself, the FTC’s enforcement action represents another step in the dubious application of its enforcement authority against deceptive statements.

In response, Geoffrey Manne, Ben Sperry, and Berin Szoka have written a new ICLE White Paper, titled In the Matter of Nomi Technologies, Inc.: The Dark Side of the FTC’s Latest Feel-Good Case.

Nomi Technologies offers retailers an innovative way to observe how customers move through their stores, how often they return, what products they browse and for how long (among other things) by tracking the Wi-Fi addresses broadcast by customers’ mobile phones. This allows stores to do what websites do all the time: tweak their configuration, pricing, purchasing and the like in response to real-time analytics — instead of just eyeballing what works. Nomi anonymized the data it collected so that retailers couldn’t track specific individuals. Recognizing that some customers might still object, even to “anonymized” tracking, Nomi allowed anyone to opt out of all Nomi tracking on its website.

The FTC, though, seized upon a promise made within Nomi’s privacy policy to provide an additional, in-store opt-out and argued that Nomi’s failure to make good on this promise — and/or to notify customers of which stores used the technology — made its privacy policy deceptive. Commissioner Wright dissented, noting that the majority failed to consider evidence showing that the promise was not material; that is, the inaccurate statement was not important enough to actually affect consumers’ behavior, because they could opt out on the website anyway. Both Commissioner Wright’s and Commissioner Ohlhausen’s dissents argued that the FTC majority’s enforcement decision in Nomi amounted to prosecutorial overreach, imposing an overly stringent standard of review without any actual indication of consumer harm.

The FTC’s deception authority is supposed to allow the agency to remedy consumer harms not effectively handled by common law torts and contracts — but it’s not a blank check. The 1983 Deception Policy Statement (DPS) requires the FTC to demonstrate:

  1. There is a representation, omission or practice that is likely to mislead the consumer;
  2. A consumer’s interpretation of the representation, omission, or practice is considered reasonable under the circumstances; and
  3. The misleading representation, omission, or practice is material (meaning the inaccurate statement was important enough to actually affect consumers’ behavior).

Under the DPS, certain types of claims are treated as presumptively material, although the FTC is always supposed to “consider relevant and competent evidence offered to rebut presumptions of materiality.” The Nomi majority failed to do exactly that in its analysis of the company’s claims, as Commissioner Wright noted in his dissent:

the Commission failed to discharge its commitment to duly consider relevant and competent evidence that squarely rebuts the presumption that Nomi’s failure to implement an additional, retail-level opt out was material to consumers. In other words, the Commission neglects to take into account evidence demonstrating consumers would not “have chosen differently” but for the allegedly deceptive representation.

As we discuss in detail in the white paper, we believe that the Commission committed several additional legal errors in its application of the Deception Policy Statement in Nomi, over and above its failure to adequately weigh exculpatory evidence. Exceeding the legal constraints of the DPS isn’t just a legal problem: in this case, it’s led the FTC to bring an enforcement action that will likely have the very opposite of its intended result, discouraging rather than encouraging further disclosure.

Moreover, as we write in the white paper:

Nomi is the latest in a long string of recent cases in which the FTC has pushed back against both legislative and self-imposed constraints on its discretion. By small increments (unadjudicated consent decrees), but consistently and with apparent purpose, the FTC seems to be reverting to the sweeping conception of its power to police deception and unfairness that led the FTC to a titanic clash with Congress back in 1980.

The Nomi case presents yet another example of the need for FTC process reforms. Those reforms could ensure the FTC focuses on cases that actually make consumers better off. But given the FTC majority’s unwavering dedication to maximizing its discretion, such reforms will likely have to come from Congress.

Find the full white paper here.

The precise details underlying the European Commission’s (EC) April 15 Statement of Objections (SO) against Google – the EC’s equivalent of an antitrust complaint, centered on the company’s promotion of its comparison shopping service (CSS), “Google Shopping” – have not yet been made public.  Nevertheless, the EC’s fact sheet describing the theory of the case is most discouraging to anyone who believes in economically sound, consumer welfare-oriented antitrust enforcement.  Put simply, the SO alleges that Google is “abusing its dominant position” in online search services throughout Europe by systematically positioning and prominently displaying its CSS in its general search result pages, “irrespective of its merits,” causing the Google CSS to achieve higher rates of growth than CSSs promoted by rivals.  According to the EC, this behavior “has a negative impact on consumers and innovation”.  Why so?  Because this “means that users do not necessarily see the most relevant shopping results in response to their queries, and that incentives to innovate from rivals are lowered as they know that however good their product, they will not benefit from the same prominence as Google’s product.”  (Emphasis added.)  The EC’s proposed solution?  “Google should treat its own comparison shopping services and those of rivals in the same way.”

The EC’s latest action may represent only “the tip of a Google EC antitrust iceberg,” since the EC has stated that it is continuing to investigate other aspects of Google’s behavior, including Google agreements with respect to the Android operating system, plus “the favourable treatment by Google in its general search results of other specialised search services, and concerns with regard to copying of rivals’ web content (known as ‘scraping’), advertising exclusivity and undue restrictions on advertisers.”  For today, I focus on the tip, leaving consideration of the bulk of the iceberg to future commentaries, as warranted.  (Truth on the Market has addressed Google-related antitrust issues previously — see, for example, here, here, and here.)

The EC’s April 15 Google SO is troublesome in multiple ways.

First, the claim that Google does not “necessarily” array the most relevant search results in a manner desired by consumers appears to be in tension with the findings of an exhaustive U.S. antitrust investigation of the company.  As U.S. Federal Trade Commissioner Josh Wright pointed out in a recent speech, the FTC’s 2013 “closing statement [in its Google investigation] indicates that Google’s so-called search bias did not, in fact, harm consumers; to the contrary, the evidence suggested that ‘Google likely benefited consumers by prominently displaying its vertical content on its search results page.’  The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior – so-called ‘click through’ data – which showed how consumers reacted to Google’s promotion of its vertical properties.”

Second, even assuming that Google’s search engine practices have weakened competing CSSs, that would not justify EC enforcement action against Google.  As Commissioner Wright also explained, the FTC “accepted arguments made by competing websites that Google’s practices injured them and strengthened Google’s market position, but correctly found that these were not relevant considerations in a proper antitrust analysis focused upon consumer welfare rather than harm to competitors.”  The EC should keep this in mind, given that, as former EC Competition Commissioner Joaquin Almunia emphasized, “[c]onsumer welfare is not just a catchy phrase.  It is the cornerstone, the guiding principle of EU competition policy.”

Third, and perhaps most fundamentally, although the EC disclaims an interest in “interfer[ing] with” Google’s search engine algorithm, dictating an “equal treatment of competitors” result implicitly would require intrusive micromanagement of Google’s search engine – a search engine that is at the heart of the company’s success and has bestowed enormous welfare benefits on consumers and producers alike.  There is no reason to believe that EC policing of Google’s CSS listings to enforce an “equal treatment of competitors” mandate would result in a search experience that better serves consumers than the current Google policy.  Consistent with this point, in its 2013 Google closing statement, the FTC observed that it lacked the ability to “second-guess” product improvements that plausibly benefit consumers, and it stressed that “condemning legitimate product improvements risks harming consumers.”

Fourth, competing CSSs have every incentive to inform consumers if they believe that Google search results are somehow “inferior” to their offerings.  They are free to advertise and publicize the merits of their services, and third-party intermediaries that rate browsers may be expected to report if Google Shopping consistently offers suboptimal service to consumers.  In short, “the word will get out.”  Even in the absence of perfect information, consumers can readily, and at low cost, browse alternative CSSs to determine whether they prefer those services to Google’s – “help is only a click away.”

Fifth, the most likely outcome of an EC “victory” in this case would be a reduced incentive for Google to invest in improving its search engine, knowing that its ability to monetize search engine improvements could be compromised by future EC decisions to prevent an improved search engine from harming rivals.  What’s worse, other developers of service platforms and other innovative business improvements would similarly “get the message” that it would not be worth their while to innovate to the point of dominance, because their returns to such innovation would be constrained.  In sum, companies in a wide variety of sectors would have less of an incentive to innovate, and this in turn would lead to reduced welfare gains and benefits to consumers.  This would yield (as the EC’s fact sheet put it) “a negative impact on consumers and innovation”, because companies across industries operating in Europe would know that if their product were too good, they would attract the EC’s attention and be put in their place.  In other words, a successful EC intervention here could spawn the very welfare losses (magnified across sectors) that the Commission cited as justification for reining in Google in the first place!

Finally, it should come as no surprise that a coalition of purveyors of competing search engines and online shopping sites lobbied hard for EC antitrust action against Google.  When government intervenes heavily and often in markets to “correct” perceived “abuses,” private actors have a strong incentive to expend resources on achieving government actions that disadvantage their rivals – resources that could otherwise have been used to compete more vigorously and effectively.  In short, the very existence of expansive regulatory schemes disincentivizes competition on the merits, and in that regard tends to undermine welfare.  Government officials should keep that firmly in mind when private actors urge them to act decisively to “cure” marketplace imperfections by limiting a rival’s freedom of action.

Let us hope that the EC takes these concerns to heart before taking further action against Google.

By a 3-2 vote, the Federal Communications Commission (FCC) decided on February 26 to preempt state laws in North Carolina and Tennessee that bar municipally-owned broadband providers from providing services beyond their geographic boundaries.  This decision raises substantial legal issues and threatens economic harm to state taxpayers and consumers.

The narrow FCC majority rested its decision on its authority to remove broadband investment barriers, citing Section 706 of the Telecommunications Act of 1996.  Section 706 requires the FCC to encourage the deployment of broadband to all Americans by using “measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment.”  As dissenting Commissioner Ajit Pai pointed out, however, Section 706 contains no specific language empowering the FCC to preempt state laws, and the FCC’s action trenches upon the sovereign power of the states to control their subordinate governmental entities.  Moreover, it is far from clear that authorizing government-owned broadband companies to expand into new territories promotes competition or eliminates broadband investment barriers.  Indeed, the opposite is more likely to be the case.

Simply put, government-owned networks artificially displace market forces and are an affront to the principle of relying on free competition to provide the goods and services consumers demand – including broadband communications.  Government-owned networks use local taxpayer monies and federal grants (also taxpayer funded, of course) to compete unfairly with existing private sector providers.  Those taxpayer subsidies put privately funded networks at a competitive disadvantage, creating barriers to new private sector entry or expansion, as private businesses decide they cannot fairly compete against government-backed enterprises.  In turn, reduced private sector investment tends to diminish quality and effective consumer choice.

These conclusions are based on hard facts, not mere theory.  There is no evidence that municipal broadband is needed because “market failure” has deterred private sector provision of broadband – indeed, firms such as Verizon, AT&T, and Comcast spend many billions of dollars annually to maintain, upgrade, and expand their broadband networks.  Far more serious is the risk of “government failure.”  Municipal corporations, free from market discipline and accountability due to their public funding, may be expected to be bureaucratic, inefficient, and slow to react to changing market conditions.  Consistent with this observation, an economic study of government-operated municipal broadband networks reveals failures to achieve universal service in the areas they serve; a lack of cost-benefit analysis, causing costs to outweigh benefits; the inefficient use of scarce resources; the inability to cover costs; anticompetitive behavior fueled by unfair competitive advantages; the inefficient allocation of limited tax revenues that are denied to more essential public services; and the stifling of private firm innovation.  In a time of tight budget constraints, the waste of taxpayer funds and the competitive harm stemming from municipal broadband activities are particularly unfortunate.  In short, real world evidence demonstrates that “[i]n a dynamic market such as broadband services, government ownership has proven to be an abject failure.”  What is required is not more government involvement, but, rather, fewer governmental constraints on private sector broadband activities.

Finally, the FCC’s decision raises serious constitutional concerns.  The Chattanooga, Tennessee and Wilson, North Carolina municipal broadband networks that requested FCC preemption impose troublesome speech limitations as conditions of service.  The utility that operates the Chattanooga network may “reject or remove any material residing on or transmitted to or through” the network that violates its “Accepted Use Policy.”  That Policy, among other things, prohibits using the network to send materials that are “threatening, abusive or hateful” or that offend “the privacy, publicity, or other personal rights of others.”  It also bars the posting of messages that are “intended to annoy or harass others.”  In a similar vein, the Wilson network bars transmission of materials that are “harassing, abusive, libelous or obscene” and “activities or actions intended to withhold or cloak any user’s identity or contact information.”  Content-based prohibitions of this type broadly restrict carriage of constitutionally protected speech and, thus, raise serious First Amendment questions.  Other municipal broadband systems may, of course, elect to adopt similarly questionable censorship-based policies.

In short, the FCC’s broadband preemption decision is likely to harm economic welfare and is highly problematic on legal grounds to boot.  The FCC should rescind that decision.  If it fails to do so, and if the courts do not strike the decision down, Congress should consider legislation to bar the FCC from meddling in state oversight of municipal broadband.

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s decision in FTC, et al. v. St. Luke’s.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that any theoretically possible alternative to a merger would suffice to discount the claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.