Archives For antitrust

PayPal co-founder Peter Thiel has a terrific essay in the Review section of today’s Wall Street Journal.  The essay, Competition Is for Losers, is adapted from Mr. Thiel’s soon-to-be-released book, Zero to One: Notes on Startups, or How to Build the Future.  Based on the title of the book, I assume it is primarily a how-to guide for entrepreneurs.  But if the rest of the book is anything like the essay in today’s Journal, it will also offer lots of guidance to policy makers–antitrust officials in particular.

We antitrusters usually begin with the assumption that monopoly is bad and perfect competition is good. That’s the starting point for most antitrust courses: the professor lays out the model of perfect competition, points to all the wealth it creates and how that wealth is distributed (more to consumers than to producers), and contrasts it to the monopoly pricing model, with its steep marginal revenue curve, hideous “deadweight loss” triangle, and unseemly redistribution of surplus from consumers to producers. Which is better, kids?  Why, perfect competition, of course!
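For readers who want the blackboard version, here is a minimal sketch of that textbook comparison, assuming linear demand and constant marginal cost (the functional form and algebra are illustrative only, not anything drawn from Thiel's essay):

    Demand: P = a - bQ, constant marginal cost c, with a > c.

    Perfect competition: P_{pc} = c, \quad Q_{pc} = \frac{a-c}{b}, \quad CS_{pc} = \frac{(a-c)^2}{2b}, \quad PS_{pc} = 0.

    Monopoly: MR = a - 2bQ \ \text{(the steep marginal revenue curve)}; \ \text{setting } MR = c:
    Q_m = \frac{a-c}{2b}, \quad P_m = \frac{a+c}{2}, \quad CS_m = \frac{(a-c)^2}{8b}, \quad \pi_m = \frac{(a-c)^2}{4b}, \quad DWL = \frac{(a-c)^2}{8b}.

In this stylized setup consumers keep only a quarter of their competitive surplus: half of it is transferred to the monopolist, and the remaining quarter simply vanishes as the deadweight-loss triangle.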

Mr. Thiel makes the excellent and oft-neglected point that monopoly power is not necessarily a bad thing. First, monopolists can do certain good things that perfect competitors can’t do:

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google’s motto–“Don’t be evil”–is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence.  In business, money is either an important thing or it is everything. Monopolists can think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

Fair enough, Thiel. But what about consumers? That model we learned shows us that they’re worse off under monopoly.  And what about the deadweight loss triangle–don’t forget about that ugly thing! 

So a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsize profits come at the expense of the rest of society? Actually, yes: Profits come out of customers’ wallets, and monopolies deserve their bad reputations–but only in a world where nothing changes.

Wait a minute, Thiel. Why do you think things are different when we inject “change” into the analysis?

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: Deeds are shuffled around from player to player, but the board never changes. There is no way to win by inventing a better kind of real estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: We can invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

Even the government knows this: That is why one of the departments works hard to create monopolies (by granting patents to new inventions) even though another part hunts them down (by prosecuting antitrust cases). It is possible to question whether anyone should really be rewarded a monopoly simply for having been the first to think of something like a mobile software design. But something like Apple’s monopoly profits from designing, producing and marketing the iPhone were clearly the reward for creating greater abundance, not artificial scarcity: Customers were happy to finally have the choice of paying high prices to get a smartphone that actually works. The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decadeslong operating system dominance.

…If the tendency of monopoly businesses was to hold back progress, they would be dangerous, and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and finance the ambitious research projects that firms locked in competition can’t dream of.

Geez, Thiel.  You know who you sound like?  Justice Scalia. Here’s how he once explained your idea (to shrieks and howls from many in the antitrust establishment!):

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices–at least for a short period–is what attracts “business acumen” in the first place. It induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Sounds like you and Scalia are calling for us antitrusters to update our models.  Is that it?

So why are economists obsessed with competition as an ideal state? It is a relic of history. Economists copied their mathematics from the work of 19th-century physicists: They see individuals and businesses as interchangeable atoms, not as unique creators. Their theories describe an equilibrium state of perfect competition because that is what’s easy to model, not because it represents the best of business.

C’mon now, Thiel. Surely you don’t expect us antitrusters to defer to you over all these learned economists when it comes to business.

A century ago Congress enacted the Clayton Act, which prohibits acquisitions that may substantially lessen competition. For years, the antitrust enforcement agencies looked at only one part of the ledger – the potential for price increases. The agencies didn’t take into account potential efficiencies from cost savings, better products and services, and innovation. One of the major reforms of the Clinton Administration was to fully incorporate efficiencies into merger analysis, helping to develop sound enforcement standards for the 21st century.

But the current approach of the Federal Trade Commission (“FTC”), especially in hospital mergers, appears to be taking a major step backwards by failing to fully consider efficiencies and arguing for legal thresholds inconsistent with sound competition policy. The FTC’s approach used primarily in hospital mergers seems uniquely misguided since there is a tremendous need for smart hospital consolidation to help bend the cost curve and improve healthcare delivery.

The FTC’s backwards analysis of efficiencies is apparent when two recent hospital-physician alliances are juxtaposed.

As I discussed in my last post, no one would doubt the need for greater integration between hospitals and physicians – the debate during the enactment of the Affordable Care Act (“ACA”) detailed how the current siloed approach to healthcare is the worst of all worlds, leading to escalating costs and inferior care. In FTC v. St. Luke’s Health System, Ltd., the FTC challenged Boise-based St. Luke’s acquisition of a physician practice in neighboring Nampa, Idaho.

In the case, St. Luke’s presented a compelling case for efficiencies.

As noted by the St. Luke’s court, one of the leading factors in rising healthcare costs is the use of the ineffective fee-for-service system. In their attempt to control costs and abandon fee-for-service payment, the merging parties effectively demonstrated to the court that the combined entity would offer a high level of coordinated and patient-centered care. Therefore, along with integrating electronic records and increasing access for under-privileged patients, the merged entity could also successfully manage population health and offer risk-based payment initiatives to all employed physicians. Indeed, the transaction, consummated several months ago, has already shown significant cost savings and consumer benefits, especially for underserved patients. The court recognized

[t]he Acquisition was intended by St. Luke’s and Saltzer primarily to improve patient outcomes. The Court believes that it would have that effect if left intact.

(Appellants’ Reply Brief at 22, FTC v. St. Luke’s Health Sys., No. 14-35173 (9th Cir. Sept. 2, 2014).)

But the court gave no weight to the efficiencies primarily because the FTC set forward the wrong legal roadmap.

Under the FTC’s current roadmap for efficiencies, the FTC may prove antitrust harm via prediction and presumption while defendants are required to decisively prove countervailing procompetitive efficiencies. Such asymmetric burdens of proof greatly favor the FTC and eliminate a court’s ability to properly analyze the procompetitive nature of efficiencies against the supposed antitrust harm.

Moreover, the FTC basically claims that any efficiencies can only be considered “merger-specific” if the parties are able to demonstrate there are no less anticompetitive means to achieve them. It is not enough that they result directly from the merger.

In the case of St. Luke’s, the court determined the defendants’ efficiencies would “improve the quality of medical care” in Nampa, Idaho, but were not merger-specific. The court relied on the FTC’s experts to find that efficiencies such as “elimination of fee-for-service reimbursement” and the movement “to risk-based reimbursement” were not merger-specific, because other entities had potentially achieved similar efficiencies within different provider “structures.” The FTC and its experts neither showed that these other models had succeeded nor disputed that St. Luke’s would achieve its stated efficiencies. Instead, the mere possibility of potential, alternative structures was enough to overcome merger efficiencies intended to “move the focus of health care back to the patient.” (The case is currently on appeal, and hopefully the Ninth Circuit will correct the lower court’s error.)

In contrast to the St. Luke’s case is the recent FTC advisory letter to the Norman Physician Hospital Organization (“Norman PHO”). The Norman PHO proposed a competitive collaboration serving to integrate care between the Norman Physician Association’s 280 physicians and Norman Regional Health System, the largest health system in Norman, Oklahoma. In its analysis of the Norman PHO, the FTC found that the groups could not “quantify… the likely overall efficiency benefits of its proposed program” nor “provide direct evidence of actual efficiencies or competitive effects.” Furthermore, such an arrangement had the potential to “exercise market power.” Nonetheless, the FTC permitted the collaboration, resting its decision instead on Norman PHO’s non-exclusive physician contracting provisions.

It seems difficult if not impossible to reconcile the FTC’s approaches in Boise and Norman. In Norman the FTC relied on only theoretical efficiencies to permit an alliance with significant market power. The FTC was more than willing to accept Norman PHO’s “potential to… generate significant efficiencies.” No such even-handed treatment of efficiencies was applied to the St. Luke’s merger.

The starting point for understanding the FTC’s misguided analysis of efficiencies in St. Luke’s and other merger cases stems from the 2010 Horizontal Merger Guidelines (“Guidelines”).

A recent dissent by FTC Commissioner Joshua Wright outlines the problem – there are asymmetric burdens placed on the plaintiff and defendant. Under the Guidelines, the FTC’s merger analysis

embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.

Relying on the structural presumption established in United States v. Philadelphia Nat’l Bank, the FTC need only illustrate that a merger will substantially lessen competition, typically demonstrated through a showing of undue concentration in a relevant market, not actual anticompetitive effects. If this low burden is met, the burden is then shifted to the defendants to rebut the presumption of competitive harm.
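To see what that structural showing typically looks like, here is a hypothetical illustration using the Herfindahl-Hirschman Index on which the 2010 Guidelines rely (the market shares are invented; the 2,500-point and 200-point figures are the Guidelines' own thresholds):

    HHI = \sum_i s_i^2 \quad \text{(shares in percentage points)}

    Pre-merger shares of 30, 25, 20, 15 and 10: \ HHI_{pre} = 900 + 625 + 400 + 225 + 100 = 2{,}250.
    If the 25\% and 20\% firms merge: \ HHI_{post} = 2{,}025 + 900 + 225 + 100 = 3{,}250, \quad \Delta HHI = 2 \times 25 \times 20 = 1{,}000.

Because the post-merger HHI exceeds 2,500 and the increase exceeds 200, the Guidelines treat the merger as presumptively likely to enhance market power, and the burden shifts to the merging parties, which is precisely the asymmetry Commissioner Wright describes.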

As part of their defense, defendants must then prove that any proposed efficiencies are cognizable, meaning “merger-specific,” and have been “verified and do not arise from anticompetitive reductions in output or service.” Furthermore, merging parties must demonstrate “by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved…, how each would enhance the merged firm’s ability and incentive to compete, and why each would be merger-specific.”

As stated in a recent speech by FTC Commissioner Joshua Wright,

the critical lesson of the modern economic approach to mergers is that post-merger changes in pricing incentives and competitive effects are what matter.

The FTC’s merger policy “has long been dominated by a focus on only one side of the ledger—anticompetitive effects.” In other words, defendants must demonstrate efficiencies with certainty, while the government can condemn a merger based on a prediction. This asymmetric enforcement policy favors the FTC while requiring defendants to meet stringent, unyielding standards.

As the ICLE amicus brief in St. Luke’s discusses, not satisfied with this asymmetric advantage, the plaintiffs in St. Luke’s attempt to “gild the lily” by claiming that efficiencies can only be considered in cases where there is a presumption of competitive harm, perhaps based solely on “first order” evidence, such as increased market shares. Of course, nothing in the law, the Guidelines, or sound competition policy limits the defense in that fashion.

The court should consider efficiencies regardless of the level of economic harm. The question is whether the efficiencies will outweigh that harm. As Geoff recently pointed out:

There is no economic basis for demanding more proof of claimed efficiencies than of claimed anticompetitive harms. And the Guidelines since 1997 were (ostensibly) drafted in part precisely to ensure that efficiencies were appropriately considered by the agencies (and the courts) in their enforcement decisions.

With presumptions that strongly benefit the FTC, it is clear that efficiencies are often overlooked or ignored. From 1997 to 2007, the FTC’s Bureau of Competition staff deliberated on a total of 342 efficiency claims. Of those 342 claims, only 29 were accepted by FTC staff, whereas 109 were rejected and 204 received “no decision.” The most common concerns among FTC staff were that stated efficiencies were not verifiable or were not merger-specific.

Both “concerns” come directly from the Guidelines, which require merging parties to provide significant and oftentimes impossible foresight and information to overcome the evidentiary burdens. As former FTC Chairman Tim Muris observed,

too often, the [FTC] found no cognizable efficiencies when anticompetitive effects were determined to be likely and seemed to recognize efficiency only when no adverse effects were predicted.

Thus, in situations in which the FTC believes the dominant issue is market concentration, the merging parties’ attempts to demonstrate procompetitive efficiencies are dismissed out of hand.

The FTC’s efficiency arguments are also not grounded in legal precedent. Courts have recognized that asymmetric burdens are inconsistent with the intent of the Act. As then D.C. Circuit Judge Clarence Thomas observed,

[i]mposing a heavy burden of production on a defendant would be particularly anomalous where … it is easy to establish a prima facie case.

Courts have recognized that efficiencies can be “speculative” or be “based on a prediction backed by sound business judgment.” And in Sherman Act cases the law places the burden on the plaintiff to demonstrate that there are less restrictive alternatives to a potentially illegal restraint – unlike the requirement applied by the FTC that the defendant prove there are no less restrictive alternatives to a merger to achieve efficiencies.

The FTC and the courts should credit efficiencies where there is a reasonable likelihood that procompetitive effects will take place post-merger. Furthermore, the courts should not look at efficiencies in a vacuum. In healthcare, policies and laws, such as the effects of the ACA, must be taken into account. The ACA promotes coordination among providers and incentivizes entities that can move away from fee-for-service payment. In the past, courts that have considered the role of health policy in merger analysis have found that efficiencies leading to integrated medicine and “better medical care” are relevant.

In St. Luke’s the court observed that “the existing law seemed to hinder innovation and resist creative solutions” and that “flexibility and experimentation” are “two virtues that are not emphasized in the antitrust law.” Undoubtedly, the current approach to efficiencies makes it nearly impossible for providers to demonstrate their efficiencies.

As Commissioner Wright has observed, these asymmetric evidentiary burdens

do not make economic sense and are inconsistent with a merger policy designed to promote consumer welfare.

In the context of St. Luke’s and other healthcare provider mergers, appropriate efficiency analysis is a keystone of determining a merger’s total effects. Dismissing efficiencies on the basis of a rigid, incorrect procedural structure is not aligned with current economic thinking or with a sound approach to incorporating competition analysis into the drive for healthcare reform. It is time for the FTC to set efficiency analysis in the right direction.

The free market position on telecom reform has become rather confused of late. Erstwhile conservative Senator Thune is now cosponsoring a version of Senator Rockefeller’s previously proposed video reform bill, bundled into satellite legislation (the Satellite Television Access and Viewer Rights Act or “STAVRA”) that would also include a provision dubbed “Local Choice.” Some free marketeers have defended the bill as a step in the right direction.

Although it looks as if the proposal may be losing steam this Congress, the legislation has been described as a “big and bold idea,” and it’s by no means off the menu. But it should be.

It has been said that politics makes for strange bedfellows. Indeed, people who disagree on just about everything can sometimes unite around a common perceived enemy. Take carriage disputes, for instance. Perhaps because, for some people, a day without The Bachelor is simply a day lost, an unlikely alliance of pro-regulation activists like Public Knowledge and industry stalwarts like Dish has emerged to oppose the ability of copyright holders to withhold content as part of carriage negotiations.

Senator Rockefeller’s Online Video Bill was the catalyst for the Local Choice amendments to STAVRA. Rockefeller’s bill did, well, a lot of terrible things, from imposing certain net neutrality requirements, to overturning the Supreme Court’s Aereo decision, to adding even more complications to the already Byzantine morass of video programming regulations.

But putting Senator Thune’s lipstick on Rockefeller’s pig can’t save the bill, and some of the worst problems from Senator Rockefeller’s original proposal remain.

Among other things, the new bill is designed to weaken the ability of copyright owners to negotiate with distributors, most notably by taking away their ability to withhold content during carriage disputes and by forcing TV stations to sell content on an a la carte basis.

Video distribution issues are complicated — at least under current law. But at root these are just commercial contracts and, like any contracts, they rely on a couple of fundamental principles.

First is the basic property right. The Supreme Court (at least somewhat) settled this for now (in Aereo), by protecting the right of copyright holders to be compensated for carriage of their content. With this baseline, distributors must engage in negotiations to obtain content, rather than employing technological workarounds and exploiting legal loopholes.

Second is the related ability of contracts to govern the terms of trade. A property right isn’t worth much if its owner can’t control how it is used, governed or exchanged.

Finally, and derived from these, is the issue of bargaining power. Good-faith negotiations require both sides not to act strategically by intentionally causing negotiations to break down. But if negotiations do break down, parties need to be able to protect their rights. When content owners are not able to withhold content in carriage disputes, they are put in an untenable bargaining position. This invites bad faith negotiations by distributors.

The STAVRA/Local Choice proposal would undermine the property rights and freedom of contract that bring The Bachelor to your TV, and the proposed bill does real damage by curtailing the scope of the property right in TV programming and restricting the range of contracts available for networks to license their content.

The bill would require that essentially all broadcast stations that elect retrans make their content available a la carte — thus unbundling some of the proverbial sticks that make up the traditional property right. It would also establish MVPD pass-through of each local affiliate: subscribers would pay a fee determined by the affiliate, and the station would have to be offered on an unbundled basis, without any minimum tier required – meaning an MVPD would have to offer local stations to its customers with no markup, on an a la carte basis, if the station doesn’t elect must-carry. It would also direct the FCC to open a rulemaking to determine whether broadcasters should be prohibited from withholding their content online during a dispute with an MVPD.

“Free market” supporters of the bill assert something like “if we don’t do this to stop blackouts, we won’t be able to stem the tide of regulation of broadcasters.” Presumably this would end blackouts of broadcast programming: If you’re an MVPD subscriber, and you pay the $1.40 (or whatever) for CBS, you get it, period. The broadcaster sets an annual per-subscriber rate; MVPDs pass it on and retransmit only to subscribers who opt in.

But none of this is good for consumers.

When transaction costs are positive, negotiations sometimes break down. If the original right is placed in the wrong hands, then contracting may not assure the most efficient outcome. I think it was Coase who said that.

But taking away the ability of content owners to restrict access to their content during a bargaining dispute effectively places the right to content in the hands of distributors. Obviously, this change in bargaining position will depress the value of content. Placing the rights in the hands of distributors reduces the incentive to create content in the first place; this is why the law protects copyright to begin with. But it also reduces the ability of content owners and distributors to reach innovative agreements and contractual arrangements (like certain promotional deals) that benefit consumers, distributors and content owners alike.

The mandating of a la carte licensing doesn’t benefit consumers, either. Bundling is generally pro-competitive and actually gives consumers more content than they would otherwise have. The bill’s proposal to force programmers to sell content to consumers a la carte may actually lead to higher overall prices for less content. Not much of a bargain.
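A stylized two-viewer, two-channel example (the valuations are invented purely for illustration) shows how mandated unbundling can cut against viewers:

    Viewer A values Sports at \$8 and News at \$3; Viewer B values Sports at \$3 and News at \$8.

    A la carte: the revenue-maximizing price per channel is \$8, so revenue = 2 \times 8 = \$16, and each viewer buys (and watches) only one channel.
    Bundle priced at \$10: both viewers buy, revenue = 2 \times 10 = \$20, and each viewer receives both channels with \$1 of surplus to spare.

In this toy market the bundle leaves the programmer with more revenue to fund content and leaves both viewers with more programming and more surplus; forcing the sale a la carte reverses both effects.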

There are plenty of other ways this is bad for consumers, even if it narrowly “protects” them from blackouts. For example, the bill would prohibit a network from making a deal with an MVPD that provides a discount on a bundle including carriage of both its owned broadcast stations as well as the network’s affiliated cable programming. This is not a worthwhile — or free market — trade-off; it is an ill-advised and economically indefensible attack on vertical distribution arrangements — exactly the same thing that animates many net neutrality defenders.

Just as net neutrality’s meddling in commercial arrangements between ISPs and edge providers will ensure a host of unintended consequences, so will the Rockefeller/Thune bill foreclose a host of welfare-increasing deals. In the end, in exchange for never having to go three days without CBS content, the bill will make that content more expensive, limit the range of programming offered, and lock video distribution into a prescribed business model.

Former FCC Commissioner Rob McDowell sees the same hypocritical connection between net neutrality and broadcast regulations like the Local Choice bill:

According to comments filed with the FCC by Time Warner Cable and the National Cable and Telecommunications Association, broadcasters should not be allowed to take down or withhold the content they produce and own from online distribution even if subscribers have not paid for it—as a matter of federal law. In other words, edge providers should be forced to stream their online content no matter what. Such an overreach, of course, would lay waste to the economics of the Internet. It would also violate the First Amendment’s prohibition against state-mandated, or forced, speech—the flip side of censorship.

It is possible that the cable companies figure that subjecting powerful broadcasters to anti-free speech rules will shift the political momentum in the FCC and among the public away from net neutrality. But cable’s anti-free speech arguments play right into the hands of the net-neutrality crowd. They want to place the entire Internet ecosystem, physical networks, content and apps, in the hands of federal bureaucrats.

While cable providers have generally opposed net neutrality regulation, there is, apparently, some support among them for regulations that would apply to the edge. The Rockefeller/Thune proposal is just a replay of this constraint — this time by forcing programmers to allow retransmission of broadcast content under terms set by Congress. While “what’s good for the goose is good for the gander” sounds appealing in theory, here it is simply doubling down on a terrible idea.

What it reveals most of all is that true neutrality advocates don’t want government control to be limited to ISPs — rather, progressives like Rockefeller (and apparently some conservatives, like Thune) want to subject the whole apparatus — distribution and content alike — to intrusive government oversight in order to “protect” consumers (a point Fred Campbell deftly expands upon here and here).

You can be sure that, if the GOP supports broadcast a la carte, it will pave the way for Democrats (and moderates like McCain who back a la carte) to expand anti-consumer unbundling requirements to cable next. Nearly every economic analysis has concluded that mandated a la carte pricing of cable programming would be harmful to consumers. There is no reason to think that applying it to broadcast channels would be any different.

What’s more, the logical extension of the bill is to apply unbundling to all MVPD channels and to saddle them with contract restraints, as well — and while we’re at it, why not unbundle House of Cards from Orange is the New Black? The Rockefeller bill may have started in part as an effort to “protect” OVDs, but there’ll be no limiting this camel once its nose is under the tent. Like it or not, channel unbundling is arbitrary — why not unbundle by program, episode, studio, production company, etc.?

There is simply no principled basis for the restraints in this bill, and thus there will be no limit to its reach. Indeed, “free market” defenders of the Rockefeller/Thune approach may well be supporting a bill that ultimately leads to something like compulsory, a la carte licensing of all video programming. As I noted in my testimony last year before the House Commerce Committee on the satellite video bill:

Unless we are prepared to bear the consumer harm from reduced variety, weakened competition and possibly even higher prices (and absolutely higher prices for some content), there is no economic justification for interfering in these business decisions.

So much for property rights — and so much for vibrant video programming.

That there is something wrong with the current system is evident to anyone who looks at it. As Gus Hurwitz noted in recent testimony on Rockefeller’s original bill,

The problems with the existing regulatory regime cannot be understated. It involves multiple statutes implemented by multiple agencies to govern technologies developed in the 60s, 70s, and 80s, according to policy goals from the 50s, 60s, and 70s. We are no longer living in a world where the Rube Goldberg of compulsory licenses, must carry and retransmission consent, financial interest and syndication exclusivity rules, and the panoply of Federal, state, and local regulations makes sense – yet these are the rules that govern the video industry.

While video regulation is in need of reform, this bill is not an improvement. In the short run it may ameliorate some carriage disputes, but it will do so at the expense of continued programming vibrancy and distribution innovations. The better way to effect change would be to abolish the Byzantine regulations that simultaneously attempt to place thumbs on both sides of the scale, and to rely on free market negotiations with a copyright baseline and antitrust review for actual abuses.

But STAVRA/Local Choice is about as far from that as you can get.

There is a consensus in America that we need to control health care costs and improve the delivery of health care. After a long debate on health care reform and careful scrutiny of health care markets, there seems to be agreement that the unintegrated, “siloed approach” to health care is inefficient, costly, and contrary to the goal of improving care. But some antitrust enforcers — most notably the FTC — are standing in the way.

Enlightened health care providers are responding to this consensus by entering into transactions that will lead to greater clinical and financial integration, facilitating a movement from volume-based to value-based delivery of care. And many aspects of the Affordable Care Act encourage this path to integration. Yet when the market seeks to address these critical concerns about our health care system, the FTC and some state Attorneys General take positions diametrically opposed to sound national health care policy as adopted by Congress and implemented by the Department of Health and Human Services.

To be sure, not all state antitrust enforcers stand in the way of health care reform. For example, many states, including New York, Pennsylvania and Massachusetts, seem willing to permit hospital mergers even in concentrated markets when accompanied by an agreement for continued regulation. At the same time, however, the FTC has been aggressively challenging integration, taking the stance that hospital mergers will raise prices by giving those hospitals greater leverage in negotiations.

The distance between HHS and the FTC in DC is about six blocks, but in healthcare policy they seem to be miles apart.

The FTC’s skepticism about integration is an old story. As I have discussed previously, during the last decade the agency challenged more than 30 physician collaborations even though those cases lacked any evidence that the collaborations led to higher prices. And, when physicians asked for advice on collaborations, it took the Commission on average more than 436 days to respond to those requests (about as long as it took Congress to debate and enact the Affordable Care Act).

The FTC is on a recent winning streak in challenging hospital mergers. But those were primarily simple cases with direct competition between hospitals in the same market with very high levels of concentration. The courts did not struggle long in these cases, because the competitive harm appeared straightforward.

Far more controversial is when a hospital acquires a physician practice. This type of vertical integration seems precisely what the advocates for health care reform are crying out for. The lack of integration between physicians and hospitals is at the core of the problems in health care delivery. And antitrust law is generally solicitous of these types of vertical mergers. There has not been a vertical merger successfully challenged in the courts since 1980 – the days of reruns of the TV show Dr. Kildare. And even the supposedly pro-enforcement Obama Administration has not gone to court to challenge a vertical merger, and the Obama FTC has not even secured a merger consent under a vertical theory.

The case in which the FTC has decided to “bet the house” is its challenge to St. Luke’s Health System’s acquisition of Saltzer Medical Group in Nampa, Idaho.

St. Luke’s operates the largest hospital in Boise, and Saltzer is the largest physician practice in Nampa, roughly 20 miles away. But rather than recognizing that this was a vertical affiliation designed to integrate care and to promote a transition to a system in which the provider takes the risk of overutilization, the FTC characterized the transaction as purely horizontal – no different from the merger of two hospitals. In that manner, the FTC sought to paint a picture of concentration levels designed to assure victory.

But back to the reasons why integration is essential. It is undisputed that provider integration is the key to improving American health care. Americans pay substantially more than any other industrialized nation for health care services, 17.2 percent of gross domestic product. Furthermore, these higher costs are not associated with better overall care or greater access for patients. As noted during the debate on the Affordable Care Act, the American health care system’s higher costs and lower quality and access are mostly associated with the usage of a fee-for-service system that pays for each individual medical service, and the “siloed approach” to medicine in which providers work autonomously and do not coordinate to improve patient outcomes.

In order to lower health care costs and improve care, many providers have sought to transform health care into a value-based, patient-centered approach. To institute such a health care initiative, medical staff, physicians, and hospitals must clinically integrate and align their financial incentives. Integrated providers take on financial risk, share electronic records and data, and implement quality measures in order to provide the best patient care.

The most effective means of ensuring full-scale integration is through a tight affiliation, most often achieved through a merger. Unlike contractual arrangements that are costly, time-sensitive, and complicated by an outdated health care regulatory structure, integrated affiliations ensure that entities can effectively combine and promote structural change throughout the newly formed organization.

For nearly five weeks of trial in Boise, St. Luke’s and the FTC fought over these conflicting visions of integration and health care policy. Ultimately, the court decided that the supposed Nampa primary care physician market posited by the FTC would become far more concentrated, and that the merger would substantially lessen competition for “Adult Primary Care Services” by raising prices in Nampa. As such, the district court ordered an immediate divestiture.

Rarely, however, has an antitrust court expressed such anguish at its decision. The district court readily “applauded [St. Luke’s] for its efforts to improve the delivery of healthcare.” It acknowledged the positive impact the merger would have on health care within the region. The court further noted that Saltzer had attempted to coordinate with other providers via loose affiliations but had failed to reap any benefits. Due to Saltzer’s lack of integration, Saltzer physicians had limited “the number of Medicaid or uninsured patients they could accept.”

According to the district court, the combination of St. Luke’s and Saltzer would “improve the quality of medical care.” Along with utilizing the same electronic medical records system and giving the Saltzer physicians access to sophisticated quality metrics designed to improve their practices, the parties would improve care by abandoning fee-for-service payment for all employed physicians and instituting population health management, reimbursing physicians via risk-based payment initiatives.

As noted by the district court, these stated efficiencies would improve patient outcomes “if left intact.” Along with improving coordination and quality of care, the merger, as noted in an amicus brief submitted by the International Center for Law & Economics and the Medicaid Defense Fund to the Ninth Circuit, has also already expanded access for Medicaid and uninsured patients by ensuring that previously constrained Saltzer physicians can offer services to the most needy.

The court ultimately was not persuaded by the demonstrated procompetitive benefits. Instead, the district court relied on the FTC’s misguided arguments and determined that the stated efficiencies were not “merger-specific,” because such efficiencies could potentially be achieved via other organizational structures. The district court did not analyze the potential success of substitute structures in achieving the stated efficiencies; instead, it relied on the mere existence of alternative provider structures. As a result, as ICLE and the Medicaid Defense Fund point out:

By placing the ultimate burden of proving efficiencies on the Appellants and applying a narrow, impractical view of merger specificity, the court has wrongfully denied application of known procompetitive efficiencies. In fact, under the court’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to oppose untested, theoretical less restrictive structural alternatives.

Notably, the district court’s divestiture order has been stayed by the Ninth Circuit. The appeal on the merits is expected to be heard some time this autumn. Along with reviewing the relevant geographic market and usage of divestiture as a remedy, the Ninth Circuit will also analyze the lower court’s analysis of the merger’s procompetitive efficiencies. For now, the stay order is a limited victory for underserved patients and the merging defendants. While such a ruling is not determinative of the Ninth Circuit’s decision on the merits, it does demonstrate that the merging parties have at least a reasonable possibility of success.

As one might imagine, the Ninth Circuit decision is of great importance to the antitrust and health care reform community. If the district court’s ruling is upheld, it could deter health care providers from further integrating via mergers, a precedent antithetical to the very goals of health care reform. However, if the Ninth Circuit finds the merger does not substantially lessen competition, then procompetitive vertical integration is less likely to be derailed by misapplication of the antitrust laws. The importance and impact of such a decision on American patients cannot be overstated.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard-setting organizations (SSOs), what are known as standard-essential patents (SEPs), and whether the nature of the contractual commitment that attaches to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

Microsoft wants you to believe that Google’s business practices stifle competition and harm consumers. Again.

The latest volley in its tiresome and ironic campaign to bludgeon Google with the same regulatory club once used against Microsoft itself is the company’s effort to foment an Android-related antitrust case in Europe.

In a recent polemic, Microsoft consultant (and business school professor) Ben Edelman denounces Google for requiring that, if device manufacturers want to pre-install key Google apps on Android devices, they “must install all the apps Google specifies, with the prominence Google requires, including setting these apps as defaults where Google instructs.” Edelman trots out gasp-worthy “secret” licensing agreements that he claims support his allegation (more on this later).

Similarly, a recent Wall Street Journal article, “Android’s ‘Open’ System Has Limits,” cites Edelman’s claim that limits on the licensing of Google’s proprietary apps mean that the Android operating system isn’t truly open source and comes with “strings attached.”

In fact, along with the Microsoft-funded trade organization FairSearch, Edelman has gone so far as to charge that this “tying” constitutes an antitrust violation. It is this claim that Microsoft and a network of proxies brought to the Commission when their efforts to manufacture a search-neutrality-based competition case against Google failed.

But before getting too caught up in the latest round of anti-Google hysteria, it’s worth noting that the Federal Trade Commission has already reviewed these claims. After a thorough, two-year inquiry, the FTC found the antitrust arguments against Google to be without merit. The South Korea Fair Trade Commission conducted its own two-year investigation into Google’s Android business practices and dismissed the claims before it as meritless, as well.

Taking on Edelman and FairSearch with an exhaustive scholarly analysis, German law professor Torsten Koerber recently assessed the nature of competition among mobile operating systems and concluded that:

(T)he (EU) Fairsearch complaint ultimately does not aim to protect competition or consumers, as it pretends to. It rather strives to shelter Microsoft from competition by abusing competition law to attack Google’s business model and subvert competition.

It’s time to take a step back and consider the real issues at play.

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.” He writes, “I see no way to reconcile the MADA restrictions with [Android openness].”

Conveniently, however, Edelman neglects to cite to Section 2.6 of the MADA:

The parties will create an open environment for the Devices by making all Android Products and Android Application Programming Interfaces available and open on the Devices and will take no action to limit or restrict the Android platform.

Professor Koerber’s analysis provides a straightforward explanation of the relationship between Android and its OEM licensees:

Google offers Android to OEMs on a royalty-free basis. The licensees are free to download, distribute and even modify the Android code as they like. OEMs can create mobile devices that run “pure” Android…or they can apply their own user interfaces (UI) and thereby hide most of the underlying Android system (e.g. Samsung’s “TouchWiz” or HTC’s “Sense”). OEMs make ample use of this option.

The truth is that the Android operating system remains, as ever, definitively open source — but Android’s openness isn’t really what the fuss is about. In this case, the confusion (or obfuscation) stems from the casual confounding of Google Apps with the Android Operating System. As we’ll see, they aren’t the same thing.

Consider Amazon, which pre-loads no Google applications at all on its Kindle Fire and Fire Phone. Amazon’s version of Android uses Microsoft’s Bing as the default search engine, Nokia provides mapping services, and the app store is Amazon’s own.

Still, Microsoft’s apologists continue to claim that Android licensees can’t choose to opt out of Google’s applications suite — even though, according to a new report from ABI Research, 20 percent of smartphones shipped between May and July 2014 were based on a “Google-less” version of the Android OS. And that number is consistently increasing: Analysts predict that by 2015, 30 percent of Android phones won’t access Google Services.

It’s true that equipment manufacturers who choose the Android operating system have the option to include the suite of integrated, proprietary Google apps and services licensed (royalty-free) under the name Google Mobile Services (GMS). GMS includes Google Search, Maps, Calendar, YouTube and other apps that together define the “Google Android experience” that users know and love.

But Google Android is far from the only Android experience.

Even if a manufacturer chooses to license Google’s apps suite, Google’s terms are not exclusive. Handset makers are free to install competing applications, including other search engines, map applications or app stores.

Although Google requires that Google Search be made easily accessible (hardly a bad thing for consumers, as it is Google Search that finances the development and maintenance of all of the other (free) apps from which Google otherwise earns little to no revenue), OEMs and users alike can (and do) easily install and access other search engines in numerous ways. As Professor Koerber notes:

The standard MADA does not entail any exclusivity for Google Search nor does it mandate a search default for the web browser.

Regardless, integrating key Google apps (like Google Search and YouTube) with other apps the company offers (like Gmail and Google+) is an antitrust problem only if it significantly forecloses competitors from these apps’ markets compared to a world without integrated Google apps, and without pro-competitive justification. Neither is true, despite the unsubstantiated claims to the contrary from Edelman, FairSearch and others.

Consumers and developers expect and demand consistency across devices so they know what they’re getting and don’t have to re-learn basic functions or program multiple versions of the same application. Indeed, Apple’s devices are popular in part because Apple’s closed iOS provides a predictable, seamless experience for users and developers.

But making Android competitive with its tightly controlled competitors requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

Unlike Android, Apple prohibits modifications of its operating system by downstream partners and users, and completely controls the pre-installation of apps on iOS devices. It deeply integrates applications into iOS, including Apple Maps, iTunes, Siri, Safari, its App Store and others. Microsoft has copied Apple’s model to a large degree, hard-coding its own applications (including Bing, Windows Store, Skype, Internet Explorer, Bing Maps and Office) into the Windows Phone operating system.

In the service of creating and maintaining a competitive platform, each of these closed OS’s bakes into its operating system significant limitations on which third-party apps can be installed and what they can (and can’t) do. For example, neither platform permits installation of a third-party app store, and neither can be significantly customized. Apple’s iOS also prohibits users from changing default applications — although the soon-to-be released iOS 8 appears to be somewhat more flexible than previous versions.

In addition to pre-installing a raft of their own apps and limiting installation of other apps, both Apple and Microsoft enable greater functionality for their own apps than they do for the third-party apps they allow.

For example, Apple doesn’t make available for other browsers (like Google’s Chrome) all the JavaScript functionality that it does for Safari, and it requires other browsers to use iOS Webkit instead of their own web engines. As a result there are things that Chrome can’t do on iOS that Safari and only Safari can do, and Chrome itself is hamstrung in implementing its own software on iOS. This approach has led Mozilla to refuse to offer its popular Firefox browser for iOS devices (while it has no such reluctance about offering it on Android).

On Windows Phone, meanwhile, Bing is integrated into the OS and can’t be removed. Only in markets where Bing is not supported (and with Microsoft’s prior approval) can OEMs change the default search app from Bing. While it was once possible to change the default search engine that opens in Internet Explorer (although never from the hardware search button), the Windows 8.1 Hardware Development Notes, updated July 22, 2014, state:

By default, the only search provider included on the phone is Bing. The search provider used in the browser is always the same as the one launched by the hardware search button.

Both Apple iOS and Windows Phone tightly control the ability to use non-default apps to open intents sent from other apps and, in Windows especially, often these linkages can’t be changed.

As a result of these sorts of policies, maintaining the integrity — and thus the brand — of the platform is (relatively) easy for closed systems. While plenty of browsers are perfectly capable of answering an intent to open a web page, Windows Phone can better ensure a consistent and reliable experience by forcing Internet Explorer to handle the operation.

By comparison, Android, with or without Google Mobile Services, is dramatically more open, more flexible and customizable, and more amenable to third-party competition. Even the APIs that it uses to integrate its apps are open to all developers, ensuring that there is nothing that Google apps are able to do that non-Google apps with the same functionality are prevented from doing.

In other words, not just Gmail, but any email app is permitted to handle requests from any other app to send emails; not just Google Calendar but any calendar app is permitted to handle requests from any other app to accept invitations.
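To make that concrete, here is a minimal Kotlin sketch of the mechanism (the email address and strings are hypothetical, and the snippet is illustrative rather than anything drawn from the MADA or Google's own code): an app fires an implicit intent, and Android hands the request to whichever installed mail client the user prefers, Google's or anyone else's.

    import android.app.Activity
    import android.content.Intent
    import android.net.Uri

    // Ask the system to compose an email. Any installed mail app that has
    // registered for ACTION_SENDTO with a "mailto:" URI can answer this
    // request; nothing in the call ties it to Gmail.
    fun composeEmail(activity: Activity) {
        val intent = Intent(Intent.ACTION_SENDTO).apply {
            data = Uri.parse("mailto:") // restricts matches to email apps
            putExtra(Intent.EXTRA_EMAIL, arrayOf("viewer@example.com")) // hypothetical recipient
            putExtra(Intent.EXTRA_SUBJECT, "Hello from any mail client") // illustrative subject
        }
        // Launch only if at least one installed app can handle the intent.
        if (intent.resolveActivity(activity.packageManager) != null) {
            activity.startActivity(intent)
        }
    }

The same pattern applies to calendar invitations or any other action: the sending app names the action, and the OS resolves it against whatever handlers the user has chosen to install.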

In no small part because of this openness and flexibility, current reports indicate that Android OS runs 85 percent of mobile devices worldwide. But it is OEM giant Samsung, not Google, that dominates the market, with a 65 percent share of all Android devices. Competition is rife, however, especially in emerging markets. In fact, according to one report, “Chinese and Indian vendors accounted for the majority of smartphone shipments for the first time with a 51% share” in 2Q 2014.

As he has not been in the past, Edelman is at least nominally circumspect in his unsubstantiated legal conclusions about Android’s anticompetitive effect:

Applicable antitrust law can be complicated: Some ties yield useful efficiencies, and not all ties reduce welfare.

Given Edelman’s connections to Microsoft and the realities of the market he is discussing, it could hardly be otherwise. If every integration were an antitrust violation, every element of every operating system — including Apple’s iOS as well as every variant of Microsoft’s Windows — should arguably be the subject of a government investigation.

In truth, Google has done nothing more than ensure that its own suite of apps functions on top of Android to maintain what Google sees as seamless interconnectivity, a high-quality experience for users, and consistency for application developers — while still allowing handset manufacturers room to innovate in a way that is impossible on other platforms. This is the very definition of pro-competitive, and ultimately this is what allows the platform as a whole to compete against its far more vertically integrated alternatives.

Which brings us back to Microsoft. At the conclusion of the FTC investigation in January 2013, a GigaOm exposé on the case had this to say:

Critics who say Google is too powerful have nagged the government for years to regulate the company’s search listings. But today the critics came up dry….

The biggest loser is Microsoft, which funded a long-running cloak-and-dagger lobbying campaign to convince the public and government that its arch-enemy had to be regulated….

The FTC is also a loser because it ran a high profile two-year investigation but came up dry.

EU regulators, take note.

Anyone interested in antitrust enforcement policy (and what TOTM reader isn’t?) should read FTC Commissioner Josh Wright’s interview in the latest issue of The Antitrust Source.  The extensive (22 page!) interview covers a number of topics and demonstrates the positive influence Commissioner Wright is having on antitrust enforcement and competition policy in general.

Commissioner Wright’s consistent concern with minimizing error costs will come as no surprise to TOTM regulars.  Here are a few related themes emphasized in the interview:

A commitment to evidence-based antitrust.

Asked about his prior writings on the superiority of “evidence-based” antitrust analysis, Commissioner Wright explains the concept as follows:

The central idea is to wherever possible shift away from casual empiricism and intuitions as the basis for decision-making and instead commit seriously to the decision-theoretic framework applied to minimize the costs of erroneous enforcement and policy decisions and powered by the best available theory and evidence.

This means, of course, that discrete enforcement decisions – should we bring a challenge or not? – should be based on the best available empirical evidence about the effects of the practice or transaction at issue. But it also encompasses a commitment to design institutions and structure liability rules on the basis of the best available evidence concerning a practice’s tendency to occasion procompetitive or anticompetitive effects. As Wright explains:

Evidence-based antitrust encompasses a commitment to using the best available economic theory and empirical evidence to make [a discrete enforcement] decision; but it also stands for a much broader commitment to structuring antitrust enforcement and policy decision-making. For example, evidence-based antitrust is a commitment that would require an enforcement agency seeking to design its policy with respect to a particular set of business arrangements – loyalty discounts, for example – to rely upon the existing theory and empirical evidence in calibrating that policy.
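
To make the decision-theoretic point a bit more concrete (the notation below is my own gloss, not Wright’s), the error-cost framework he invokes asks the enforcer to choose the rule or enforcement decision that minimizes the expected sum of false-positive costs, false-negative costs, and administrative costs:

\[
R^{*} \;=\; \arg\min_{R}\;\Big[\, p_{I}(R)\,C_{I} \;+\; p_{II}(R)\,C_{II} \;+\; C_{A}(R) \,\Big]
\]

where \(p_{I}(R)\) and \(p_{II}(R)\) are the probabilities that rule \(R\) condemns procompetitive conduct or excuses anticompetitive conduct, and \(C_{I}\), \(C_{II}\), and \(C_{A}(R)\) are the social costs of those errors and of administering the rule. “Evidence-based antitrust” simply insists that those probabilities be informed by the best available theory and empirical evidence rather than by intuition or casual empiricism.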

Of course, if the FTC is committed to evidence-based antitrust policy, then it will utilize its institutional advantages to enhance the empirical record on practices whose effects are unclear. Thus, Commissioner Wright lauds the FTC’s study of – rather than preemptive action against – patent assertion entities, calling it “precisely the type of activity that the FTC is well-suited to do.”

A commitment to evidence-based antitrust also means that the agency shouldn’t get ahead of itself in restricting conduct with known consumer benefits and only theoretical (i.e., not empirically established) harms. Accordingly, Commissioner Wright says he “divorced [him]self from a number of recommendations” in the FTC’s recent data broker report:

For the majority of these other recommendations [beyond basic disclosure requirements], I simply do not think that we have any evidence that the benefits from Congress adopting those recommendations would exceed the costs. … I would need to have some confidence based on evidence, especially about an area where evidence is scarce. I’m not comfortable relying on my priors about these activities, especially when confronted by something new that could be beneficial. … The danger would be that we recommend actions that either chill some of the beneficial activity the data brokers engage in or just impose compliance costs that we all recognize get passed on to consumers.

Similarly, Commissioner Wright has opposed “fencing-in” relief in consent decrees absent evidence that the practice being restricted threatens more harm than good. As an example, he points to the consent decree in the Graco case, which we discussed here:

Graco employed exclusive dealing contracts, but we did not allege that the exclusive dealing contracts violated the antitrust laws or Section 5. However, as fencing-in relief for the consummated merger, the consent included prohibitions on exclusive dealing and loyalty discounts despite there being no evidence that the firm had employed either of those tactics to anticompetitive ends. When an FTC settlement bans a form of discounting as standard injunctive relief in a merger case without convincing evidence that the discounts themselves were a competitive problem, it raises significant concerns.

A commitment to clear enforcement principles.

At several points throughout the interview, Commissioner Wright emphasizes the value of articulating clear principles that can guide business planners’ behavior. But he’s not calling for a bunch of ex ante liability rules. The old per se rule against minimum resale price maintenance, for example, was clear – and bad! Embracing overly broad liability rules for the sake of clarity is inconsistent with the evidence-based, decision-theoretic approach Commissioner Wright prefers. The clarity he is advocating, then, is clarity on broad principles that will govern enforcement decisions.  He thus reiterates his call for a formal policy statement defining the Commission’s authority to prosecute unfair methods of competition under Section 5 of the FTC Act.  (TOTM hosted a blog symposium on that topic last summer.)  Wright also suggests that the Commission should “synthesize and offer high-level principles that would provide additional guidance” on how the Commission will use its Section 5 authority to address data security matters.

Extension, not extraction, should be the touchstone for Section 2 liability.

When asked about his prior criticism of FTC actions based on alleged violations of licensing commitments to standards development organizations (e.g., N-Data), Commissioner Wright emphasized that there should be no Section 2 liability in such cases, or similar cases involving alleged patent hold-up, absent an extension of monopoly power. In other words, it is not enough to show that the alleged bad act resulted in higher prices; it must also have led to the creation, maintenance, or enhancement of monopoly power.  Wright explains:

The logic is relatively straightforward. The antitrust laws do not apply to all increases of price. The Sherman Act is not a price regulation statute. The antitrust laws govern the competitive process. The Supreme Court said in Trinko that a lawful monopolist is allowed to charge the monopoly price. In NYNEX, the Supreme Court held that even if that monopolist raises its price through bad conduct, so long as that bad conduct does not harm the competitive process, it does not violate the antitrust laws. The bad conduct may violate other laws. It may be a fraud problem, it might violate regulatory rules, it may violate all sorts of other areas of law. In the patent context, it might give rise to doctrines like equitable estoppel. But it is not an antitrust problem; antitrust cannot be the hammer for each and every one of the nails that implicate price changes.

In my view, the appropriate way to deal with patent holdup cases is to require what we require for all Section 2 cases. We do not need special antitrust rules for patent holdup; much less for patent assertion entities. The rule is simply that the plaintiff must demonstrate that the conduct results in the acquisition of market power, not merely the ability to extract existing monopoly rents. … That distinction between extracting lawfully acquired and existing monopoly rents and acquiring by unlawful conduct additional monopoly power is one that has run through Section 2 jurisprudence for quite some time.

In light of these remarks (which remind me of this excellent piece by Dennis Carlton and Ken Heyer), it is not surprising that Commissioner Wright also hopes and believes that the Roberts Court will overrule Jefferson Parish’s quasi-per se rule against tying. As Einer Elhauge has observed, that rule might make sense if the mere extraction of monopoly profits (via metering price discrimination or Loew’s-type bundling) were an “anticompetitive” effect of tying.  If, however, anticompetitive harm requires extension of monopoly power, as Wright contends, then a tie-in cannot be anticompetitive unless it results in substantial foreclosure of the tied product market, a necessary prerequisite for a tie-in to enhance market power in the tied or tying markets.  That means tying should not be evaluated under the quasi-per se rule but should instead be subject to a rule of reason similar to that governing exclusive dealing (i.e., some sort of “qualitative foreclosure” approach).  (I explain this point in great detail here.)

Optimal does not mean perfect.

Commissioner Wright makes this point in response to a question about whether the government should encourage “standards development organizations to provide greater clarity to their intellectual property policies to reduce the likelihood of holdup or other concerns.”  While Wright acknowledges that “more complete, more precise contracts” could limit the problem of patent holdup, he observes that there is a cost to greater precision and completeness and that the parties to these contracts already have an incentive to put the optimal amount of effort into minimizing the cost of holdup. He explains:

[M]inimizing the probability of holdup does not mean that it is zero. Holdup can happen. It will happen. It will be observed in the wild from time to time, and there is again an important question about whether antitrust has any role to play there. My answer to that question is yes in the case of deception that results in market power. Otherwise, we ought to leave the governance of what amount to contracts between SSO and their members to contract law and in some cases to patent doctrines like equitable estoppel that can be helpful in governing holdup.

…[I]t is quite an odd thing for an agency to be going out and giving advice to sophisticated parties on how to design their contracts. Perhaps I would be more comfortable if there were convincing and systematic evidence that the contracts were the result of market failure. But there is not such evidence.

Consumer welfare is the touchstone.

When asked whether “there [are] circumstances where non-competition concerns, such as privacy, should play a role in merger analysis,” Commissioner Wright is unwavering:

No. I think that there is a great danger when we allow competition law to be unmoored from its relatively narrow focus upon consumer welfare. It is the connection between the law and consumer welfare that allows antitrust to harness the power of economic theory and empirical methodologies. All of the gains that antitrust law and policy as a body have earned over the past fifty or sixty years have been from becoming more closely tethered to industrial organization economics, more closely integrating economic thought in the law, and in agency discretion and decision-making. I think that the tight link between the consumer welfare standard and antitrust law is what has allowed such remarkable improvements in what effectively amounts to a body of common law.

Calls to incorporate non-economic concerns into antitrust analysis, I think, threaten to undo some, if not all, of that progress. Antitrust law and enforcement in the United States has some experience with trying to incorporate various non-economic concerns, including the welfare of small dealers and worthy men and so forth. The results of the experiment were not good for consumers and did not generate sound antitrust policy. It is widely understood and recognized why that is the case.

***

Those are just some highlights. There’s lots more in the interview—in particular, some good stuff on the role of efficiencies in FTC investigations, the diverging standards for the FTC and DOJ to obtain injunctions against unconsummated mergers, and the proper way to analyze reverse payment settlements.  Do read the whole thing.  If you’re like me, it may make you feel a little more affinity for Mitch McConnell.

In a June 12, 2014 TOTM post, I discussed the private antitrust challenge to NCAA rules that barred NCAA member universities from compensating athletes for use of their images and names in television broadcasts and video games.

On August 8 a federal district judge held that the NCAA had violated the antitrust laws and enjoined the NCAA from enforcing those rules, effective 2016.  The judge’s 99-page opinion, which discusses NCAA price-fixing agreements, is worth a read.  It confronts and debunks the NCAA’s efficiency justifications for their cartel-like restrictions on athletic scholarships.  If the decision withstands appeal, it will allow NCAA member schools to offer prospective football and basketball recruits trust funds that could be accessed after graduation (subject to certain limitations), granting those athletes a share of the billions of dollars in revenues they generate for NCAA member universities.

A large number of NCAA rules undoubtedly generate substantial efficiencies that benefit NCAA member institutions, college sports fans, and college athletes.  But the beneficial nature of those rules does not justify separate monopsony price-fixing arrangements that disadvantage athletic recruits – arrangements that cannot legitimately be tied to the NCAA’s welfare-enhancing interest in promoting intercollegiate athletics.  Stay tuned.

A study released today by the Heritage Foundation (authored by Christopher M. Pope) succinctly describes the inherently anticompetitive nature of Obamacare, which will tend to inflate prices, not reduce costs:

“The growth of monopoly power among health care providers bears much responsibility for driving up the cost of health care over recent years. By mandating that general hospitals provide uncompensated care, state and federal legislators have given them cause to insist on regulations and discriminatory subsidies to protect them from cheaper competitors. Instead of freeing these markets to allow the provision of care by the most efficient organizations, the Affordable Care Act endorses these anti-competitive arrangements. It extends the premium paid for treatment in general hospitals, employs the purchasing power of the Medicare program to encourage the consolidation of medical practices, and reforms insurance law to eliminate many of the margins for competition between carriers. Institutions sheltered from competition tend to accumulate unnecessary costs over time. In the absence of pro-competitive reforms, higher spending under Obamacare is likely to only further inflate prices faced by those seeking affordable care.”

In short, as the study demonstrates, “[t]he shackling of competition is an essential feature of Obamacare, not a bug.” Accordingly, Obamacare’s enactors (Congress) and implementers (especially HHS) could benefit from a dose of competition advocacy aimed at reforming this welfare-destructive regulatory system. The study highlights particular worthwhile reforms:

“■Refuse to prop up monopoly power. Government regulation and spending should not shield dominant providers from competitors. Monopolies are irresponsive to the needs of patients and payers. They are an unreliable method of subsidizing care that tends to both lower quality and inflate costs.

■Repeal certificate-of-need laws. Legislative constraints on the construction of additional medical capacity should be repealed. Innovative providers should be allowed to expand or establish new facilities that challenge incumbents with lower prices and better quality.

■Subsidize patients, not providers. Public policies should be provider-neutral. Payments should reimburse providers for providing care, period. In particular, publicly funded programs should not operate payment systems designed to keep certain providers in business regardless of the quality, volume, or cost of the treatments they provide. If some individuals are unable to pay for their care, policymakers should subsidize such needy individuals directly.

■Allow patients to shop around. Wherever possible governments and employers should put patients in control of the funds expended on their care, and permit them to keep any savings they obtain from seeking out more efficient providers.

■Repeal Obamacare and its mandates. Forcing individuals to purchase standardized health insurance establishes a captive market, making it easier for providers, insurers, and regulators to degrade services and inflate costs with impunity. Repealing Obamacare and its purchase mandates is essential to creating a market in which suppliers have the flexibility to respond to consumer demands for better value for their money.”

Perhaps the Federal Trade Commission, which has a substantial interest in promoting procompetitive health care policies, might consider holding a workshop exploring the merits of these reform proposals, as part of its ongoing initiatives in the health care area. (Commendably, and consistent with one of the Heritage study’s key recommendations, the FTC already has advocated in favor of the repeal of certificate-of-need laws.)

The Federal Trade Commission’s recent enforcement actions against Amazon and Apple raise important questions about the FTC’s consumer protection practices, especially its use of economics. How does the Commission weigh the costs and benefits of its enforcement decisions? How does the agency employ economic analysis in digital consumer protection cases generally?

Join the International Center for Law and Economics and TechFreedom on Thursday, July 31 at the Woolly Mammoth Theatre Company for a lunch and panel discussion on these important issues, featuring FTC Commissioner Joshua Wright, Director of the FTC’s Bureau of Economics Martin Gaynor, and several former FTC officials. RSVP here.

Commissioner Wright will present a keynote address discussing his dissent in Apple and his approach to applying economics in consumer protection cases generally.

Geoffrey Manne, Executive Director of ICLE, will briefly discuss his recent paper on the role of economics in the FTC’s consumer protection enforcement. Berin Szoka, TechFreedom President, will moderate a panel discussion featuring:

  • Martin Gaynor, Director, FTC Bureau of Economics
  • David Balto, Fmr. Deputy Assistant Director for Policy & Coordination, FTC Bureau of Competition
  • Howard Beales, Fmr. Director, FTC Bureau of Consumer Protection
  • James Cooper, Fmr. Acting Director & Fmr. Deputy Director, FTC Office of Policy Planning
  • Pauline Ippolito, Fmr. Acting Director & Fmr. Deputy Director, FTC Bureau of Economics

Background

The FTC recently issued a complaint and consent order against Apple, alleging its in-app purchasing design doesn’t meet the Commission’s standards of fairness. The action and resulting settlement drew a forceful dissent from Commissioner Wright, and sparked a discussion among the Commissioners about balancing economic harms and benefits in Section 5 unfairness jurisprudence. More recently, the FTC brought a similar action against Amazon, which is now pending in federal district court because Amazon refused to settle.

Event Info

The “FTC: Technology and Reform” project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology. The Project’s initial report, released in December 2013, identified critical questions facing the agency, Congress, and the courts about the FTC’s future, and proposed a framework for addressing them.

The event will be live streamed here beginning at 12:15pm. Join the conversation on Twitter with the #FTCReform hashtag.

When:

Thursday, July 31
11:45 am – 12:15 pm — Lunch and registration
12:15 pm – 2:00 pm — Keynote address, paper presentation & panel discussion

Where:

Woolly Mammoth Theatre Company – Rehearsal Hall
641 D St NW
Washington, DC 20004

Questions? – Email mail@techfreedom.org. RSVP here.

See ICLE’s and TechFreedom’s other work on FTC reform, including:

  • Geoffrey Manne’s Congressional testimony on the FTC@100
  • Op-ed by Berin Szoka and Geoffrey Manne, “The Second Century of the Federal Trade Commission”
  • Two posts by Geoffrey Manne on the FTC’s Amazon Complaint, here and here.

About The International Center for Law and Economics:

The International Center for Law and Economics is a non-profit, non-partisan research center aimed at fostering rigorous policy analysis and evidence-based regulation.

About TechFreedom:

TechFreedom is a non-profit, non-partisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.