Section 5 of the Federal Trade Commission Act proclaims that “[u]nfair methods of competition . . . are hereby declared unlawful.” The FTC has exclusive authority to enforce that provision and uses it to prosecute Sherman Act violations. The Commission also uses the provision to prosecute conduct that doesn’t violate the Sherman Act but is, in the Commission’s view, an “unfair method of competition.”

That’s somewhat troubling, for “unfairness” is largely in the eye of the beholder. One FTC Commissioner recently defined an unfair method of competition as an action that is “‘collusive, coercive, predatory, restrictive, or deceitful,’ or otherwise oppressive, [where the actor lacks] a justification grounded in its legitimate, independent self-interest.” Some years ago, a commissioner observed that a “standalone” Section 5 action (i.e., one not premised on conduct that would violate the Sherman Act) could be used to police “social and environmental harms produced as unwelcome by-products of the marketplace: resource depletion, energy waste, environmental contamination, worker alienation, the psychological and social consequences of producer-stimulated demands.” While it’s unlikely that any FTC Commissioner would go that far today, the fact remains that those subject to Section 5 really don’t know what it forbids.  And that situation flies in the face of the Rule of Law, which at a minimum requires that those in danger of state punishment know in advance what they’re not allowed to do.

In light of this fundamental Rule of Law problem (not to mention the detrimental chilling effect vague competition rules create), many within the antitrust community have called for the FTC to provide guidance on the scope of its “unfair methods of competition” authority. Most notably, two members of the five-member FTC—Commissioners Maureen Ohlhausen and Josh Wright—have publicly called for the Commission to promulgate guidelines. So have former FTC Chairman Bill Kovacic, a number of leading practitioners, and a great many antitrust scholars.

Unfortunately, FTC Chairwoman Edith Ramirez has opposed the promulgation of Section 5 guidelines. She says she instead “favor[s] the common law approach, which has been a mainstay of American antitrust policy since the turn of the twentieth century.” Chairwoman Ramirez observes that the common law method has managed to distill workable liability rules from broad prohibitions in the primary antitrust statutes. Section 1 of the Sherman Act, for example, provides that “[e]very contract, combination … or conspiracy, in restraint of trade … is declared to be illegal.” Section 2 prohibits actions to “monopolize, or attempt to monopolize … any part of … trade.” Clayton Act Section 7 forbids any merger whose effect “may be substantially to lessen competition, or tend to create a monopoly.” Just as the common law transformed these vague provisions into fairly clear liability rules, the Chairwoman says, it can be used to provide adequate guidance on Section 5.

The problem is, there is no Section 5 common law. As Commissioner Wright and his attorney-advisor Jan Rybnicek explain in a new paper, development of a common law—which concededly may be preferable to a prescriptive statutory approach, given its flexibility, ability to evolve with new learning, and sensitivity to time- and place-specific factors—requires certain conditions that do not exist in the Section 5 context.

The common law develops and evolves in a salutary direction because (1) large numbers of litigants do their best to persuade adjudicators of the superiority of their position; (2) the closest cases—those requiring the adjudicator to make fine distinctions—get appealed and reported; (3) the adjudicators publish opinions that set forth all relevant facts, the arguments of the parties, and why one side prevailed over the other; (4) commentators criticize published opinions that are unsound or rely on welfare-reducing rules; (5) adjudicators typically follow past precedents, tweaking (or occasionally overruling) them when they have been undermined; and (6) future parties rely on past decisions when planning their affairs.

Section 5 “adjudication,” such as it is, doesn’t look anything like this. Because the Commission has exclusive authority to bring standalone Section 5 actions, it alone picks the disputes that could form the basis of any common law. It then acts as both prosecutor and judge in the administrative action that follows. Not surprisingly, defendants, who cannot know the contours of a prohibition that will change with the composition of the Commission and who face an inherently biased tribunal, usually settle quickly. After all, they are, in Commissioner Wright’s words, both “shooting at a moving target and have the chips stacked against them.” As a result, we end up with very few disputes, and even those are not vigorously litigated.

Moreover, because nearly all standalone Section 5 actions result in settlements, we almost never end up with a reasoned opinion from an adjudicator explaining why she did or did not find liability on the facts at hand and why she rejected the losing side’s arguments. These sorts of opinions are absolutely crucial for the development of the common law. Chairwoman Ramirez says litigants can glean principles from other administrative documents like complaints and consent agreements, but those documents can’t substitute for a reasoned opinion that parses arguments and says which work, which don’t, and why. On top of all this, the FTC doesn’t even treat its own enforcement decisions as precedent! How on earth could the Commission’s body of enforcement decisions guide decision-making when each could well be a one-off?

I’m a huge fan of the common law. It generally accommodates the Hayekian “knowledge problem” far better than inflexible, top-down statutes. But it requires both inputs—lots of vigorously litigated disputes—and outputs—reasoned opinions that are recognized as presumptively binding. In the Section 5 context, we’re short on both. It’s time for guidelines.

PayPal co-founder Peter Thiel has a terrific essay in the Review section of today’s Wall Street Journal.  The essay, Competition Is for Losers, is adapted from Mr. Thiel’s soon-to-be-released book, Zero to One: Notes on Startups, or How to Build the Future.  Based on the title of the book, I assume it is primarily a how-to guide for entrepreneurs.  But if the rest of the book is anything like the essay in today’s Journal, it will also offer lots of guidance to policy makers–antitrust officials in particular.

We antitrusters usually begin with the assumption that monopoly is bad and perfect competition is good. That’s the starting point for most antitrust courses: the professor lays out the model of perfect competition, points to all the wealth it creates and how that wealth is distributed (more to consumers than to producers), and contrasts it to the monopoly pricing model, with its steep marginal revenue curve, hideous “deadweight loss” triangle, and unseemly redistribution of surplus from consumers to producers. Which is better, kids?  Why, perfect competition, of course!
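For anyone who wants the blackboard arithmetic behind that first lecture, here is a minimal sketch of the standard comparison using a linear demand curve. The numbers are purely illustrative assumptions (chosen for round results), not drawn from any real market:

```python
# Textbook linear model: inverse demand P = a - b*Q, constant marginal cost c.
# All parameter values are illustrative assumptions.
a, b, c = 100.0, 1.0, 20.0

# Perfect competition: price is driven down to marginal cost.
q_comp = (a - c) / b                      # 80 units sold
cs_comp = 0.5 * (a - c) * q_comp          # consumer surplus = 3200

# Monopoly: output where marginal revenue (a - 2bQ) equals marginal cost.
q_mono = (a - c) / (2 * b)                # 40 units sold
p_mono = a - b * q_mono                   # price = 60
cs_mono = 0.5 * (a - p_mono) * q_mono     # consumer surplus = 800
ps_mono = (p_mono - c) * q_mono           # producer surplus = 1600

# The "hideous" deadweight-loss triangle: surplus that simply vanishes.
dwl = cs_comp - (cs_mono + ps_mono)       # 800

print(cs_comp, cs_mono, ps_mono, dwl)     # 3200.0 800.0 1600.0 800.0
```

In this stylized example, monopoly pricing shifts 1,600 units of surplus from consumers to the producer and destroys another 800 outright. That static triangle is precisely what Thiel argues gets swamped by dynamic gains in a world where things change.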

Mr. Thiel makes the excellent and oft-neglected point that monopoly power is not necessarily a bad thing. First, monopolists can do certain good things that perfect competitors can’t do:

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google’s motto–“Don’t be evil”–is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence.  In business, money is either an important thing or it is everything. Monopolists can think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

Fair enough, Thiel. But what about consumers? That model we learned shows us that they’re worse off under monopoly.  And what about the deadweight loss triangle–don’t forget about that ugly thing! 

So a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsize profits come at the expense of the rest of society? Actually, yes: Profits come out of customers’ wallets, and monopolies deserve their bad reputations–but only in a world where nothing changes.

Wait a minute, Thiel. Why do you think things are different when we inject “change” into the analysis?

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: Deeds are shuffled around from player to player, but the board never changes. There is no way to win by inventing a better kind of real estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: We can invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

Even the government knows this: That is why one of the departments works hard to create monopolies (by granting patents to new inventions) even though another part hunts them down (by prosecuting antitrust cases). It is possible to question whether anyone should really be rewarded a monopoly simply for having been the first to think of something like a mobile software design. But something like Apple’s monopoly profits from designing, producing and marketing the iPhone were clearly the reward for creating greater abundance, not artificial scarcity: Customers were happy to finally have the choice of paying high prices to get a smartphone that actually works. The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decadeslong operating system dominance.

…If the tendency of monopoly businesses was to hold back progress, they would be dangerous, and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and finance the ambitious research projects that firms locked in competition can’t dream of.

Geez, Thiel.  You know who you sound like?  Justice Scalia. Here’s how he once explained your idea (to shrieks and howls from many in the antitrust establishment!):

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices–at least for a short period–is what attracts “business acumen” in the first place. It induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Sounds like you and Scalia are calling for us antitrusters to update our models.  Is that it?

So why are economists obsessed with competition as an ideal state? It is a relic of history. Economists copied their mathematics from the work of 19th-century physicists: They see individuals and businesses as interchangeable atoms, not as unique creators. Their theories describe an equilibrium state of perfect competition because that is what’s easy to model, not because it represents the best of business.

C’mon now, Thiel. Surely you don’t expect us antitrusters to defer to you over all these learned economists when it comes to business.

Recently I highlighted problems with the FTC’s enforcement actions targeting companies’ data security protection policies, and recommended that the FTC adopt a cost-benefit approach to regulation in this area.  Yesterday the Heritage Foundation released a more detailed paper by me on this topic, replete with recommendations for new FTC guidance and specific reforms aimed at maintaining appropriate FTC oversight while reducing excessive burdens.  Happy reading!

A century ago Congress enacted the Clayton Act, which prohibits acquisitions that may substantially lessen competition. For years, the antitrust enforcement agencies looked at only one side of the ledger – the potential for price increases. The agencies didn’t take into account potential efficiencies – cost savings, better products and services, and innovation. One of the major reforms of the Clinton Administration was to fully incorporate efficiencies into merger analysis, helping to develop sound enforcement standards for the 21st century.

But the current approach of the Federal Trade Commission (“FTC”), especially in hospital mergers, appears to be taking a major step backwards by failing to fully consider efficiencies and arguing for legal thresholds inconsistent with sound competition policy. The FTC’s approach used primarily in hospital mergers seems uniquely misguided since there is a tremendous need for smart hospital consolidation to help bend the cost curve and improve healthcare delivery.

The FTC’s backwards analysis of efficiencies is illustrated by two recent hospital-physician alliances.

As I discussed in my last post, no one would doubt the need for greater integration between hospitals and physicians – the debate during the enactment of the Affordable Care Act (“ACA”) detailed how the current siloed approach to healthcare is the worst of all worlds, leading to escalating costs and inferior care. In FTC v. St. Luke’s Health System, Ltd., the FTC challenged Boise-based St. Luke’s acquisition of a physician practice in neighboring Nampa, Idaho.

In the case, St. Luke’s presented a compelling case for efficiencies.

As the St. Luke’s court noted, one of the leading factors in rising healthcare costs is the use of the ineffective fee-for-service system. In their attempt to control costs and abandon fee-for-service payment, the merging parties effectively demonstrated to the court that the combined entity would offer a high level of coordinated and patient-centered care. Along with integrating electronic records and increasing access for under-privileged patients, the merged entity could also successfully manage population health and offer risk-based payment initiatives to all employed physicians. Indeed, the transaction, consummated several months ago, has already shown significant cost savings and consumer benefits, especially for underserved patients. The court recognized

[t]he Acquisition was intended by St. Luke’s and Saltzer primarily to improve patient outcomes. The Court believes that it would have that effect if left intact.

(Appellants’ Reply Brief at 22, FTC v. St. Luke’s Health Sys., No. 14-35173 (9th Cir. Sept. 2, 2014).)

But the court gave no weight to the efficiencies primarily because the FTC set forward the wrong legal roadmap.

Under the FTC’s current roadmap for efficiencies, the FTC may prove antitrust harm via prediction and presumption, while defendants are required to decisively prove countervailing procompetitive efficiencies. Such asymmetric burdens of proof greatly favor the FTC and eliminate a court’s ability to properly weigh the procompetitive nature of efficiencies against the supposed antitrust harm.

Moreover, the FTC basically claims that any efficiencies can only be considered “merger-specific” if the parties are able to demonstrate there are no less anticompetitive means to achieve them. It is not enough that they result directly from the merger.

In the case of St. Luke’s, the court determined the defendants’ efficiencies would “improve the quality of medical care” in Nampa, Idaho, but were not merger-specific. The court relied on the FTC’s experts to find that efficiencies such as “elimination of fee-for-service reimbursement” and the movement “to risk-based reimbursement” were not merger-specific, because other entities had potentially achieved similar efficiencies within different provider “structures.” The FTC and its experts did not indicate the success of these other models, nor did they dispute that St. Luke’s would achieve its stated efficiencies. Instead, the mere possibility of potential, alternative structures was enough to overcome merger efficiencies intended to “move the focus of health care back to the patient.” (The case is currently on appeal; hopefully the Ninth Circuit will correct the lower court’s error.)

In contrast to the St. Luke’s case stands the recent FTC advisory letter to the Norman Physician Hospital Organization (“Norman PHO”). The Norman PHO proposed a competitive collaboration integrating care between the Norman Physician Association’s 280 physicians and Norman Regional Health System, the largest health system in Norman, Oklahoma. In its analysis of the Norman PHO, the FTC found that the groups could not “quantify… the likely overall efficiency benefits of its proposed program” nor “provide direct evidence of actual efficiencies or competitive effects.” Furthermore, the arrangement had the potential to “exercise market power.” Nonetheless, the FTC permitted the collaboration; its approval rested instead on Norman PHO’s non-exclusive physician contracting provisions.

It seems difficult if not impossible to reconcile the FTC’s approaches in Boise and Norman. In Norman the FTC relied on only theoretical efficiencies to permit an alliance with significant market power. The FTC was more than willing to accept Norman PHO’s “potential to… generate significant efficiencies.” No such even-handed treatment of efficiencies was applied in analyzing the St. Luke’s merger.

The starting point for understanding the FTC’s misguided analysis of efficiencies in St. Luke’s and other merger cases stems from the 2010 Horizontal Merger Guidelines (“Guidelines”).

A recent dissent by FTC Commissioner Joshua Wright outlines the problem: asymmetric burdens are placed on the plaintiff and the defendant. Under the Guidelines, the FTC’s merger analysis

embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.

Relying on the structural presumption established in United States v. Philadelphia Nat’l Bank, the FTC need only show that a merger will substantially lessen competition – typically through a showing of undue concentration in a relevant market, not actual anticompetitive effects. If this low burden is met, the burden shifts to the defendants to rebut the presumption of competitive harm.
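The “undue concentration” showing typically runs through the Herfindahl-Hirschman Index. As a rough sketch of how the screen works (the market shares below are hypothetical), the 2010 Guidelines treat a post-merger HHI above 2,500 combined with an increase of more than 200 points as triggering the presumption:

```python
# Sketch of the concentration screen behind the structural presumption.
# Thresholds are from the 2010 Horizontal Merger Guidelines; the market
# shares are hypothetical numbers for illustration only.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s * s for s in shares)

# Hypothetical market of five firms; the first two propose to merge.
shares = [30.0, 25.0, 20.0, 15.0, 10.0]
pre = hhi(shares)                                  # 2250: moderately concentrated
post = hhi([shares[0] + shares[1]] + shares[2:])   # 3750: highly concentrated
delta = post - pre                                 # 1500 (equivalently 2 * 30 * 25)

# Guidelines presumption: post-merger HHI > 2500 and an increase > 200 points.
presumed_unlawful = post > 2500 and delta > 200
print(pre, post, delta, presumed_unlawful)         # 2250.0 3750.0 1500.0 True
```

Once that arithmetic is satisfied, the government’s prima facie case is made – which is exactly why the asymmetric treatment of efficiencies on the rebuttal side matters so much.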

As part of their defense, defendants must then prove that any proposed efficiencies are cognizable, meaning “merger-specific,” and have been “verified and do not arise from anticompetitive reductions in output or service.” Furthermore, merging parties must demonstrate “by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved…, how each would enhance the merged firm’s ability and incentive to compete, and why each would be merger-specific.”

As stated in a recent speech by FTC Commissioner Joshua Wright,

the critical lesson of the modern economic approach to mergers is that post-merger changes in pricing incentives and competitive effects are what matter.

The FTC’s merger policy “has long been dominated by a focus on only one side of the ledger—anticompetitive effects.” In other words, the defendants must demonstrate efficiencies with certainty, while the government can condemn a merger based on a prediction. This asymmetric enforcement policy favors the FTC while requiring that defendants meet stringent, unyielding standards.

As the ICLE amicus brief in St. Luke’s discusses, not satisfied with this asymmetric advantage, the plaintiffs in St. Luke’s attempt to “gild the lily” by claiming that efficiencies can only be considered in cases where there is a presumption of competitive harm, perhaps based solely on “first order” evidence, such as increased market shares. Of course, nothing in the law, the Guidelines, or sound competition policy limits the defense in that fashion.

The court should consider efficiencies regardless of the level of economic harm. The question is whether the efficiencies will outweigh that harm. As Geoff recently pointed out:

There is no economic basis for demanding more proof of claimed efficiencies than of claimed anticompetitive harms. And the Guidelines since 1997 were (ostensibly) drafted in part precisely to ensure that efficiencies were appropriately considered by the agencies (and the courts) in their enforcement decisions.

With presumptions that strongly benefit the FTC, it is clear that efficiencies are often overlooked or ignored. From 1997 to 2007, the FTC’s Bureau of Competition staff deliberated on a total of 342 efficiency claims. Of those 342 claims, only 29 were accepted by FTC staff, while 109 were rejected and 204 received “no decision.” The most common concerns among FTC staff were that stated efficiencies were not verifiable or were not merger-specific.

Both “concerns” come directly from the Guidelines, which require that merging parties provide significant – and oftentimes impossible – foresight and information to overcome evidentiary burdens. As former FTC Chairman Tim Muris observed,

too often, the [FTC] found no cognizable efficiencies when anticompetitive effects were determined to be likely and seemed to recognize efficiency only when no adverse effects were predicted.

Thus, in situations in which the FTC believes the dominant issue is market concentration, defendants’ attempts to demonstrate procompetitive justifications are dismissed outright.

The FTC’s efficiency arguments are also not grounded in legal precedent. Courts have recognized that asymmetric burdens are inconsistent with the intent of the Act. As then D.C. Circuit Judge Clarence Thomas observed,

[i]mposing a heavy burden of production on a defendant would be particularly anomalous where … it is easy to establish a prima facie case.

Courts have recognized that efficiencies can be “speculative” or be “based on a prediction backed by sound business judgment.” And in Sherman Act cases the law places the burden on the plaintiff to demonstrate that there are less restrictive alternatives to a potentially illegal restraint – unlike the requirement applied by the FTC that the defendant prove there are no less restrictive alternatives to a merger to achieve efficiencies.

The FTC and the courts should credit efficiencies where there is a reasonable likelihood that procompetitive effects will materialize post-merger. Furthermore, the courts should not look at efficiencies in a vacuum. In healthcare, policies and laws such as the ACA must be taken into account. The ACA promotes coordination among providers and incentivizes entities that can move away from fee-for-service payment. In the past, courts considering the role of health policy in merger analysis have found that efficiencies leading to integrated medicine and “better medical care” are relevant.

In St. Luke’s the court observed that “the existing law seemed to hinder innovation and resist creative solutions” and that “flexibility and experimentation” are “two virtues that are not emphasized in the antitrust law.” Undoubtedly, the current approach to efficiencies makes it nearly impossible for providers to demonstrate them.

As Commissioner Wright has observed, these asymmetric evidentiary burdens

do not make economic sense and are inconsistent with a merger policy designed to promote consumer welfare.

In the context of St. Luke’s and other healthcare provider mergers, appropriate efficiency analysis is a keystone of determining a merger’s total effects. Dismissing efficiencies on the basis of a rigid, incorrect procedural structure is not aligned with current economic thinking or with a sound approach to incorporating competition analysis into the drive for healthcare reform. It is time for the FTC to set efficiency analysis in the right direction.

The free market position on telecom reform has become rather confused of late. Erstwhile conservative Senator Thune is now cosponsoring a version of Senator Rockefeller’s previously proposed video reform bill, bundled into satellite legislation (the Satellite Television Access and Viewer Rights Act or “STAVRA”) that would also include a provision dubbed “Local Choice.” Some free marketeers have defended the bill as a step in the right direction.

Although it looks as if the proposal may be losing steam this Congress, the legislation has been described as a “big and bold idea,” and it’s by no means off the menu. But it should be.

It has been said that politics makes for strange bedfellows. Indeed, people who disagree on just about everything can sometimes unite around a common perceived enemy. Take carriage disputes, for instance. Perhaps because, for some people, a day without The Bachelor is simply a day lost, an unlikely alliance of pro-regulation activists like Public Knowledge and industry stalwarts like Dish has emerged to oppose the ability of copyright holders to withhold content as part of carriage negotiations.

Senator Rockefeller’s Online Video Bill was the catalyst for the Local Choice amendments to STAVRA. Rockefeller’s bill did, well, a lot of terrible things, from imposing certain net neutrality requirements, to overturning the Supreme Court’s Aereo decision, to adding even more complications to the already Byzantine morass of video programming regulations.

But putting Senator Thune’s lipstick on Rockefeller’s pig can’t save the bill, and some of the worst problems from Senator Rockefeller’s original proposal remain.

Among other things, the new bill is designed to weaken the ability of copyright owners to negotiate with distributors, most notably by taking away their ability to withhold content during carriage disputes and by forcing TV stations to sell content on an a la carte basis.

Video distribution issues are complicated — at least under current law. But at root these are just commercial contracts and, like any contracts, they rely on a couple of fundamental principles.

First is the basic property right. The Supreme Court (at least somewhat) settled this for now (in Aereo), by protecting the right of copyright holders to be compensated for carriage of their content. With this baseline, distributors must engage in negotiations to obtain content, rather than employing technological workarounds and exploiting legal loopholes.

Second is the related ability of contracts to govern the terms of trade. A property right isn’t worth much if its owner can’t control how it is used, governed or exchanged.

Finally, and derived from these, is the issue of bargaining power. Good-faith negotiations require both sides not to act strategically by intentionally causing negotiations to break down. But if negotiations do break down, parties need to be able to protect their rights. When content owners are not able to withhold content in carriage disputes, they are put in an untenable bargaining position. This invites bad faith negotiations by distributors.

The STAVRA/Local Choice proposal would undermine the property rights and freedom of contract that bring The Bachelor to your TV, and the proposed bill does real damage by curtailing the scope of the property right in TV programming and restricting the range of contracts available for networks to license their content.

The bill would require that essentially all broadcast stations that elect retrans make their content available a la carte – thus unbundling some of the proverbial sticks that make up the traditional property right. It would also establish MVPD pass-through of each local affiliate: subscribers would pay a fee determined by the affiliate, and the station would have to be offered on an unbundled basis, with no minimum tier required. In other words, unless a station elects must-carry, an MVPD would have to offer it to customers with no markup, on an a la carte basis. The bill would also direct the FCC to open a rulemaking to determine whether broadcasters should be prohibited from withholding their content online during a dispute with an MVPD.

“Free market” supporters of the bill assert something like “if we don’t do this to stop blackouts, we won’t be able to stem the tide of regulation of broadcasters.” Presumably this would end blackouts of broadcast programming: If you’re an MVPD subscriber, and you pay the $1.40 (or whatever) for CBS, you get it, period. The broadcaster sets an annual per-subscriber rate; MVPDs pass it on and retransmit only to subscribers who opt in.

But none of this is good for consumers.

When transaction costs are positive, negotiations sometimes break down. If the original right is placed in the wrong hands, then contracting may not assure the most efficient outcome. I think it was Coase who said that.

But taking away the ability of content owners to restrict access to their content during a bargaining dispute effectively places the right to content in the hands of distributors. Obviously, this change in bargaining position will depress the value of content. Placing the rights in the hands of distributors reduces the incentive to create content in the first place; this is why the law protects copyright to begin with. But it also reduces the ability of content owners and distributors to reach innovative agreements and contractual arrangements (like certain promotional deals) that benefit consumers, distributors and content owners alike.

The mandating of a la carte licensing doesn’t benefit consumers, either. Bundling is generally pro-competitive and actually gives consumers more content than they would otherwise have. The bill’s proposal to force programmers to sell content to consumers a la carte may actually lead to higher overall prices for less content. Not much of a bargain.

There are plenty of other ways this is bad for consumers, even if it narrowly “protects” them from blackouts. For example, the bill would prohibit a network from making a deal with an MVPD that provides a discount on a bundle including carriage of both its owned broadcast stations as well as the network’s affiliated cable programming. This is not a worthwhile — or free market — trade-off; it is an ill-advised and economically indefensible attack on vertical distribution arrangements — exactly the same thing that animates many net neutrality defenders.

Just as net neutrality’s meddling in commercial arrangements between ISPs and edge providers will ensure a host of unintended consequences, so will the Rockefeller/Thune bill foreclose a host of welfare-increasing deals. In the end, in exchange for never having to go three days without CBS content, the bill will make that content more expensive, limit the range of programming offered, and lock video distribution into a prescribed business model.

Former FCC Commissioner Rob McDowell sees the same hypocritical connection between net neutrality and broadcast regulation such as the Local Choice bill:

According to comments filed with the FCC by Time Warner Cable and the National Cable and Telecommunications Association, broadcasters should not be allowed to take down or withhold the content they produce and own from online distribution even if subscribers have not paid for it—as a matter of federal law. In other words, edge providers should be forced to stream their online content no matter what. Such an overreach, of course, would lay waste to the economics of the Internet. It would also violate the First Amendment’s prohibition against state-mandated, or forced, speech—the flip side of censorship.

It is possible that the cable companies figure that subjecting powerful broadcasters to anti-free speech rules will shift the political momentum in the FCC and among the public away from net neutrality. But cable’s anti-free speech arguments play right into the hands of the net-neutrality crowd. They want to place the entire Internet ecosystem, physical networks, content and apps, in the hands of federal bureaucrats.

While cable providers have generally opposed net neutrality regulation, there is, apparently, some support among them for regulations that would apply to the edge. The Rockefeller/Thune proposal is just a replay of this constraint — this time by forcing programmers to allow retransmission of broadcast content under terms set by Congress. While “what’s good for the goose is good for the gander” sounds appealing in theory, here it is simply doubling down on a terrible idea.

What it reveals most of all is that true neutrality advocates don’t want government control to be limited to ISPs — rather, progressives like Rockefeller (and apparently some conservatives, like Thune) want to subject the whole apparatus — distribution and content alike — to intrusive government oversight in order to “protect” consumers (a point Fred Campbell deftly expands upon here and here).

You can be sure that, if the GOP supports broadcast a la carte, it will pave the way for Democrats (and moderates like McCain who back a la carte) to expand anti-consumer unbundling requirements to cable next. Nearly every economic analysis has concluded that mandated a la carte pricing of cable programming would be harmful to consumers. There is no reason to think that applying it to broadcast channels would be any different.

What’s more, the logical extension of the bill is to apply unbundling to all MVPD channels and to saddle them with contract restraints, as well — and while we’re at it, why not unbundle House of Cards from Orange is the New Black? The Rockefeller bill may have started in part as an effort to “protect” OVDs, but there’ll be no limiting this camel once its nose is under the tent. Like it or not, channel unbundling is arbitrary — why not unbundle by program, episode, studio, production company, etc.?

There is simply no principled basis for the restraints in this bill, and thus there will be no limit to its reach. Indeed, “free market” defenders of the Rockefeller/Thune approach may well be supporting a bill that ultimately leads to something like compulsory, a la carte licensing of all video programming. As I noted in my testimony last year before the House Commerce Committee on the satellite video bill:

Unless we are prepared to bear the consumer harm from reduced variety, weakened competition and possibly even higher prices (and absolutely higher prices for some content), there is no economic justification for interfering in these business decisions.

So much for property rights — and so much for vibrant video programming.

That there is something wrong with the current system is evident to anyone who looks at it. As Gus Hurwitz noted in recent testimony on Rockefeller’s original bill,

The problems with the existing regulatory regime cannot be understated. It involves multiple statutes implemented by multiple agencies to govern technologies developed in the 60s, 70s, and 80s, according to policy goals from the 50s, 60s, and 70s. We are no longer living in a world where the Rube Goldberg of compulsory licenses, must carry and retransmission consent, financial interest and syndication exclusivity rules, and the panoply of Federal, state, and local regulations makes sense – yet these are the rules that govern the video industry.

While video regulation is in need of reform, this bill is not an improvement. In the short run it may ameliorate some carriage disputes, but it will do so at the expense of continued programming vibrancy and distribution innovations. The better way to effect change would be to abolish the Byzantine regulations that simultaneously attempt to place thumbs on both sides of the scale, and to rely on free market negotiations with a copyright baseline and antitrust review for actual abuses.

But STAVRA/Local Choice is about as far from that as you can get.

The Wall Street Journal dropped an FCC bombshell last week, although I’m not sure anyone noticed. In an article ostensibly about the possible role that MFNs might play in the Comcast/Time Warner Cable merger, the Journal noted that

The FCC is encouraging big media companies to offer feedback confidentially on Comcast’s $45-billion offer for Time Warner Cable.

Not only is the FCC holding secret meetings, but it is encouraging Comcast’s and TWC’s commercial rivals to hold confidential meetings and to submit information under seal. This is not a normal part of ex parte proceedings at the FCC.

In the typical proceeding of this sort – known as a “permit-but-disclose proceeding” – ex parte communications are subject to a host of disclosure requirements delineated in 47 CFR 1.1206. But section 1.1200(a) of the Commission’s rules permits the FCC, in its discretion, to modify the applicable procedures if the public interest so requires.

If you dig deeply into the Public Notice seeking comments on the merger, you find a single sentence stating that

Requests for exemptions from the disclosure requirements pursuant to section 1.1204(a)(9) may be made to Jonathan Sallet [the FCC's General Counsel] or Hillary Burchuk [who heads the transaction review team].

Similar language appears in the AT&T/DirecTV transaction Public Notice.

This leads to the cited rule exempting certain ex parte presentations from the usual disclosure requirements in such proceedings, including the referenced one that exempts ex partes from disclosure when

The presentation is made pursuant to an express or implied promise of confidentiality to protect an individual from the possibility of reprisal, or there is a reasonable expectation that disclosure would endanger the life or physical safety of an individual

So the FCC is inviting “media companies” to offer confidential feedback and to hold secret meetings that the FCC will hold confidential because of “the possibility of reprisal” based on language intended to protect individuals.

Such deviations from the standard permit-but-disclose procedures are extremely rare. As in non-existent. I guess there might be other examples, but I was unable to find a single one in a quick search. And I’m willing to bet that the language inviting confidential communications in the PN hasn’t appeared before – and certainly not in a transaction review.

It is worth pointing out that the language in 1.1204(a)(9) is remarkably similar to language that appears in the Freedom of Information Act. As the DOJ notes regarding that exemption:

Exemption 7(D) provides protection for “records or information compiled for law enforcement purposes [which] could reasonably be expected to disclose the identity of a confidential source… to ensure that “confidential sources are not lost through retaliation against the sources for past disclosure or because of the sources’ fear of future disclosure.”

Surely the fear-of-reprisal rationale for confidentiality makes sense in that context – but here? And invoked to elicit secret meetings and to keep the information of corporations, rather than individuals, confidential, it makes even less sense (and doesn’t even obviously comply with the rule itself). It is not as though – as far as I know – someone approached the Commission with stated fears and requested it implement a procedure for confidentiality in these particular reviews.

Rather, this is the Commission inviting non-transparent process in the midst of a heated, politicized and heavily-scrutinized transaction review.

The optics are astoundingly bad.

Unfortunately, this kind of behavior seems to be par for the course for the current FCC. As Commissioner Pai has noted on more than one occasion, the minority commissioners have been routinely kept in the dark with respect to important matters at the Commission – not coincidentally, in other highly-politicized proceedings.

What’s particularly troubling is that, for all its faults, the FCC’s process is typically extremely open and transparent. Public comments, endless ex parte meetings, regular Open Commission Meetings are all the norm. And this is as it should be. Particularly when it comes to transactions and other regulated conduct for which the regulated entity bears the burden of proving that its behavior does not offend the public interest, it is obviously necessary to have all of the information – to know what might concern the Commission and to make a case respecting those matters.

The kind of arrogance on display of late, and the seeming abuse of process that goes along with it, hearkens back to the heady days of Kevin Martin’s tenure as FCC Chairman – a tenure described as “dysfunctional” and noted for its abuse of process.

All of which should stand as a warning to the vocal, pro-regulatory minority pushing for the FCC to proclaim enormous power to regulate net neutrality – and broadband generally – under Title II. Just as Chairman Martin tried to manipulate diversity rules to accomplish his pet project of cable channel unbundling, some future Chairman will undoubtedly claim authority under Title II to accomplish some other unintended, but politically expedient, objective — and it may not be one the self-proclaimed consumer advocates like, when it happens.

Bad as that risk may be, it is only made more likely by regulatory reviews undertaken in secret. Whatever impelled the Chairman to invite unprecedented secrecy into these transaction reviews, it seems to be of a piece with a deepening politicization and abuse of process at the Commission. It’s both shameful – and deeply worrying.

[Cross posted at the CPIP Blog.]

By Mark Schultz & Adam Mossoff

A handful of increasingly noisy critics of intellectual property (IP) have emerged within free market organizations. Both the emergence and the vehemence of this group have surprised most observers, since free market advocates generally support property rights. It’s true that there has long been a strain of IP skepticism among some libertarian intellectuals. However, the surprised observer would be correct to think that the latest critique is something new. In our experience, most free market advocates see the benefit and importance of protecting the property rights of all who perform productive labor – whether the results are tangible or intangible.

How do the claims of this emerging critique stand up? We have had occasion to examine the arguments of free market IP skeptics before. (For example, see here, here, here.) So far, we have largely found their claims wanting.

We have yet another occasion to examine their arguments, and once again we are underwhelmed and disappointed. We recently posted an essay at AEI’s Tech Policy Daily prompted by an odd report recently released by the Mercatus Center, a free-market think tank. The Mercatus report attacks recent research that supposedly asserts, in the words of the authors of the Mercatus report, that “the existence of intellectual property in an industry creates the jobs in that industry.” They contend that this research “provide[s] no theoretical or empirical evidence to support” its claims of the importance of intellectual property to the U.S. economy.

Our AEI essay responds to these claims by explaining how these IP skeptics both mischaracterize the studies that they are attacking and fail to acknowledge the actual historical and economic evidence on the connections between IP, innovation, and economic prosperity. We recommend that anyone who may be confused by the assertions of any IP skeptics waving the banner of property rights and the free market read our essay at AEI, as well as our previous essays in which we have called out similarly odd statements from Mercatus about IP rights.

The Mercatus report, though, exemplifies many of the concerns we raise about these IP skeptics, and so it deserves to be considered at greater length.

For instance, something we touched on briefly in our AEI essay is the fact that the authors of this Mercatus report offer no empirical evidence of their own within their lengthy critique of several empirical studies, and at best they invoke thin theoretical support for their contentions.

This is odd if only because they are critiquing several empirical studies that develop careful, balanced and rigorous models for testing one of the biggest economic questions in innovation policy: What is the relationship between intellectual property and jobs and economic growth?

Apparently, the authors of the Mercatus report presume that the burden of proof is entirely on the proponents of IP, and that a bit of hand waving using abstract economic concepts and generalized theory is enough to defeat arguments supported by empirical data and plausible methodology.

This move raises a foundational question that frames all debates about IP rights today: On whom should the burden rest? On those who claim that IP has beneficial economic effects? Or on those who claim otherwise, such as the authors of the Mercatus report?

The burden of proof here is an important issue. Too often, recent debates about IP rights have started from an assumption that the entire burden of proof rests on those investigating or defending IP rights. Quite often, IP skeptics appear to believe that their criticism of IP rights needs little empirical or theoretical validation, beyond talismanic invocations of “monopoly” and anachronistic assertions that the Framers of the US Constitution were utilitarians.

As we detail in our AEI essay, though, the problem with arguments like those made in the Mercatus report is that they contradict history and empirics. For the evidence that supports this claim, including citations to the many studies that are ignored by the IP skeptics at Mercatus and elsewhere, check out the essay.

Despite these historical and economic facts, one may still believe that the US would enjoy even greater prosperity without IP. But IP skeptics who believe in this counterfactual world face a challenge. As a preliminary matter, they ought to acknowledge that they are the ones swimming against the tide of history and prevailing belief. More important, the burden of proof is on them – the IP skeptics – to explain why the U.S. has long prospered under an IP system they find so odious and destructive of property rights and economic progress, while countries that largely eschew IP have languished. This obligation is especially heavy for one who seeks to undermine empirical work such as the USPTO Report and other studies.

In sum, you can’t beat something with nothing. For IP skeptics to contest this evidence, they should offer more than polemical and theoretical broadsides. They ought to stop making faux originalist arguments that misstate basic legal facts about property and IP, and instead offer their own empirical evidence. The Mercatus report, however, is content to confine its empirics to critiques of others’ methodology – including claims their targets did not make.

For example, in addition to the several strawman attacks identified in our AEI essay, the Mercatus report constructs another strawman in its discussion of studies of copyright piracy done by Stephen Siwek for the Institute for Policy Innovation (IPI). Mercatus inaccurately and unfairly implies that Siwek’s studies on the impact of piracy in film and music assumed that every copy pirated was a sale lost – this is known as “the substitution rate problem.” In fact, Siwek’s methodology tackled that exact problem.

IPI and Siwek never seem to get credit for this, but Siwek was careful to avoid the one-to-one substitution rate estimate that Mercatus and others foist on him and then critique as empirically unsound. If one actually reads his report, it is clear that Siwek assumes that bootleg physical copies resulted in a 65.7% substitution rate, while illegal downloads resulted in a 20% substitution rate. Siwek’s methodology anticipates and renders moot the critique that Mercatus makes anyway.
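To make the substitution-rate adjustment concrete, here is a minimal sketch. Only the 65.7% and 20% rates come from Siwek’s report as described above; the piracy volumes in the example are purely hypothetical placeholders:

```python
# Sketch: substitution-rate-adjusted loss estimates.
# The substitution rates (65.7% for bootleg physical copies, 20% for
# illegal downloads) are the ones Siwek used; the copy counts below
# are hypothetical, for illustration only.

SUBSTITUTION_RATES = {"physical_bootleg": 0.657, "illegal_download": 0.20}

def lost_sales(pirated_copies: dict, rates: dict = SUBSTITUTION_RATES) -> float:
    """Estimate displaced sales: each pirated copy counts as a lost sale
    only at its channel's substitution rate, not one-for-one."""
    return sum(count * rates[channel] for channel, count in pirated_copies.items())

# Hypothetical example: 1,000 bootleg discs and 10,000 illegal downloads.
copies = {"physical_bootleg": 1_000, "illegal_download": 10_000}
print(lost_sales(copies))  # 657 + 2,000 = 2,657.0 displaced sales — far below
                           # the 11,000 a naive one-to-one assumption would imply
```

The point of the sketch is simply that an estimate built on channel-specific substitution rates is a fraction of the one-to-one figure Mercatus attributes to Siwek.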

After mischaracterizing these studies and their claims, the Mercatus report goes further in attacking them as supporting advocacy on behalf of IP rights. Yes, the empirical results have been used by think tanks, trade associations and others to support advocacy on behalf of IP rights. But does that advocacy make the questions asked and resulting research invalid? IP skeptics would have trumpeted results showing that IP-intensive industries had a minimal economic impact, just as Mercatus policy analysts have done with alleged empirical claims about IP in other contexts. In fact, IP skeptics at free-market institutions repeatedly invoke studies in policy advocacy that allegedly show harm from patent litigation, despite these studies suffering from far worse problems than anything alleged in their critiques of the USPTO and other studies.

Finally, we noted in our AEI essay how it was odd to hear a well-known libertarian think tank like Mercatus advocate for more government-funded programs, such as direct grants or prizes, as viable alternatives to individual property rights secured to inventors and creators. There is even more economic work being done beyond the empirical studies we cited in our AEI essay on the critical role that property rights in innovation serve in a flourishing free market, as well as work on the economic benefits of IP rights over other governmental programs like prizes.

Today, we are in the midst of a full-blown moral panic about the alleged evils of IP. It’s alarming that libertarians – the very people who should be defending all property rights – have jumped on this populist bandwagon. Imagine if free market advocates at the turn of the Twentieth Century had asserted that there was no evidence that property rights had contributed to the Industrial Revolution. Imagine them joining in common cause with the populist Progressives to suppress the enforcement of private rights and the enjoyment of economic liberty. It’s a bizarre image, but we are seeing its modern-day equivalent, as these libertarians join the chorus of voices arguing against property and private ordering in markets for innovation and creativity.

It’s also disconcerting that Mercatus appears to abandon its exceptionally high standards for scholarly work-product when it comes to IP rights. Its economic analyses and policy briefs on such subjects as telecommunications regulation, financial and healthcare markets, and the regulatory state have rightly made Mercatus a respected free-market institution. It’s unfortunate that it has lent this justly earned prestige and legitimacy to stale and derivative arguments against property and private ordering in the innovation and creative industries. It’s time to embrace the sound evidence and back off the rhetoric.

There is a consensus in America that we need to control health care costs and improve the delivery of health care. After a long debate on health care reform and careful scrutiny of health care markets, there seems to be agreement that the unintegrated, “siloed approach” to health care is inefficient, costly, and contrary to the goal of improving care. But some antitrust enforcers — most notably the FTC — are standing in the way.

Enlightened health care providers are responding to this consensus by entering into transactions that will lead to greater clinical and financial integration, facilitating a movement from volume-based to value-based delivery of care. And many aspects of the Affordable Care Act encourage this path to integration. Yet when the market seeks to address these critical concerns about our health care system, the FTC and some state Attorneys General take positions diametrically opposed to sound national health care policy as adopted by Congress and implemented by the Department of Health and Human Services.

To be sure, not all state antitrust enforcers stand in the way of health care reform. For example, many states, including New York, Pennsylvania and Massachusetts, seem to be willing to permit hospital mergers even in concentrated markets with an agreement for continued regulation. At the same time, however, the FTC has been aggressively challenging integration, taking the stance that hospital mergers will raise prices by giving those hospitals greater leverage in negotiations.

The distance between HHS and the FTC in DC is about 6 blocks, but in healthcare policy they seem to be miles apart.

The FTC’s skepticism about integration is an old story. As I have discussed previously, during the last decade the agency challenged more than 30 physician collaborations even though those cases lacked any evidence that the collaborations led to higher prices. And, when physicians asked for advice on collaborations, it took the Commission on average more than 436 days to respond to those requests (about as long as it took Congress to debate and enact the Affordable Care Act).

The FTC is on a recent winning streak in challenging hospital mergers. But those were primarily simple cases with direct competition between hospitals in the same market with very high levels of concentration. The courts did not struggle long in these cases, because the competitive harm appeared straightforward.

Far more controversial is when a hospital acquires a physician practice. This type of vertical integration seems precisely what the advocates for health care reform are crying out for. The lack of integration between physicians and hospitals is at the core of the problems in health care delivery. And antitrust law is generally permissive of these types of vertical mergers. There has not been a vertical merger successfully challenged in the courts since 1980 – the days of reruns of the TV show Dr. Kildare. And even the supposedly pro-enforcement Obama Administration has not gone to court to challenge a vertical merger, and the Obama FTC has not even secured a merger consent under a vertical theory.

The case in which the FTC has decided to “bet the house” is its challenge to St. Luke’s Health System’s acquisition of Saltzer Medical Group in Nampa, Idaho.

St. Luke’s operates the largest hospital in Boise, and Saltzer is the largest physician practice in Nampa, roughly 20 miles away. But rather than recognizing that this was a vertical affiliation designed to integrate care and to promote a transition to a system in which the provider takes the risk of overutilization, the FTC characterized the transaction as purely horizontal – no different from the merger of two hospitals. In that manner, the FTC was able to paint a picture of concentration levels designed to assure victory.

But back to the reasons why integration is essential. It is undisputed that provider integration is the key to improving American health care. Americans pay substantially more than any other industrialized nation for health care services, 17.2 percent of gross domestic product. Furthermore, these higher costs are not associated with better overall care or greater access for patients. As noted during the debate on the Affordable Care Act, the American health care system’s higher costs and lower quality and access are mostly associated with the usage of a fee-for-service system that pays for each individual medical service, and the “siloed approach” to medicine in which providers work autonomously and do not coordinate to improve patient outcomes.

In order to lower health care costs and improve care, many providers have sought to transform health care into a value-based, patient-centered approach. To institute such a health care initiative, medical staff, physicians, and hospitals must clinically integrate and align their financial incentives. Integrated providers share financial risk, share electronic records and data, and implement quality measures in order to provide the best patient care.

The most effective means of ensuring full-scale integration is through a tight affiliation, most often achieved through a merger. Unlike contractual arrangements that are costly, time-consuming, and complicated by an outdated health care regulatory structure, integrated affiliations ensure that entities can effectively combine and promote structural change throughout the newly formed organization.

For nearly five weeks of trial in Boise, St. Luke’s and the FTC fought these conflicting visions of integration and health care policy. Ultimately, the court decided that the supposed Nampa primary care physician market posited by the FTC would become far more concentrated, and that the merger would substantially lessen competition for “Adult Primary Care Services” by raising prices in Nampa. As such, the district court ordered an immediate divestiture.

Rarely, however, has an antitrust court expressed such anguish at its decision. The district court readily “applauded [St. Luke’s] for its efforts to improve the delivery of healthcare.” It acknowledged the positive impact the merger would have on health care within the region. The court further noted that Saltzer had attempted to coordinate with other providers via loose affiliations but had failed to reap any benefits. Due to Saltzer’s lack of integration, Saltzer physicians had limited “the number of Medicaid or uninsured patients they could accept.”

According to the district court, the combination of St. Luke’s and Saltzer would “improve the quality of medical care.” Along with utilizing the same electronic medical records system and giving the Saltzer physicians access to sophisticated quality metrics designed to improve their practices, the parties would improve care by abandoning fee-for-service payment for all employed physicians and instituting population health management, reimbursing the physicians via risk-based payment initiatives.

As noted by the district court, these stated efficiencies would improve patient outcomes “if left intact.” Along with improving coordination and quality of care, the merger, as noted in an amicus brief submitted by the International Center for Law & Economics and the Medicaid Defense Fund to the Ninth Circuit, has already expanded access for Medicaid and uninsured patients by ensuring that previously constrained Saltzer physicians can offer services to the most needy.

The court ultimately was not persuaded by the demonstrated procompetitive benefits. Instead, the district court relied on the FTC’s misguided arguments and determined that the stated efficiencies were not “merger-specific,” because such efficiencies could potentially be achieved via other organizational structures. The district court did not analyze the potential success of substitute structures in achieving the stated efficiencies; instead, it relied on the mere existence of alternative provider structures. As a result, as ICLE and the Medicaid Defense Fund point out:

By placing the ultimate burden of proving efficiencies on the Appellants and applying a narrow, impractical view of merger specificity, the court has wrongfully denied application of known procompetitive efficiencies. In fact, under the court’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to oppose untested, theoretical less restrictive structural alternatives.

Notably, the district court’s divestiture order has been stayed by the Ninth Circuit. The appeal on the merits is expected to be heard some time this autumn. Along with reviewing the relevant geographic market and usage of divestiture as a remedy, the Ninth Circuit will also analyze the lower court’s analysis of the merger’s procompetitive efficiencies. For now, the stay order is a limited victory for underserved patients and the merging defendants. While such a ruling is not determinative of the Ninth Circuit’s decision on the merits, it does demonstrate that the merging parties have at least a reasonable possibility of success.

As one might imagine, the Ninth Circuit decision is of great importance to the antitrust and health care reform community. If the district court’s ruling is upheld, it could deter health care providers from further integrating via mergers, a precedent antithetical to the very goals of health care reform. However, if the Ninth Circuit finds the merger does not substantially lessen competition, then procompetitive vertical integration is less likely to be derailed by misapplication of the antitrust laws. The importance and impact of such a decision on American patients cannot be overstated.

The U.S. Federal Trade Commission (FTC) continues to expand its presence in online data regulation.  On August 13 the FTC announced a forthcoming workshop to explore appropriate policies toward “big data,” a term used to refer to advancing technologies that are dramatically expanding the commercial collection, analysis, use, and storage of data.  This initiative follows on the heels of the FTC’s May 2014 data broker report, which recommended that Congress impose a variety of requirements on companies that legally collect and sell consumers’ personal information.  (Among other requirements, companies would be required to create consumer data “portals” and implement business procedures that allow consumers to edit and suppress use of their data.)  The FTC also is calling for legislation that would enhance its authority over data security standards and empower it to issue rules requiring companies to inform consumers of security breaches.

These recent regulatory initiatives are in addition to the Commission’s active consumer data enforcement efforts.  Some of these efforts are pursuant to three targeted statutory authorizations – the FTC’s Safeguards Rule (promulgated pursuant to the Gramm-Leach-Bliley Act and directed at non-bank financial institutions), the Fair Credit Reporting Act (directed at consumer reporting agencies), and the Children’s Online Privacy Protection Act (directed at children’s information collected online).

The bulk of the FTC’s enforcement efforts, however, stem from its general authority to proscribe unfair or deceptive practices under Section 5(a)(1) of the FTC Act.  Since 2002, pursuant to its Section 5 powers, the FTC has filed and settled over 50 cases alleging that private companies used deceptive or ineffective (and thus unfair) practices in storing their data.  (Twitter, LexisNexis, ChoicePoint, GMR Transcription Services, GeneLink, Inc., and mobile device provider HTC are just a few of the firms that have agreed to settle.)  Settlements have involved consent decrees under which the company in question agreed to take a wide variety of “corrective measures” to avoid future harm.

As a matter of first principles, one may question the desirability of FTC data security investigations under Section 5.  Firms have every incentive to avoid data protection breaches that harm their customers, in order to avoid the damage to reputation and business value that stems from such lapses.  At the same time, firms must weigh the costs of alternative data protection systems in determining what the appropriate degree of protection should be.  Economic logic indicates that the optimal business policy is not one that focuses solely on implementing the strongest data protection program without regard to cost.  Rather, the optimal policy is to invest in enhancing corporate data security up to the point where the marginal benefits of additional security equal the marginal costs, and no further.  Although individual businesses can only roughly approximate this outcome, one may expect that market forces will tend toward the optimal result, as firms that underinvest in data security lose customers and firms that overinvest in security find themselves priced out of the market.  There is no obvious “market failure” that suggests the market should not work adequately in the data security area.  Indeed, there is a large (and growing) amount of information on security systems available to business, and a thriving labor market for IT security specialists to whom companies can turn in designing their security programs.

Nevertheless, it would be naive in the extreme to believe that the FTC will choose to abandon its efforts to apply Section 5 to this area.  With that in mind, let us examine more closely the problems with existing FTC Section 5 data security settlements, with an eye to determining what improvements the Commission might beneficially make if it is so inclined.
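The marginal calculus described above can be illustrated with a toy sketch. Everything here is hypothetical – the benefit and cost curves are stand-ins chosen only to show the stopping rule (invest while the marginal benefit of one more unit of security still covers its marginal cost):

```python
# Sketch of the marginal-analysis point in the text: invest in data
# security only up to the level where the marginal benefit of extra
# protection equals its marginal cost. All curves are hypothetical.

def optimal_investment(marginal_benefit, marginal_cost, levels):
    """Return the last investment level at which one more unit of
    security is still worth buying (MB >= MC)."""
    best = 0
    for x in levels:
        if marginal_benefit(x) >= marginal_cost(x):
            best = x
        else:
            break
    return best

# Hypothetical: diminishing marginal benefit, constant marginal cost.
mb = lambda x: 100 / (1 + x)   # benefit of the x-th unit of security
mc = lambda x: 10              # each unit costs the same

print(optimal_investment(mb, mc, range(1, 50)))
# -> 9: the 10th unit's benefit (100/11, about 9.1) no longer covers its cost
```

The regulatory point follows directly: a mandate pushing firms past that stopping point (“the strongest protection regardless of cost”) makes consumers worse off on net, just as stopping short of it does.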

The HTC settlement illustrates the breadth of decree-specific obligations the FTC has imposed.  HTC was required to “establish a comprehensive security program, undergo independent security assessments for 20 years, and develop and release software patches to fix security vulnerabilities.”  HTC also agreed to detailed security protocols that would be monitored by a third party.  The FTC did not cite specific harmful security breaches to justify these sanctions; HTC was merely charged with a failure to “take reasonable steps” to secure smartphone software.  Nor did the FTC explain what specific steps short of the decree requirements would have been deemed “reasonable.”

The HTC settlement exemplifies the FTC’s “security by design” approach to data security, under which the agency informs firms after the fact what they should have done, without exploring what they might have done to pass muster.  Although some academics view the FTC settlements as contributing usefully to a developing “common law” of data privacy, supporters of this approach ignore its inherent ex ante vagueness and the costs decree-specific mandates impose on companies.

Another serious problem stems from the enormous investigative and litigation costs associated with challenging an FTC complaint in this area – costs that incentivize most firms to quickly accede to consent decree terms even if they are onerous.  The sad case of LabMD, a small cancer detection lab, serves as a warning to businesses that choose to engage in long-term administrative litigation against the FTC.  Due to the cost burden of the FTC’s multi-year litigation against it (which is still ongoing as of this writing), LabMD was forced to wind down its operations, and it stopped accepting new patients in January 2014.

The LabMD case suggests that FTC data security initiatives, carried out without regard to the scale or resources of the affected companies, have the potential to harm competition.  Relatively large companies are much better able to absorb FTC litigation and investigation costs.  Thus, it may be in the large firms’ interests to encourage the FTC to adopt intrusive and burdensome new data security initiatives, as part of a “raising rivals’ costs” strategy to cripple or eliminate smaller rivals.  As a competition and consumer welfare watchdog, the FTC should keep this risk in mind when weighing the merits of expanding data security regulations or launching new data security investigations.

A common thread runs through the FTC’s myriad activities in the data privacy “space” – the FTC’s failure to address whether its actions are cost-beneficial.  There is little doubt that the FTC’s enforcement actions impose substantial costs, both on businesses subject to decrees and investigations, and on other data-holding firms that must contemplate business system redesigns to forestall potential future liability.  As a result, business innovation suffers.  Furthermore, those costs are passed on at least in part to consumers, in the form of higher prices and a reduction in the quality and quantity of new products and services.  The FTC should, consistent with its consumer welfare mandate, carefully weigh these costs against the presumed benefits flowing from a reduction in future data breaches.  A failure to carry out a cost-benefit appraisal, even a rudimentary one, makes it impossible to determine whether the FTC’s much-touted data privacy projects are enhancing or reducing consumer welfare.

FTC Commissioner Josh Wright recently gave voice to the importance of cost-benefit analysis in commenting on the FTC’s data brokerage report – a comment that applies equally well to all of the FTC’s data protection and privacy initiatives:

“I would . . . like to see evidence of the incidence and scope of consumer harms rather than just speculative hypotheticals about how consumers might be harmed before regulation aimed at reducing those harms is implemented.  Accordingly, the FTC would need to quantify more definitively the incidence or value of data broker practices to consumers before taking or endorsing regulatory or legislative action. . . .  We have no idea what the costs for businesses would be to implement consumer control over any and all data shared by data brokers and to what extent these costs would ultimately be passed on to consumers.  Once again, a critical safeguard to insure against the risk that our recommendations and actions do more harm than good for consumers is to require appropriate and thorough cost-benefit analysis before acting.  This failure could be especially important where the costs to businesses from complying with any recommendations are high, but where the ultimate benefit generated for consumers is minimal. . . .  If consumers have minimal concerns about the sharing of certain types of information – perhaps information that is already publicly available – I think we should know that before requiring data brokers to alter their practices and expend resources and incur costs that will be passed on to consumers.”

The FTC could take several actions to improve its data enforcement policies.  First and foremost, it could issue Data Security Guidelines clarifying that the FTC’s enforcement actions regarding data security will (1) be rooted in cost-benefit analysis, (2) take into account investigative costs, and (3) credit reasonable industry self-regulatory efforts.  (Such Guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses.  They should studiously avoid dictating to industry the data security principles that firms should adopt.)  Second, it could establish an FTC website portal that features continuously updated information on the Guidelines and other sources of guidance on data security.  Third, it could employ cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection.  Fourth, it could urge its foreign counterpart agencies to adopt similar cost-benefit approaches to data security regulation.

Congress could also improve the situation by enacting a narrowly tailored statute that preempts all state regulation related to data protection.  Forty-seven states now have legislation in this area, adding to the burdens already imposed by federal law.  Furthermore, differences among state laws render the data protection efforts of merchants who may have to safeguard data from across the country enormously complex and onerous.  Given the inherently interstate nature of electronic commerce and associated data breaches, preemption of state regulation in this area would comport with federalism principles.  (Consistent with public choice realities, there is always the risk, of course, that Congress might be tempted to go beyond narrow preemption and create new and unnecessary federal powers in this area.  I believe, however, that such a risk is worth running, given the potential magnitude of excessive regulatory burdens and the ability to articulate a persuasive public policy case for narrow preemptive legislation.)

Stay tuned for a fuller discussion of these issues by me.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that adheres to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.