The free market position on telecom reform has become rather confused of late. Erstwhile conservative Senator Thune is now cosponsoring a version of Senator Rockefeller’s previously proposed video reform bill, bundled into satellite legislation (the Satellite Television Access and Viewer Rights Act or “STAVRA”) that would also include a provision dubbed “Local Choice.” Some free marketeers have defended the bill as a step in the right direction.

Although it looks as if the proposal may be losing steam this Congress, the legislation has been described as a “big and bold idea,” and it’s by no means off the menu. But it should be.

It has been said that politics makes for strange bedfellows. Indeed, people who disagree on just about everything can sometimes unite around a common perceived enemy. Take carriage disputes, for instance. Perhaps because, for some people, a day without The Bachelor is simply a day lost, an unlikely alliance of pro-regulation activists like Public Knowledge and industry stalwarts like Dish has emerged to oppose the ability of copyright holders to withhold content as part of carriage negotiations.

Senator Rockefeller’s Online Video Bill was the catalyst for the Local Choice amendments to STAVRA. Rockefeller’s bill did, well, a lot of terrible things, from imposing certain net neutrality requirements, to overturning the Supreme Court’s Aereo decision, to adding even more complications to the already Byzantine morass of video programming regulations.

But putting Senator Thune’s lipstick on Rockefeller’s pig can’t save the bill, and some of the worst problems from Senator Rockefeller’s original proposal remain.

Among other things, the new bill is designed to weaken the ability of copyright owners to negotiate with distributors, most notably by taking away their ability to withhold content during carriage disputes and by forcing TV stations to sell content on an a la carte basis.

Video distribution issues are complicated — at least under current law. But at root these are just commercial contracts and, like any contracts, they rely on a couple of fundamental principles.

First is the basic property right. The Supreme Court (at least somewhat) settled this for now (in Aereo), by protecting the right of copyright holders to be compensated for carriage of their content. With this baseline, distributors must engage in negotiations to obtain content, rather than employing technological workarounds and exploiting legal loopholes.

Second is the related ability of contracts to govern the terms of trade. A property right isn’t worth much if its owner can’t control how it is used, governed or exchanged.

Finally, and derived from these, is the issue of bargaining power. Good-faith negotiations require both sides not to act strategically by intentionally causing negotiations to break down. But if negotiations do break down, parties need to be able to protect their rights. When content owners are not able to withhold content in carriage disputes, they are put in an untenable bargaining position. This invites bad faith negotiations by distributors.

The STAVRA/Local Choice proposal would undermine the property rights and freedom of contract that bring The Bachelor to your TV, and the proposed bill does real damage by curtailing the scope of the property right in TV programming and restricting the range of contracts available for networks to license their content.

The bill would require that essentially all broadcast stations that elect retrans make their content available a la carte — thus unbundling some of the proverbial sticks that make up the traditional property right. It would also establish MVPD pass-through of each local affiliate. Subscribers would pay a fee determined by the affiliate, and each station would have to be offered on an unbundled basis, without any minimum tier required – meaning an MVPD has to offer local stations to its customers with no markup, on an a la carte basis, if the station doesn’t elect must-carry. It would also direct the FCC to open a rulemaking to determine whether broadcasters should be prohibited from withholding their content online during a dispute with an MVPD.

“Free market” supporters of the bill assert something like “if we don’t do this to stop blackouts, we won’t be able to stem the tide of regulation of broadcasters.” Presumably this would end blackouts of broadcast programming: If you’re an MVPD subscriber, and you pay the $1.40 (or whatever) for CBS, you get it, period. The broadcaster sets an annual per-subscriber rate; MVPDs pass it on and retransmit only to subscribers who opt in.

But none of this is good for consumers.

When transaction costs are positive, negotiations sometimes break down. If the original right is placed in the wrong hands, then contracting may not assure the most efficient outcome. I think it was Coase who said that.

But taking away the ability of content owners to restrict access to their content during a bargaining dispute effectively places the right to content in the hands of distributors. Obviously, this change in bargaining position will depress the value of content. Placing the rights in the hands of distributors reduces the incentive to create content in the first place; this is why the law protects copyright to begin with. But it also reduces the ability of content owners and distributors to reach innovative agreements and contractual arrangements (like certain promotional deals) that benefit consumers, distributors and content owners alike.
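A stylized bargaining sketch makes the point concrete. The following is a minimal illustration, assuming a symmetric Nash bargaining split and purely hypothetical numbers; it shows only the mechanism, namely that changing the parties’ payoffs in the event of breakdown changes the negotiated division of surplus:

```python
# A minimal, hypothetical sketch of symmetric Nash bargaining over carriage fees.
# All numbers are illustrative; the point is only that breakdown ("disagreement")
# payoffs drive the negotiated split.

def nash_split(joint_surplus, d_owner, d_distributor):
    """Each side gets its disagreement payoff plus half the gains from trade."""
    gains = joint_surplus - d_owner - d_distributor
    return (d_owner + gains / 2, d_distributor + gains / 2)

V = 10.0  # hypothetical joint surplus from carrying the programming

# Owner may withhold content: a breakdown leaves both sides with nothing.
print(nash_split(V, d_owner=0.0, d_distributor=0.0))  # -> (5.0, 5.0)

# Owner may NOT withhold: the distributor keeps carrying the content during
# the dispute, so its breakdown payoff is high (say, 8 of the 10).
print(nash_split(V, d_owner=0.0, d_distributor=8.0))  # -> (1.0, 9.0)
```

When the distributor keeps carrying the programming while a dispute drags on, the owner’s negotiated share collapses, even though nothing about the underlying value of the content has changed.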

The mandating of a la carte licensing doesn’t benefit consumers, either. Bundling is generally pro-competitive and actually gives consumers more content than they would otherwise have. The bill’s proposal to force programmers to sell content to consumers a la carte may actually lead to higher overall prices for less content. Not much of a bargain.
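A stylized numerical sketch, in the spirit of the classic block-booking example and using purely hypothetical valuations, shows why:

```python
# Two viewers with negatively correlated (hypothetical) valuations for two channels.
# The seller picks revenue-maximizing prices in each regime.
from itertools import product

valuations = {
    "viewer_A": {"sports": 9, "drama": 1},
    "viewer_B": {"sports": 1, "drama": 9},
}

def best_a_la_carte():
    """Search per-channel prices; a viewer buys a channel iff value >= price."""
    best = (0, 0)  # (revenue, channel-subscriptions delivered)
    for p_sports, p_drama in product(range(1, 11), repeat=2):
        revenue = units = 0
        for v in valuations.values():
            if v["sports"] >= p_sports:
                revenue, units = revenue + p_sports, units + 1
            if v["drama"] >= p_drama:
                revenue, units = revenue + p_drama, units + 1
        best = max(best, (revenue, units))  # maximize revenue first
    return best

def best_bundle():
    """Search one bundle price; a viewer buys iff total value >= price."""
    best = (0, 0)
    for p in range(1, 21):
        buyers = sum(1 for v in valuations.values() if v["sports"] + v["drama"] >= p)
        best = max(best, (p * buyers, 2 * buyers))
    return best

print("a la carte (revenue, channels delivered):", best_a_la_carte())  # (18, 2)
print("bundle     (revenue, channels delivered):", best_bundle())      # (20, 4)
```

With negatively correlated tastes, the revenue-maximizing a la carte prices deliver each viewer one channel for $9, while a $10 bundle delivers both channels to both viewers: more content for consumers, and more revenue to fund its creation.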

There are plenty of other ways this is bad for consumers, even if it narrowly “protects” them from blackouts. For example, the bill would prohibit a network from making a deal with an MVPD that provides a discount on a bundle including carriage of both its owned broadcast stations as well as the network’s affiliated cable programming. This is not a worthwhile — or free market — trade-off; it is an ill-advised and economically indefensible attack on vertical distribution arrangements — exactly the same thing that animates many net neutrality defenders.

Just as net neutrality’s meddling in commercial arrangements between ISPs and edge providers will ensure a host of unintended consequences, so will the Rockefeller/Thune bill foreclose a host of welfare-increasing deals. In the end, in exchange for never having to go three days without CBS content, the bill will make that content more expensive, limit the range of programming offered, and lock video distribution into a prescribed business model.

Former FCC Commissioner Rob McDowell sees the same hypocritical connection between net neutrality and broadcast regulations like the Local Choice proposal:

According to comments filed with the FCC by Time Warner Cable and the National Cable and Telecommunications Association, broadcasters should not be allowed to take down or withhold the content they produce and own from online distribution even if subscribers have not paid for it—as a matter of federal law. In other words, edge providers should be forced to stream their online content no matter what. Such an overreach, of course, would lay waste to the economics of the Internet. It would also violate the First Amendment’s prohibition against state-mandated, or forced, speech—the flip side of censorship.

It is possible that the cable companies figure that subjecting powerful broadcasters to anti-free speech rules will shift the political momentum in the FCC and among the public away from net neutrality. But cable’s anti-free speech arguments play right into the hands of the net-neutrality crowd. They want to place the entire Internet ecosystem, physical networks, content and apps, in the hands of federal bureaucrats.

While cable providers have generally opposed net neutrality regulation, there is, apparently, some support among them for regulations that would apply to the edge. The Rockefeller/Thune proposal is just a replay of this constraint — this time by forcing programmers to allow retransmission of broadcast content under terms set by Congress. While “what’s good for the goose is good for the gander” sounds appealing in theory, here it is simply doubling down on a terrible idea.

What it reveals most of all is that true neutrality advocates don’t want government control to be limited to ISPs — rather, progressives like Rockefeller (and apparently some conservatives, like Thune) want to subject the whole apparatus — distribution and content alike — to intrusive government oversight in order to “protect” consumers (a point Fred Campbell deftly expands upon here and here).

You can be sure that, if the GOP supports broadcast a la carte, it will pave the way for Democrats (and moderates like McCain who back a la carte) to expand anti-consumer unbundling requirements to cable next. Nearly every economic analysis has concluded that mandated a la carte pricing of cable programming would be harmful to consumers. There is no reason to think that applying it to broadcast channels would be any different.

What’s more, the logical extension of the bill is to apply unbundling to all MVPD channels and to saddle them with contract restraints, as well — and while we’re at it, why not unbundle House of Cards from Orange is the New Black? The Rockefeller bill may have started in part as an effort to “protect” OVDs, but there’ll be no limiting this camel once its nose is under the tent. Like it or not, channel unbundling is arbitrary — why not unbundle by program, episode, studio, production company, etc.?

There is simply no principled basis for the restraints in this bill, and thus there will be no limit to its reach. Indeed, “free market” defenders of the Rockefeller/Thune approach may well be supporting a bill that ultimately leads to something like compulsory, a la carte licensing of all video programming. As I noted in my testimony last year before the House Commerce Committee on the satellite video bill:

Unless we are prepared to bear the consumer harm from reduced variety, weakened competition and possibly even higher prices (and absolutely higher prices for some content), there is no economic justification for interfering in these business decisions.

So much for property rights — and so much for vibrant video programming.

That there is something wrong with the current system is evident to anyone who looks at it. As Gus Hurwitz noted in recent testimony on Rockefeller’s original bill,

The problems with the existing regulatory regime cannot be understated. It involves multiple statutes implemented by multiple agencies to govern technologies developed in the 60s, 70s, and 80s, according to policy goals from the 50s, 60s, and 70s. We are no longer living in a world where the Rube Goldberg of compulsory licenses, must carry and retransmission consent, financial interest and syndication exclusivity rules, and the panoply of Federal, state, and local regulations makes sense – yet these are the rules that govern the video industry.

While video regulation is in need of reform, this bill is not an improvement. In the short run it may ameliorate some carriage disputes, but it will do so at the expense of continued programming vibrancy and distribution innovations. The better way to effect change would be to abolish the Byzantine regulations that simultaneously attempt to place thumbs on both sides of the scale, and to rely on free market negotiations with a copyright baseline and antitrust review for actual abuses.

But STAVRA/Local Choice is about as far from that as you can get.

The Wall Street Journal dropped an FCC bombshell last week, although I’m not sure anyone noticed. In an article ostensibly about the possible role that MFNs might play in the Comcast/Time Warner Cable merger, the Journal noted that

The FCC is encouraging big media companies to offer feedback confidentially on Comcast’s $45-billion offer for Time Warner Cable.

Not only is the FCC holding secret meetings, but it is encouraging Comcast’s and TWC’s commercial rivals to hold confidential meetings and to submit information under seal. This is not a normal part of ex parte proceedings at the FCC.

In the typical proceeding of this sort – known as a “permit-but-disclose proceeding” – ex parte communications are subject to a host of disclosure requirements delineated in 47 CFR 1.1206. But section 1.1200(a) of the Commission’s rules permits the FCC, in its discretion, to modify the applicable procedures if the public interest so requires.

If you dig deeply into the Public Notice seeking comments on the merger, you find a single sentence stating that

Requests for exemptions from the disclosure requirements pursuant to section 1.1204(a)(9) may be made to Jonathan Sallet [the FCC's General Counsel] or Hillary Burchuk [who heads the transaction review team].

Similar language appears in the AT&T/DirecTV transaction Public Notice.

This leads to the cited rule exempting certain ex parte presentations from the usual disclosure requirements in such proceedings, including the referenced one that exempts ex partes from disclosure when

The presentation is made pursuant to an express or implied promise of confidentiality to protect an individual from the possibility of reprisal, or there is a reasonable expectation that disclosure would endanger the life or physical safety of an individual

So the FCC is inviting “media companies” to offer confidential feedback and to hold secret meetings that the FCC will hold confidential because of “the possibility of reprisal” based on language intended to protect individuals.

Such deviations from the standard permit-but-disclose procedures are extremely rare. As in non-existent. I guess there might be other examples, but I was unable to find a single one in a quick search. And I’m willing to bet that the language inviting confidential communications in the PN hasn’t appeared before – and certainly not in a transaction review.

It is worth pointing out that the language in 1.1204(a)(9) is remarkably similar to language that appears in the Freedom of Information Act. As the DOJ notes regarding that exemption:

Exemption 7(D) provides protection for “records or information compiled for law enforcement purposes [which] could reasonably be expected to disclose the identity of a confidential source” … to ensure that “confidential sources are not lost through retaliation against the sources for past disclosure or because of the sources’ fear of future disclosure.”

Surely the fear-of-reprisal rationale for confidentiality makes sense in that context – but here? And invoked to elicit secret meetings and to keep confidential the information of corporations rather than individuals, it makes even less sense (and doesn’t even obviously comply with the rule itself). It is not as though – as far as I know – someone approached the Commission with stated fears and requested it implement a procedure for confidentiality in these particular reviews.

Rather, this is the Commission inviting non-transparent process in the midst of a heated, politicized and heavily-scrutinized transaction review.

The optics are astoundingly bad.

Unfortunately, this kind of behavior seems to be par for the course for the current FCC. As Commissioner Pai has noted on more than one occasion, the minority commissioners have been routinely kept in the dark with respect to important matters at the Commission – not coincidentally, in other highly-politicized proceedings.

What’s particularly troubling is that, for all its faults, the FCC’s process is typically extremely open and transparent. Public comments, endless ex parte meetings, regular Open Commission Meetings are all the norm. And this is as it should be. Particularly when it comes to transactions and other regulated conduct for which the regulated entity bears the burden of proving that its behavior does not offend the public interest, it is obviously necessary to have all of the information – to know what might concern the Commission and to make a case respecting those matters.

The kind of arrogance on display of late, and the seeming abuse of process that goes along with it, hearkens back to the heady days of Kevin Martin’s tenure as FCC Chairman – a tenure described as “dysfunctional” and noted for its abuse of process.

All of which should stand as a warning to the vocal, pro-regulatory minority pushing for the FCC to proclaim enormous power to regulate net neutrality – and broadband generally – under Title II. Just as Chairman Martin tried to manipulate diversity rules to accomplish his pet project of cable channel unbundling, some future Chairman will undoubtedly claim authority under Title II to accomplish some other unintended, but politically expedient, objective — and it may not be one the self-proclaimed consumer advocates like when it happens.

Bad as that risk may be, it is only made more likely by regulatory reviews undertaken in secret. Whatever impelled the Chairman to invite unprecedented secrecy into these transaction reviews, it seems to be of a piece with a deepening politicization and abuse of process at the Commission. It’s both shameful – and deeply worrying.

[Cross posted at the CPIP Blog.]

By Mark Schultz & Adam Mossoff

A handful of increasingly noisy critics of intellectual property (IP) have emerged within free market organizations. Both the emergence and vehemence of this group has surprised most observers, since free market advocates generally support property rights. It’s true that there has long been a strain of IP skepticism among some libertarian intellectuals. However, the surprised observer would be correct to think that the latest critique is something new. In our experience, most free market advocates see the benefit and importance of protecting the property rights of all who perform productive labor – whether the results are tangible or intangible.

How do the claims of this emerging critique stand up? We have had occasion to examine the arguments of free market IP skeptics before. (For example, see here, here, here.) So far, we have largely found their claims wanting.

We have yet another occasion to examine their arguments, and once again we are underwhelmed and disappointed. We recently posted an essay at AEI’s Tech Policy Daily prompted by an odd report recently released by the Mercatus Center, a free-market think tank. The Mercatus report attacks recent research that supposedly asserts, in the words of the authors of the Mercatus report, that “the existence of intellectual property in an industry creates the jobs in that industry.” They contend that this research “provide[s] no theoretical or empirical evidence to support” its claims of the importance of intellectual property to the U.S. economy.

Our AEI essay responds to these claims by explaining how these IP skeptics both mischaracterize the studies that they are attacking and fail to acknowledge the actual historical and economic evidence on the connections between IP, innovation, and economic prosperity. We recommend that anyone who may be confused by the assertions of any IP skeptics waving the banner of property rights and the free market read our essay at AEI, as well as our previous essays in which we have called out similarly odd statements from Mercatus about IP rights.

The Mercatus report, though, exemplifies many of the concerns we raise about these IP skeptics, and so it deserves to be considered at greater length.

For instance, something we touched on briefly in our AEI essay is the fact that the authors of this Mercatus report offer no empirical evidence of their own within their lengthy critique of several empirical studies, and at best they invoke thin theoretical support for their contentions.

This is odd if only because they are critiquing several empirical studies that develop careful, balanced and rigorous models for testing one of the biggest economic questions in innovation policy: What is the relationship between intellectual property and jobs and economic growth?

Apparently, the authors of the Mercatus report presume that the burden of proof is entirely on the proponents of IP, and that a bit of hand waving using abstract economic concepts and generalized theory is enough to defeat arguments supported by empirical data and plausible methodology.

This move raises a foundational question that frames all debates about IP rights today: On whom should the burden rest? On those who claim that IP has beneficial economic effects? Or on those who claim otherwise, such as the authors of the Mercatus report?

The burden of proof here is an important issue. Too often, recent debates about IP rights have started from an assumption that the entire burden of proof rests on those investigating or defending IP rights. Quite often, IP skeptics appear to believe that their criticism of IP rights needs little empirical or theoretical validation, beyond talismanic invocations of “monopoly” and anachronistic assertions that the Framers of the US Constitution were utilitarians.

As we detail in our AEI essay, though, the problem with arguments like those made in the Mercatus report is that they contradict history and empirics. For the evidence that supports this claim, including citations to the many studies that are ignored by the IP skeptics at Mercatus and elsewhere, check out the essay.

Despite these historical and economic facts, one may still believe that the US would enjoy even greater prosperity without IP. But IP skeptics who believe in this counterfactual world face a challenge. As a preliminary matter, they ought to acknowledge that they are the ones swimming against the tide of history and prevailing belief. More important, the burden of proof is on them – the IP skeptics – to explain why the U.S. has long prospered under an IP system they find so odious and destructive of property rights and economic progress, while countries that largely eschew IP have languished. This obligation is especially heavy for one who seeks to undermine empirical work such as the USPTO Report and other studies.

In sum, you can’t beat something with nothing. For IP skeptics to contest this evidence, they should offer more than polemical and theoretical broadsides. They ought to stop making faux originalist arguments that misstate basic legal facts about property and IP, and instead offer their own empirical evidence. The Mercatus report, however, is content to confine its empirics to critiques of others’ methodology – including claims their targets did not make.

For example, in addition to the several strawman attacks identified in our AEI essay, the Mercatus report constructs another strawman in its discussion of studies of copyright piracy done by Stephen Siwek for the Institute for Policy Innovation (IPI). Mercatus inaccurately and unfairly implies that Siwek’s studies on the impact of piracy in film and music assumed that every copy pirated was a sale lost – this is known as “the substitution rate problem.” In fact, Siwek’s methodology tackled that exact problem.

IPI and Siwek never seem to get credit for this, but Siwek was careful to avoid the one-to-one substitution rate estimate that Mercatus and others foist on him and then critique as empirically unsound. If one actually reads his report, it is clear that Siwek assumes that bootleg physical copies resulted in a 65.7% substitution rate, while illegal downloads resulted in a 20% substitution rate. Siwek’s methodology anticipates and renders moot the critique that Mercatus makes anyway.
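A back-of-the-envelope sketch shows how much that choice matters. The unit counts and price below are hypothetical; only the two substitution rates come from Siwek’s report as described above:

```python
# Hypothetical piracy volumes and price; only the substitution rates (65.7%
# for bootleg physical copies, 20% for illegal downloads) are Siwek's.
pirated_physical = 1_000_000   # hypothetical bootleg physical copies
pirated_downloads = 5_000_000  # hypothetical illegal downloads
retail_price = 15.00           # hypothetical price of a legitimate copy

def estimated_losses(sub_physical, sub_download):
    """Displaced sales: pirated units times the rate at which each pirated
    unit substitutes for a legitimate purchase, times the retail price."""
    displaced = (pirated_physical * sub_physical
                 + pirated_downloads * sub_download)
    return displaced * retail_price

print(f"one-to-one assumption: ${estimated_losses(1.0, 1.0):,.0f}")     # $90,000,000
print(f"Siwek's actual rates:  ${estimated_losses(0.657, 0.20):,.0f}")  # $24,855,000
```

Under the rates Siwek actually applied, the hypothetical loss estimate comes out at less than a third of the one-to-one figure, which is exactly why attributing the one-to-one assumption to him misrepresents the study.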

After mischaracterizing these studies and their claims, the Mercatus report goes further in attacking them as supporting advocacy on behalf of IP rights. Yes, the empirical results have been used by think tanks, trade associations and others to support advocacy on behalf of IP rights. But does that advocacy make the questions asked and resulting research invalid? IP skeptics would have trumpeted results showing that IP-intensive industries had a minimal economic impact, just as Mercatus policy analysts have done with alleged empirical claims about IP in other contexts. In fact, IP skeptics at free-market institutions repeatedly invoke studies in policy advocacy that allegedly show harm from patent litigation, despite these studies suffering from far worse problems than anything alleged in their critiques of the USPTO and other studies.

Finally, we noted in our AEI essay how it was odd to hear a well-known libertarian think tank like Mercatus advocate for more government-funded programs, such as direct grants or prizes, as viable alternatives to individual property rights secured to inventors and creators. There is even more economic work being done beyond the empirical studies we cited in our AEI essay on the critical role that property rights in innovation serve in a flourishing free market, as well as work on the economic benefits of IP rights over other governmental programs like prizes.

Today, we are in the midst of a full-blown moral panic about the alleged evils of IP. It’s alarming that libertarians – the very people who should be defending all property rights – have jumped on this populist bandwagon. Imagine if free market advocates at the turn of the Twentieth Century had asserted that there was no evidence that property rights had contributed to the Industrial Revolution. Imagine them joining in common cause with the populist Progressives to suppress the enforcement of private rights and the enjoyment of economic liberty. It’s a bizarre image, but we are seeing its modern-day equivalent, as these libertarians join the chorus of voices arguing against property and private ordering in markets for innovation and creativity.

It’s also disconcerting that Mercatus appears to abandon its exceptionally high standards for scholarly work-product when it comes to IP rights. Its economic analyses and policy briefs on such subjects as telecommunications regulation, financial and healthcare markets, and the regulatory state have rightly made Mercatus a respected free-market institution. It’s unfortunate that it has lent this justly earned prestige and legitimacy to stale and derivative arguments against property and private ordering in the innovation and creative industries. It’s time to embrace the sound evidence and back off the rhetoric.

There is a consensus in America that we need to control health care costs and improve the delivery of health care. After a long debate on health care reform and careful scrutiny of health care markets, there seems to be agreement that the unintegrated, “siloed approach” to health care is inefficient, costly, and contrary to the goal of improving care. But some antitrust enforcers — most notably the FTC — are standing in the way.

Enlightened health care providers are responding to this consensus by entering into transactions that will lead to greater clinical and financial integration, facilitating a movement from volume-based to value-based delivery of care. And many aspects of the Affordable Care Act encourage this path to integration. Yet when the market seeks to address these critical concerns about our health care system, the FTC and some state Attorneys General take positions diametrically opposed to sound national health care policy as adopted by Congress and implemented by the Department of Health and Human Services.

To be sure, not all state antitrust enforcers stand in the way of health care reform. For example, many states, including New York, Pennsylvania and Massachusetts, seem willing to permit hospital mergers, even in concentrated markets, with an agreement for continued regulation. At the same time, however, the FTC has been aggressively challenging integration, taking the stance that hospital mergers will raise prices by giving those hospitals greater leverage in negotiations.

The distance between HHS and the FTC in DC is about 6 blocks, but in health care policy they seem to be miles apart.

The FTC’s skepticism about integration is an old story. As I have discussed previously, during the last decade the agency challenged more than 30 physician collaborations even though those cases lacked any evidence that the collaborations led to higher prices. And, when physicians asked for advice on collaborations, it took the Commission on average more than 436 days to respond to those requests (about as long as it took Congress to debate and enact the Affordable Care Act).

The FTC is on a recent winning streak in challenging hospital mergers. But those were primarily simple cases with direct competition between hospitals in the same market with very high levels of concentration. The courts did not struggle long in these cases, because the competitive harm appeared straightforward.

Far more controversial is when a hospital acquires a physician practice. This type of vertical integration seems precisely what the advocates for health care reform are crying out for. The lack of integration between physicians and hospitals is a core to the problems in health care delivery. But the antitrust law is entirely solicitous of these types of vertical mergers. There has not been a vertical merger successfully challenged in the courts since 1980 – the days of reruns of the TV show Dr. Kildare. And even the supposedly pro-enforcement Obama Administration has not gone to court to challenge a vertical merger, and the Obama FTC has not even secured a merger consent under a vertical theory.

The case in which the FTC has decided to “bet the house” is its challenge to St. Luke’s Health System’s acquisition of Saltzer Medical Group in Nampa, Idaho.

St. Luke’s operates the largest hospital in Boise, and Saltzer is the largest physician practice in Nampa, roughly 20 miles away. But rather than recognizing that this was a vertical affiliation designed to integrate care and to promote a transition to a system in which the provider takes the risk of overutilization, the FTC characterized the transaction as purely horizontal – no different from the merger of two hospitals. In that manner, the FTC sought to paint a picture of concentration levels designed to assure victory.

But back to the reasons why integration is essential. It is undisputed that provider integration is the key to improving American health care. Americans pay substantially more than any other industrialized nation for health care services: 17.2 percent of gross domestic product. Furthermore, these higher costs are not associated with better overall care or greater access for patients. As noted during the debate on the Affordable Care Act, the American health care system’s higher costs and lower quality and access are mostly associated with the usage of a fee-for-service system that pays for each individual medical service, and the “siloed approach” to medicine in which providers work autonomously and do not coordinate to improve patient outcomes.

In order to lower health care costs and improve care, many providers have sought to transform health care into a value-based, patient-centered approach. To institute such a health care initiative, medical staff, physicians, and hospitals must clinically integrate and align their financial incentives. Integrated providers utilize financial risk-sharing, share electronic records and data, and implement quality measures in order to provide the best patient care.

The most effective means of ensuring full-scale integration is through a tight affiliation, most often achieved through a merger. Unlike contractual arrangements that are costly, time-sensitive, and complicated by an outdated health care regulatory structure, integrated affiliations ensure that entities can effectively combine and promote structural change throughout the newly formed organization.

For nearly five weeks of trial in Boise, St. Luke’s and the FTC fought over these conflicting visions of integration and health care policy. Ultimately, the court decided that the supposed Nampa primary care physician market posited by the FTC would become far more concentrated, and that the merger would substantially lessen competition for “Adult Primary Care Services” by raising prices in Nampa. As such, the district court ordered an immediate divestiture.

Rarely, however, has an antitrust court expressed such anguish at its decision. The district court readily “applauded [St. Luke’s] for its efforts to improve the delivery of healthcare.” It acknowledged the positive impact the merger would have on health care within the region. The court further noted that Saltzer had attempted to coordinate with other providers via loose affiliations but had failed to reap any benefits. Due to Saltzer’s lack of integration, Saltzer physicians had limited “the number of Medicaid or uninsured patients they could accept.”

According to the district court, the combination of St. Luke’s and Saltzer would “improve the quality of medical care.” Along with utilizing the same electronic medical records system and giving the Saltzer physicians access to sophisticated quality metrics designed to improve their practices, the parties would improve care by abandoning fee-for-service payment for all employed physicians and instituting population health management, reimbursing the physicians via risk-based payment initiatives.

As noted by the district court, these stated efficiencies would improve patient outcomes “if left intact.” Along with improving coordination and quality of care, the merger, as noted by an amicus brief submitted by the International Center for Law & Economics and the Medicaid Defense Fund to the Ninth Circuit, has also already expanded access to Medicaid and uninsured patients by ensuring previously constrained Saltzer physicians can offer services to the most needy.

The court ultimately was not persuaded by the demonstrated procompetitive benefits. Instead, the district court relied on the FTC’s misguided arguments and determined that the stated efficiencies were not “merger-specific,” because such efficiencies could potentially be achieved via other organizational structures. The district court did not analyze the potential success of substitute structures in achieving the stated efficiencies; instead, it relied on the mere existence of alternative provider structures. As a result, as ICLE and the Medicaid Defense Fund point out:

By placing the ultimate burden of proving efficiencies on the Appellants and applying a narrow, impractical view of merger specificity, the court has wrongfully denied application of known procompetitive efficiencies. In fact, under the court’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to oppose untested, theoretical less restrictive structural alternatives.

Notably, the district court’s divestiture order has been stayed by the Ninth Circuit. The appeal on the merits is expected to be heard some time this autumn. Along with reviewing the relevant geographic market and usage of divestiture as a remedy, the Ninth Circuit will also analyze the lower court’s analysis of the merger’s procompetitive efficiencies. For now, the stay order is a limited victory for underserved patients and the merging defendants. While such a ruling is not determinative of the Ninth Circuit’s decision on the merits, it does demonstrate that the merging parties have at least a reasonable possibility of success.

As one might imagine, the Ninth Circuit decision is of great importance to the antitrust and health care reform community. If the district court’s ruling is upheld, it could deter health care providers from further integrating via mergers, a precedent antithetical to the very goals of health care reform. However, if the Ninth Circuit finds the merger does not substantially lessen competition, then procompetitive vertical integration is less likely to be derailed by misapplication of the antitrust laws. The importance and impact of such a decision on American patients cannot be overstated.

The U.S. Federal Trade Commission (FTC) continues to expand its presence in online data regulation.  On August 13 the FTC announced a forthcoming workshop to explore appropriate policies toward “big data,” a term used to refer to advancing technologies that are dramatically expanding the commercial collection, analysis, use, and storage of data.  This initiative follows on the heels of the FTC’s May 2014 data broker report, which recommended that Congress impose a variety of requirements on companies that legally collect and sell consumers’ personal information.  (Among other requirements, companies would be required to create consumer data “portals” and implement business procedures that allow consumers to edit and suppress use of their data.)  The FTC also is calling for legislation that would enhance its authority over data security standards and empower it to issue rules requiring companies to inform consumers of security breaches.

These recent regulatory initiatives are in addition to the Commission’s active consumer data enforcement efforts.  Some of these efforts are pursuant to three targeted statutory authorizations – the FTC’s Safeguards Rule (promulgated pursuant to the Gramm-Leach-Bliley Act and directed at non-bank financial institutions), the Fair Credit Reporting Act (directed at consumer reporting agencies), and the Children’s Online Privacy Protection Act (directed at children’s information collected online).

The bulk of the FTC’s enforcement efforts, however, stem from its general authority to proscribe unfair or deceptive practices under Section 5(a)(1) of the FTC Act.  Since 2002, pursuant to its Section 5 powers, the FTC has filed and settled over 50 cases alleging that private companies used deceptive or ineffective (and thus unfair) practices in storing their data.  (Twitter, LexisNexis, ChoicePoint, GMR Transcription Services, GeneLink, Inc., and mobile device provider HTC are just a few of the firms that have agreed to settle.)  Settlements have involved consent decrees under which the company in question agreed to take a wide variety of “corrective measures” to avoid future harm.

As a matter of first principles, one may question the desirability of FTC data security investigations under Section 5.  Firms have every incentive to avoid data protection breaches that harm their customers, in order to avoid the harm to reputation and business values that stems from such lapses.  At the same time, firms must weigh the costs of alternative data protection systems in determining what the appropriate degree of protection should be.  Economic logic indicates that the optimal business policy is not one that focuses solely on implementing the strongest data protection program without regard to cost.  Rather, the optimal policy is to invest in enhancing corporate data security up to the point where the marginal benefits of additional security equal the marginal costs, and no further.

Although individual businesses can only roughly approximate this outcome, one may expect that market forces will tend toward the optimal result, as firms that underinvest in data security lose customers and firms that overinvest in security find themselves priced out of the market.  There is no obvious “market failure” that suggests the market should not work adequately in the data security area.  Indeed, there is a large (and growing) amount of information on security systems available to business, and a thriving labor market for IT security specialists to whom companies can turn in designing their security programs.

Nevertheless, it would be naive in the extreme to believe that the FTC will choose to abandon its efforts to apply Section 5 to this area.  With that in mind, let us examine more closely the problems with existing FTC Section 5 data security settlements, with an eye to determining what improvements the Commission might beneficially make if it is so inclined.
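Before turning to those settlements, it is worth restating the marginal analysis above in textbook form, as a generic sketch of the standard optimization rather than anything drawn from the FTC’s own framework:

```latex
% s = spending on data security, B(s) = expected breach losses avoided,
% C(s) = cost of achieving that level of protection
\max_{s \ge 0} \; B(s) - C(s)
\qquad \Longrightarrow \qquad
B'(s^{*}) = C'(s^{*})
```

In words: invest in security up to the level s* at which the last dollar spent averts exactly one dollar of expected breach losses; beyond that point, additional protection costs more than the harm it prevents.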

The HTC settlement illustrates the breadth of decree-specific obligations the FTC has imposed.  HTC was required to “establish a comprehensive security program, undergo independent security assessments for 20 years, and develop and release software patches to fix security vulnerabilities.”  HTC also agreed to detailed security protocols that would be monitored by a third party.  The FTC did not cite specific harmful security breaches to justify these sanctions; HTC was merely charged with a failure to “take reasonable steps” to secure smartphone software.  Nor did the FTC explain what specific steps short of the decree requirements would have been deemed “reasonable.”

The HTC settlement exemplifies the FTC’s “security by design” approach to data security, under which the agency informs firms after the fact what they should have done, without ever making clear ex ante what would have passed muster.  Although some academics view the FTC settlements as contributing usefully to a developing “common law” of data privacy, supporters of this approach ignore its inherent ex ante vagueness and the costs decree-specific mandates impose on companies.

Another serious problem stems from the enormous investigative and litigation costs associated with challenging an FTC complaint in this area – costs that incentivize most firms to quickly accede to consent decree terms even if they are onerous.  The sad case of LabMD, a small cancer detection lab, serves as a warning to businesses that choose to engage in long-term administrative litigation against the FTC.  Due to the cost burden of the FTC’s multi-year litigation against it (which is still ongoing as of this writing), LabMD was forced to wind down its operations, and it stopped accepting new patients in January 2014.

The LabMD case suggests that FTC data security initiatives, carried out without regard to the scale or resources of the affected companies, have the potential to harm competition.  Relatively large companies are much better able to absorb FTC litigation and investigation costs.  Thus, it may be in the large firms’ interests to encourage the FTC to adopt intrusive and burdensome new data security initiatives, as part of a “raising rivals’ costs” strategy to cripple or eliminate smaller rivals.  As a competition and consumer welfare watchdog, the FTC should keep this risk in mind when weighing the merits of expanding data security regulations or launching new data security investigations.

A common thread runs through the FTC’s myriad activities in data privacy “space” – the FTC’s failure to address whether its actions are cost-beneficial.  There is little doubt that the FTC’s enforcement actions impose substantial costs, both on businesses subject to decree and investigation, and on other firms possessing data that must contemplate business system redesigns to forestall potential future liability.  As a result, business innovation suffers.  Furthermore, those costs are passed on at least in part to consumers, in the form of higher prices and a reduction in the quality and quantity of new products and services.  The FTC should, consistent with its consumer welfare mandate, carefully weigh these costs against the presumed benefits flowing from a reduction in future data breaches.  A failure to carry out a cost-benefit appraisal, even a rudimentary one, makes it impossible to determine whether the FTC’s much touted data privacy projects are enhancing or reducing consumer welfare.

FTC Commissioner Josh Wright recently gave voice to the importance of cost-benefit analysis in commenting on the FTC’s data brokerage report – a comment that applies equally well to all of the FTC’s data protection and privacy initiatives:

“I would . . . like to see evidence of the incidence and scope of consumer harms rather than just speculative hypotheticals about how consumers might be harmed before regulation aimed at reducing those harms is implemented.  Accordingly, the FTC would need to quantify more definitively the incidence or value of data broker practices to consumers before taking or endorsing regulatory or legislative action. . . .  We have no idea what the costs for businesses would be to implement consumer control over any and all data shared by data brokers and to what extent these costs would ultimately be passed on to consumers.  Once again, a critical safeguard to insure against the risk that our recommendations and actions do more harm than good for consumers is to require appropriate and thorough cost-benefit analysis before acting.  This failure could be especially important where the costs to businesses from complying with any recommendations are high, but where the ultimate benefit generated for consumers is minimal. . . .  If consumers have minimal concerns about the sharing of certain types of information – perhaps information that is already publicly available – I think we should know that before requiring data brokers to alter their practices and expend resources and incur costs that will be passed on to consumers.”

The FTC could take several actions to improve its data enforcement policies.  First and foremost, it could issue Data Security Guidelines clarifying that its enforcement actions regarding data security will (1) be rooted in cost-benefit analysis, (2) take into account investigative costs, and (3) credit reasonable industry self-regulatory efforts.  (Such Guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses.  They should studiously avoid dictating to industry the data security principles that firms should adopt.)  Second, it could establish an FTC website portal that features continuously updated information on the Guidelines and other sources of guidance on data security.  Third, it could employ cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection.  Fourth, it could urge its foreign counterpart agencies to adopt similar cost-benefit approaches to data security regulation.

Congress could also improve the situation by enacting a narrowly tailored statute that preempts all state regulation related to data protection.  Forty-seven states now have legislation in this area, which adds additional burdens to those already imposed by federal law.  Furthermore, differences among state laws render the data protection efforts of merchants who may have to safeguard data from across the country enormously complex and onerous.  Given the inherently interstate nature of electronic commerce and associated data breaches, preemption of state regulation in this area would comport with federalism principles.  (Consistent with public choice realities, there is always the risk, of course, that Congress might be tempted to go beyond narrow preemption and create new and unnecessary federal powers in this area.  I believe, however, that such a risk is worth running, given the potential magnitude of excessive regulatory burdens, and the ability to articulate a persuasive public policy case for narrow preemptive legislation.)

Stay tuned for a fuller discussion of these issues by me.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai.  It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that adheres to a SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology.  This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

Microsoft wants you to believe that Google’s business practices stifle competition and harm consumers. Again.

The latest volley in its tiresome and ironic campaign to bludgeon Google with the same regulatory club once used against Microsoft itself is the company’s effort to foment an Android-related antitrust case in Europe.

In a recent polemic, Microsoft consultant (and business school professor) Ben Edelman denounces Google for requiring that, if device manufacturers want to pre-install key Google apps on Android devices, they “must install all the apps Google specifies, with the prominence Google requires, including setting these apps as defaults where Google instructs.” Edelman trots out gasp-worthy “secret” licensing agreements that he claims support his allegation (more on this later).

Similarly, a recent Wall Street Journal article, “Android’s ‘Open’ System Has Limits,” cites Edelman’s claim that limits on the licensing of Google’s proprietary apps mean that the Android operating system isn’t truly open source and comes with “strings attached.”

In fact, along with the Microsoft-funded trade organization FairSearch, Edelman has gone so far as to charge that this “tying” constitutes an antitrust violation. It is this claim that Microsoft and a network of proxies brought to the Commission when their efforts to manufacture a search-neutrality-based competition case against Google failed.

But before getting too caught up in the latest round of anti-Google hysteria, it’s worth noting that the Federal Trade Commission has already reviewed these claims. After a thorough, two-year inquiry, the FTC found the antitrust arguments against Google to be without merit. South Korea’s Fair Trade Commission conducted its own two-year investigation into Google’s Android business practices and dismissed the claims before it as meritless, as well.

Taking on Edelman and FairSearch with an exhaustive scholarly analysis, German law professor Torsten Koerber recently assessed the nature of competition among mobile operating systems and concluded that:

(T)he (EU) Fairsearch complaint ultimately does not aim to protect competition or consumers, as it pretends to. It rather strives to shelter Microsoft from competition by abusing competition law to attack Google’s business model and subvert competition.

It’s time to take a step back and consider the real issues at play.

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.” He writes, “I see no way to reconcile the MADA restrictions with [Android openness].”

Conveniently, however, Edelman neglects to cite to Section 2.6 of the MADA:

The parties will create an open environment for the Devices by making all Android Products and Android Application Programming Interfaces available and open on the Devices and will take no action to limit or restrict the Android platform.

Professor Koerber’s analysis provides a straightforward explanation of the relationship between Android and its OEM licensees:

Google offers Android to OEMs on a royalty-free basis. The licensees are free to download, distribute and even modify the Android code as they like. OEMs can create mobile devices that run “pure” Android…or they can apply their own user interfaces (UI) and thereby hide most of the underlying Android system (e.g. Samsung’s “TouchWiz” or HTC’s “Sense”). OEMs make ample use of this option.

The truth is that the Android operating system remains, as ever, definitively open source — but Android’s openness isn’t really what the fuss is about. In this case, the confusion (or obfuscation) stems from the casual confounding of Google Apps with the Android Operating System. As we’ll see, they aren’t the same thing.

Consider Amazon, which pre-loads no Google applications at all on its Kindle Fire and Fire Phone. Amazon’s version of Android uses Microsoft’s Bing as the default search engine, Nokia provides mapping services, and the app store is Amazon’s own.

Still, Microsoft’s apologists continue to claim that Android licensees can’t choose to opt out of Google’s applications suite — even though, according to a new report from ABI Research, 20 percent of smartphones shipped between May and July 2014 were based on a “Google-less” version of the Android OS. And that number is consistently increasing: Analysts predict that by 2015, 30 percent of Android phones won’t access Google Services.

It’s true that equipment manufacturers who choose the Android operating system have the option to include the suite of integrated, proprietary Google apps and services licensed (royalty-free) under the name Google Mobile Services (GMS). GMS includes Google Search, Maps, Calendar, YouTube and other apps that together define the “Google Android experience” that users know and love.

But Google Android is far from the only Android experience.

Even if a manufacturer chooses to license Google’s apps suite, Google’s terms are not exclusive. Handset makers are free to install competing applications, including other search engines, map applications or app stores.

Although Google requires that Google Search be made easily accessible (hardly a bad thing for consumers, as it is Google Search that finances the development and maintenance of all of the other (free) apps from which Google otherwise earns little to no revenue), OEMs and users alike can (and do) easily install and access other search engines in numerous ways. As Professor Koerber notes:

The standard MADA does not entail any exclusivity for Google Search nor does it mandate a search default for the web browser.

Regardless, integrating key Google apps (like Google Search and YouTube) with other apps the company offers (like Gmail and Google+) is an antitrust problem only if it significantly forecloses competitors from these apps’ markets compared to a world without integrated Google apps, and without pro-competitive justification. Neither is true, despite the unsubstantiated claims to the contrary from Edelman, FairSearch and others.

Consumers and developers expect and demand consistency across devices so they know what they’re getting and don’t have to re-learn basic functions or program multiple versions of the same application. Indeed, Apple’s devices are popular in part because Apple’s closed iOS provides a predictable, seamless experience for users and developers.

But making Android competitive with its tightly controlled competitors requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

Unlike Android, Apple prohibits modifications of its operating system by downstream partners and users, and completely controls the pre-installation of apps on iOS devices. It deeply integrates applications into iOS, including Apple Maps, iTunes, Siri, Safari, its App Store and others. Microsoft has copied Apple’s model to a large degree, hard-coding its own applications (including Bing, Windows Store, Skype, Internet Explorer, Bing Maps and Office) into the Windows Phone operating system.

In the service of creating and maintaining a competitive platform, each of these closed OSes bakes into its operating system significant limitations on which third-party apps can be installed and what they can (and can’t) do. For example, neither platform permits installation of a third-party app store, and neither can be significantly customized. Apple’s iOS also prohibits users from changing default applications — although the soon-to-be-released iOS 8 appears to be somewhat more flexible than previous versions.

In addition to pre-installing a raft of their own apps and limiting installation of other apps, both Apple and Microsoft enable greater functionality for their own apps than they do for the third-party apps they allow.

For example, Apple doesn’t make available for other browsers (like Google’s Chrome) all the JavaScript functionality that it does for Safari, and it requires other browsers to use iOS WebKit instead of their own web engines. As a result, there are things that Chrome can’t do on iOS that only Safari can, and Chrome is hamstrung in implementing its own software on iOS. This approach has led Mozilla to refuse to offer its popular Firefox browser for iOS devices (while it has no such reluctance about offering it on Android).

On Windows Phone, meanwhile, Bing is integrated into the OS and can’t be removed. Only in markets where Bing is not supported (and with Microsoft’s prior approval) can OEMs change the default search app from Bing. While it was once possible to change the default search engine that opens in Internet Explorer (although never from the hardware search button), the Windows 8.1 Hardware Development Notes, updated July 22, 2014, state:

By default, the only search provider included on the phone is Bing. The search provider used in the browser is always the same as the one launched by the hardware search button.

Both Apple iOS and Windows Phone tightly control the ability of non-default apps to open intents sent from other apps, and, in Windows especially, these linkages often can’t be changed.

As a result of these sorts of policies, maintaining the integrity — and thus the brand — of the platform is (relatively) easy for closed systems. While plenty of browsers are perfectly capable of answering an intent to open a web page, Windows Phone can better ensure a consistent and reliable experience by forcing Internet Explorer to handle the operation.

By comparison, Android, with or without Google Mobile Services, is dramatically more open, more flexible and customizable, and more amenable to third-party competition. Even the APIs that it uses to integrate its apps are open to all developers, ensuring that there is nothing that Google apps are able to do that non-Google apps with the same functionality are prevented from doing.

In other words, not just Gmail, but any email app is permitted to handle requests from any other app to send emails; not just Google Calendar but any calendar app is permitted to handle requests from any other app to accept invitations.
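To make that intent mechanism concrete, here is a minimal sketch using the standard Android SDK (the class name, email address and strings are illustrative placeholders, not drawn from any of the products discussed). An app fires an “implicit” intent naming an action rather than a specific application, and the system offers the user every installed app whose intent-filter matches; Gmail enjoys no privileged position:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Hypothetical activity demonstrating Android's open intent mechanism.
public class ShareExampleActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // An *implicit* intent: we specify an action (ACTION_SEND),
        // not a particular application.
        Intent send = new Intent(Intent.ACTION_SEND);
        send.setType("message/rfc822"); // MIME type conventionally used for email
        send.putExtra(Intent.EXTRA_EMAIL, new String[] {"someone@example.com"});
        send.putExtra(Intent.EXTRA_SUBJECT, "Hello");
        send.putExtra(Intent.EXTRA_TEXT, "Any installed email app can handle this.");

        // Android resolves the intent against every app that declares a
        // matching intent-filter; the user picks the handler from a chooser.
        startActivity(Intent.createChooser(send, "Send email with..."));
    }
}
```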

In no small part because of this openness and flexibility, current reports indicate that the Android OS runs on 85 percent of mobile devices worldwide. But it is OEM giant Samsung, not Google, that dominates the market, with a 65 percent share of all Android devices. Competition is rife, however, especially in emerging markets. In fact, according to one report, “Chinese and Indian vendors accounted for the majority of smartphone shipments for the first time with a 51% share” in 2Q 2014.

Unlike in the past, Edelman is at least nominally circumspect in his unsubstantiated legal conclusions about Android’s anticompetitive effects:

Applicable antitrust law can be complicated: Some ties yield useful efficiencies, and not all ties reduce welfare.

Given Edelman’s connections to Microsoft and the realities of the market he is discussing, it could hardly be otherwise. If every integration were an antitrust violation, every element of every operating system — including Apple’s iOS as well as every variant of Microsoft’s Windows — should arguably be the subject of a government investigation.

In truth, Google has done nothing more than ensure that its own suite of apps functions on top of Android to maintain what Google sees as seamless interconnectivity, a high-quality experience for users, and consistency for application developers — while still allowing handset manufacturers room to innovate in a way that is impossible on other platforms. This is the very definition of pro-competitive, and ultimately this is what allows the platform as a whole to compete against its far more vertically integrated alternatives.

Which brings us back to Microsoft. Upon the conclusion of the FTC’s investigation in January 2013, a GigaOm exposé on the case had this to say:

Critics who say Google is too powerful have nagged the government for years to regulate the company’s search listings. But today the critics came up dry….

The biggest loser is Microsoft, which funded a long-running cloak-and-dagger lobbying campaign to convince the public and government that its arch-enemy had to be regulated….

The FTC is also a loser because it ran a high profile two-year investigation but came up dry.

EU regulators, take note.

[First posted to the CPIP Blog on June 17, 2014]

Last Thursday, Elon Musk, the founder and CEO of Tesla Motors, issued an announcement on the company’s blog with a catchy title: “All Our Patent Are Belong to You.” Commentary on social media and blogs, as well as in traditional newspapers, jumped to the conclusion that Tesla is abandoning its patents and making them “freely” available to the public for whoever wants to use them. As with all things involving patented innovation these days, the reality of Tesla’s new patent policy does not match the PR spin or the buzz on the Internet.

The reality is that Tesla is not disclaiming its patent rights, despite the title of Musk’s announcement or his invocation there of the tread-worn cliché that patents impede innovation. In fact, Tesla’s new policy is an example of Musk exercising patent rights, not abandoning them.

If you’re not puzzled by Tesla’s announcement, you should be. This is because patents are a type of property right that secures the exclusive rights to make, use, or sell an invention for a limited period of time. These rights do not come cheap — inventions cost time, effort, and money to create, and companies like Tesla then exploit these property rights by spending even more time, effort, and money converting inventions into viable commercial products and services sold in the marketplace. Thus, if Tesla’s intention is to make its ideas available for public use, why, one may wonder, did it bother to expend tremendous resources acquiring the patents in the first place?

The key to understanding this important question lies in a single phrase in Musk’s announcement that almost everyone has failed to notice: “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” (emphasis added)

What does “in good faith” mean in this context? Fortunately, one intrepid reporter at the L.A. Times asked this question, and the answer from Musk makes clear that this new policy is not an abandonment of patent rights in favor of some fuzzy notion of the public domain, but rather an exercise of his company’s patent rights: “Tesla will allow other manufacturers to use its patents in ‘good faith’ – essentially barring those users from filing patent-infringement lawsuits against [Tesla] or trying to produce knockoffs of Tesla’s cars.” In the legalese known to patent lawyers and inventors the world over, this is not an abandonment of Tesla’s patents; this is what is known as a cross license.

In plain English, here’s the deal that Tesla is offering to manufacturers and users of its electric car technology: in exchange for using Tesla’s patents, the users of those patents give up the right to file patent-infringement lawsuits against Tesla if Tesla uses their other patents. In other words, this is a classic deal that businesses make all the time — you can use my property and I can use your property, and we cannot sue each other. It’s similar to a deal between two neighbors who agree to permit each other to cross each other’s backyard. In the context of patented innovation, the agreement is more complicated, but it is in principle the same thing: if automobile manufacturer X decides to use Tesla’s patents, and Tesla begins infringing X’s patents on other technology, then X has agreed through its prior use of Tesla’s patents that it cannot sue Tesla. Thus, each party has licensed the other to make, use and sell their respective patented technologies; in patent law parlance, it’s a “cross license.”

The only thing unique about this cross-licensing offer is that Tesla publicly announced it as an open offer to anyone willing to accept it. This is not a patent “free for all,” and it certainly is not tantamount to Tesla “taking down the patent wall.” These are catchy sound bites, but they in fact obfuscate the clearly business-minded nature of this commercial decision.

For anyone perhaps still doubting what is happening here, the same L.A. Times story further confirms that Tesla is not abandoning the patent system. As stated to the reporter: “Tesla will continue to seek patents for its new technology to prevent others from poaching its advancements.” So much for the much-ballyhooed pronouncements last week about how Tesla’s new patent (licensing) policy “reminds us of the urgent need for patent reform”! Musk clearly believes that the patent system is working just great for the new technological innovation his engineers are creating at Tesla right now.

For those working in the innovation industries, Tesla’s decision to cross-license its old patents makes sense. Tesla Motors has already extracted much of the value from these old patents: Musk was able to secure venture capital funding for his startup company and to secure for Tesla a dominant position in the electric car market through his exclusive use of this patented innovation. (Venture capitalists consistently rely on patents in making investment decisions; anyone who doubts this need only watch a few episodes of Shark Tank.) Now that everyone associates radical, cutting-edge innovation with Tesla, Musk can shift his strategic use of his company’s assets, including its intellectual property rights, such as relying more heavily on the goodwill associated with the Tesla trademark. This is clear, for instance, from the statement to the L.A. Times that companies or individuals agreeing to the “good faith” terms of Tesla’s license agree not to make “knockoffs of Tesla’s cars.”

There are other equally important commercial reasons for Tesla’s new cross-licensing policy, but the point has been made. Tesla’s new cross-licensing policy for its old patents is not Musk embracing “the open source philosophy” (as he asserts in his announcement). That framing may make good PR given the overheated rhetoric today about the so-called “broken patent system,” but it’s time people recognized the difference between PR and a reasonable business decision by a company that has used its (old) patents to acquire a dominant market position and is now changing its business model in light of that success.

At a minimum, people should recognize that Tesla is not declaring that it will never bring patent infringement lawsuits, but only that it will not sue people with whom it has licensed its patented innovation. This is not, contrary to one law professor’s statement, a company “refrain[ing] from exercising their patent rights to the fullest extent of the law.” In licensing its patented technology, Tesla is in fact exercising its patent rights to the fullest extent of the law, and that is exactly what the patent system promotes in the myriad business models and innovative products it makes possible.

Anyone interested in antitrust enforcement policy (and what TOTM reader isn’t?) should read FTC Commissioner Josh Wright’s interview in the latest issue of The Antitrust Source.  The extensive (22-page!) interview covers a number of topics and demonstrates the positive influence Commissioner Wright is having on antitrust enforcement and competition policy in general.

Commissioner Wright’s consistent concern with minimizing error costs will come as no surprise to TOTM regulars.  Here are a few related themes emphasized in the interview:

A commitment to evidence-based antitrust.

Asked about his prior writings on the superiority of “evidence-based” antitrust analysis, Commissioner Wright explains the concept as follows:

The central idea is to wherever possible shift away from casual empiricism and intuitions as the basis for decision-making and instead commit seriously to the decision-theoretic framework applied to minimize the costs of erroneous enforcement and policy decisions and powered by the best available theory and evidence.

This means, of course, that discrete enforcement decisions – should we bring a challenge or not? – should be based on the best available empirical evidence about the effects of the practice or transaction at issue. But it also encompasses a commitment to design institutions and structure liability rules on the basis of the best available evidence concerning a practice’s tendency to occasion procompetitive or anticompetitive effects. As Wright explains:

Evidence-based antitrust encompasses a commitment to using the best available economic theory and empirical evidence to make [a discrete enforcement] decision; but it also stands for a much broader commitment to structuring antitrust enforcement and policy decision-making. For example, evidence-based antitrust is a commitment that would require an enforcement agency seeking to design its policy with respect to a particular set of business arrangements – loyalty discounts, for example – to rely upon the existing theory and empirical evidence in calibrating that policy.
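A compact way to see what the decision-theoretic framework entails (this formalization is a standard rendering of the error-cost literature, not language from the interview itself): an enforcer choosing among candidate liability rules $r$ seeks, roughly,

\[
\min_{r}\;\; P_{I}(r)\,C_{I} \;+\; P_{II}(r)\,C_{II} \;+\; C_{A}(r)
\]

where $P_{I}$ and $P_{II}$ are the probabilities that rule $r$ produces false positives (condemning procompetitive conduct) and false negatives (failing to condemn anticompetitive conduct), $C_{I}$ and $C_{II}$ are the social costs of those errors, and $C_{A}$ is the administrative cost of applying the rule. Better theory and empirical evidence sharpen the probability terms; institutional design trades off all three.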

Of course, if the FTC is committed to evidence-based antitrust policy, then it will utilize its institutional advantages to enhance the empirical record on practices whose effects are unclear. Thus, Commissioner Wright lauds the FTC’s study of – rather than preemptive action against – patent assertion entities, calling it “precisely the type of activity that the FTC is well-suited to do.”

A commitment to evidence-based antitrust also means that the agency shouldn’t get ahead of itself in restricting conduct with known consumer benefits and only theoretical (i.e., not empirically established) harms. Accordingly, Commissioner Wright says he “divorced [him]self from a number of recommendations” in the FTC’s recent data broker report:

For the majority of these other recommendations [beyond basic disclosure requirements], I simply do not think that we have any evidence that the benefits from Congress adopting those recommendations would exceed the costs. … I would need to have some confidence based on evidence, especially about an area where evidence is scarce. I’m not comfortable relying on my priors about these activities, especially when confronted by something new that could be beneficial. … The danger would be that we recommend actions that either chill some of the beneficial activity the data brokers engage in or just impose compliance costs that we all recognize get passed on to consumers.

Similarly, Commissioner Wright has opposed “fencing-in” relief in consent decrees absent evidence that the practice being restricted threatens more harm than good. As an example, he points to the consent decree in the Graco case, which we discussed here:

Graco employed exclusive dealing contracts, but we did not allege that the exclusive dealing contracts violated the antitrust laws or Section 5. However, as fencing-in relief for the consummated merger, the consent included prohibitions on exclusive dealing and loyalty discounts despite there being no evidence that the firm had employed either of those tactics to anticompetitive ends. When an FTC settlement bans a form of discounting as standard injunctive relief in a merger case without convincing evidence that the discounts themselves were a competitive problem, it raises significant concerns.

A commitment to clear enforcement principles.

At several points throughout the interview, Commissioner Wright emphasizes the value of articulating clear principles that can guide business planners’ behavior. But he’s not calling for a bunch of ex ante liability rules. The old per se rule against minimum resale price maintenance, for example, was clear – and bad! Embracing overly broad liability rules for the sake of clarity is inconsistent with the evidence-based, decision-theoretic approach Commissioner Wright prefers. The clarity he is advocating, then, is clarity on broad principles that will govern enforcement decisions.  He thus reiterates his call for a formal policy statement defining the Commission’s authority to prosecute unfair methods of competition under Section 5 of the FTC Act.  (TOTM hosted a blog symposium on that topic last summer.)  Wright also suggests that the Commission should “synthesize and offer high-level principles that would provide additional guidance” on how the Commission will use its Section 5 authority to address data security matters.

Extension, not extraction, should be the touchstone for Section 2 liability.

When asked about his prior criticism of FTC actions based on alleged violations of licensing commitments to standards development organizations (e.g., N-Data), Commissioner Wright emphasized that there should be no Section 2 liability in such cases, or similar cases involving alleged patent hold-up, absent an extension of monopoly power. In other words, it is not enough to show that the alleged bad act resulted in higher prices; it must also have led to the creation, maintenance, or enhancement of monopoly power.  Wright explains:

The logic is relatively straightforward. The antitrust laws do not apply to all increases of price. The Sherman Act is not a price regulation statute. The antitrust laws govern the competitive process. The Supreme Court said in Trinko that a lawful monopolist is allowed to charge the monopoly price. In NYNEX, the Supreme Court held that even if that monopolist raises its price through bad conduct, so long as that bad conduct does not harm the competitive process, it does not violate the antitrust laws. The bad conduct may violate other laws. It may be a fraud problem, it might violate regulatory rules, it may violate all sorts of other areas of law. In the patent context, it might give rise to doctrines like equitable estoppel. But it is not an antitrust problem; antitrust cannot be the hammer for each and every one of the nails that implicate price changes.

In my view, the appropriate way to deal with patent holdup cases is to require what we require for all Section 2 cases. We do not need special antitrust rules for patent holdup; much less for patent assertion entities. The rule is simply that the plaintiff must demonstrate that the conduct results in the acquisition of market power, not merely the ability to extract existing monopoly rents. … That distinction between extracting lawfully acquired and existing monopoly rents and acquiring by unlawful conduct additional monopoly power is one that has run through Section 2 jurisprudence for quite some time.

In light of these remarks (which remind me of this excellent piece by Dennis Carlton and Ken Heyer), it is not surprising that Commissioner Wright also hopes and believes that the Roberts Court will overrule Jefferson Parish’s quasi-per se rule against tying. As Einer Elhauge has observed, that rule might make sense if the mere extraction of monopoly profits (via metering price discrimination or Loew’s-type bundling) was an “anticompetitive” effect of tying.  If, however, anticompetitive harm requires extension of monopoly power, as Wright contends, then a tie-in cannot be anticompetitive unless it results in substantial foreclosure of the tied product market, a necessary prerequisite for a tie-in to enhance market power in the tied or tying markets.  That means tying should not be evaluated under the quasi-per se rule but should instead be subject to a rule of reason similar to that governing exclusive dealing (i.e., some sort of “qualitative foreclosure” approach).  (I explain this point in great detail here.)

Optimal does not mean perfect.

Commissioner Wright makes this point in response to a question about whether the government should encourage “standards development organizations to provide greater clarity to their intellectual property policies to reduce the likelihood of holdup or other concerns.”  While Wright acknowledges that “more complete, more precise contracts” could limit the problem of patent holdup, he observes that there is a cost to greater precision and completeness and that the parties to these contracts already have an incentive to put the optimal amount of effort into minimizing the cost of holdup. He explains:

[M]inimizing the probability of holdup does not mean that it is zero. Holdup can happen. It will happen. It will be observed in the wild from time to time, and there is again an important question about whether antitrust has any role to play there. My answer to that question is yes in the case of deception that results in market power. Otherwise, we ought to leave the governance of what amount to contracts between SSO and their members to contract law and in some cases to patent doctrines like equitable estoppel that can be helpful in governing holdup.

…[I]t is quite an odd thing for an agency to be going out and giving advice to sophisticated parties on how to design their contracts. Perhaps I would be more comfortable if there were convincing and systematic evidence that the contracts were the result of market failure. But there is not such evidence.

Consumer welfare is the touchstone.

When asked whether “there [are] circumstances where non-competition concerns, such as privacy, should play a role in merger analysis,” Commissioner Wright is unwavering:

No. I think that there is a great danger when we allow competition law to be unmoored from its relatively narrow focus upon consumer welfare. It is the connection between the law and consumer welfare that allows antitrust to harness the power of economic theory and empirical methodologies. All of the gains that antitrust law and policy as a body have earned over the past fifty or sixty years have been from becoming more closely tethered to industrial organization economics, more closely integrating economic thought in the law, and in agency discretion and decision-making. I think that the tight link between the consumer welfare standard and antitrust law is what has allowed such remarkable improvements in what effectively amounts to a body of common law.

Calls to incorporate non-economic concerns into antitrust analysis, I think, threaten to undo some, if not all, of that progress. Antitrust law and enforcement in the United States has some experience with trying to incorporate various non-economic concerns, including the welfare of small dealers and worthy men and so forth. The results of the experiment were not good for consumers and did not generate sound antitrust policy. It is widely understood and recognized why that is the case.

***

Those are just some highlights. There’s lots more in the interview—in particular, some good stuff on the role of efficiencies in FTC investigations, the diverging standards for the FTC and DOJ to obtain injunctions against unconsummated mergers, and the proper way to analyze reverse payment settlements.  Do read the whole thing.  If you’re like me, it may make you feel a little more affinity for Mitch McConnell.

In a June 12, 2014 TOTM post, I discussed the private antitrust challenge to NCAA rules that barred NCAA member universities from compensating athletes for use of their images and names in television broadcasts and video games.

On August 8 a federal district judge held that the NCAA had violated the antitrust laws and enjoined the NCAA from enforcing those rules, effective 2016.  The judge’s 99-page opinion, which discusses NCAA price-fixing agreements, is worth a read.  It confronts and debunks the NCAA’s efficiency justifications for their cartel-like restrictions on athletic scholarships.  If the decision withstands appeal, it will allow  NCAA member schools to offer prospective football and basketball recruits trust funds that could be accessed after graduation (subject to certain limitations), granting those athletes a share of the billions of dollars in revenues they generate for NCAA member universities.

A large number of NCAA rules undoubtedly generate substantial efficiencies that benefit NCAA member institutions, college sports fans, and college athletes.  But the beneficial nature of those rules does not justify separate monopsony price-fixing arrangements that disadvantage athletic recruits – arrangements that cannot legitimately be tied to the NCAA’s welfare-enhancing interest in promoting intercollegiate athletics.  Stay tuned.