
Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to stanch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company. This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

On September 28, the American Antitrust Institute released a report (“AAI Report”) on the state of U.S. antitrust policy, provocatively entitled “A National Competition Policy:  Unpacking the Problem of Declining Competition and Setting Priorities for Moving Forward.”  Although the AAI Report contains some valuable suggestions, in important ways it reminds one of the drunkard who seeks his (or her) lost key under the nearest lamppost.  What is needed instead is greater sobriety and a broader vision of the problems that beset the American economy.

The AAI Report begins by asserting that “[n]ot since the first federal antitrust law was enacted over 120 years ago has there been the level of public concern over the concentration of economic and political power that we see today.”  Well, maybe, although I for one am not convinced.  The paper then states that “competition is now on the front pages, as concerns over rising concentration, extraordinary profits accruing to the top slice of corporations, slowing innovation, and widening income and wealth inequality have galvanized attention.”  It then goes on to call for a more aggressive federal antitrust enforcement policy, with particular attention paid to concentrated markets.  The implicit message is that dedicated antitrust enforcers during the Obama Administration, led by Federal Trade Commission Chairs Jonathan Leibowitz and Edith Ramirez, and Antitrust Division chiefs Christine Varney, Bill Baer, and Renata Hesse (Acting), have been laggard or asleep at the switch.  But where is the evidence for this?  I am unaware of any, and the AAI doesn’t say.  Indeed, federal antitrust officials in the Obama Administration consistently have called for tough enforcement, and they have actively pursued vertical as well as horizontal conduct cases and novel theories of IP-antitrust liability.  Thus, the AAI Report’s contention that antitrust needs to be “reinvigorated” is unconvincing.

The AAI Report highlights three “symptoms” of declining competition:  (1) rising concentration, (2) higher profits to the few and slowing rates of start-up activity, and (3) widening income and wealth inequality.  But these concerns are not something that antitrust policy is designed to address.  Mergers that threaten to harm competition are within the purview of antitrust, but modern antitrust rightly focuses on the likely effects of such mergers, not on the mere fact that they may increase concentration.  Furthermore, antitrust assesses the effects of business agreements on the competitive process.  Antitrust does not ask whether business arrangements yield “unacceptably” high profits, or “overly low” rates of business formation, or “unacceptable” wealth and income inequality.  Indeed, antitrust is not well equipped to address such questions, nor does it possess the tools to “solve” them (even assuming they need to be solved).

In short, if American competition is indeed declining based on the symptoms flagged by the AAI Report, the key to the solution will not be found by searching under the antitrust policy lamppost for illumination.  Rather, a more thorough search, with the help of “common sense” flashlights, is warranted.

The search outside the antitrust spotlight is not, however, a difficult one.  Finding the explanation for lagging competitive conditions in the United States requires no great policy legerdemain, because sound published research already provides the answer.  And that answer centers on government failures, not private sector abuses.

Consider overregulation.  In its annual Red Tape Rising reports (see here for the latest one), the Heritage Foundation has documented the growing burden of federal regulation on the American economy.  Overregulation acts like an implicit tax on businesses and disincentivizes business start-ups.  Moreover, as regulatory requirements grow in complexity and burdensomeness, they increasingly place a premium on large size – larger businesses can better afford the fixed costs needed to establish regulatory compliance departments than can their smaller rivals.  Heritage Foundation Scholar Norbert Michel summarizes this phenomenon in his article Dodd-Frank and Glass-Steagall – ‘Consumer Protection for Billionaires’:

Even when it’s not by nefarious design, we end up with rules that favor the largest/best-funded firms over their smaller/less-well-funded competitors. Put differently, our massive regulatory state ends up keeping large firms’ competitors at bay.  The more detailed regulators try to be, the more complex the rules become. And the more complex the rules become, the smaller the number of people who really care. Hence, more complicated rules and regulations serve to protect existing firms from competition more than simple ones. All of this means consumers lose. They pay higher prices, they have fewer choices of financial products and services, and they pretty much end up with the same level of protection they’d have with a smaller regulatory state.

What’s worse, some of the most onerous regulatory schemes are explicitly designed to favor large competitors over small ones.  A prime example is financial services regulation, and, in particular, the rules adopted pursuant to the 2010 Dodd-Frank Act (other examples could readily be provided).  As a Heritage Foundation report explains (footnote citations omitted):

The [Dodd-Frank] act was largely intended to reduce the risk of a major bank failure, but the regulatory burden is crippling community banks (which played little role in the financial crisis). According to Harvard University researchers Marshall Lux and Robert Greene, small banks’ share of U.S. commercial banking assets declined nearly twice as much since the second quarter of 2010—around the time of Dodd–Frank’s passage—as occurred between 2006 and 2010. Their share currently stands at just 22 percent, down from 41 percent in 1994.

The increased consolidation rate is driven by regulatory economies of scale—larger banks are better suited to handle increased regulatory burdens than are smaller banks, causing the average costs of community banks to rise. The decline in small bank assets spells trouble for their primary customer base—small business loans and those seeking residential mortgages.

Ironically, Dodd–Frank proponents pushed for the law as necessary to rein in the big banks and Wall Street. In fact, the regulations are giving the largest companies a competitive advantage over smaller enterprises—the opposite outcome sought by Senator Christopher Dodd (D–CT), Representative Barney Frank (D–MA), and their allies. As Goldman Sachs CEO Lloyd Blankfein recently explained: “More intense regulatory and technology requirements have raised the barriers to entry higher than at any other time in modern history. This is an expensive business to be in, if you don’t have the market share in scale.”

In sum, as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large, wealthy, and well-connected rent-seekers at the expense of smaller and more dynamic competitors.

More generally, as Heritage Foundation President Jim DeMint and Heritage Action for America CEO Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts and complex regulations, including special tax preferences. Those special preferences undermine competition on the merits by firms that lack insider status, to the public detriment.  Relatedly, the hideously complex system of American business taxation, which features the highest corporate tax rates in the developed world (which can better be manipulated by very large corporate players), depresses wages and is a serious drag on the American economy, as shown by Heritage Foundation scholars Curtis Dubay and David Burton.  In a similar vein, David Burton testified before Congress in 2015 on how the various excesses of the American regulatory state (including bad tax, health care, immigration, and other regulatory policies, combined with an overly costly legal system) undermine U.S. entrepreneurship (see here).

In other words, special subsidies, regulations, and tax and regulatory programs for the well-connected are part and parcel of crony capitalism, which (1) favors large businesses, tending to raise concentration; (2) confers higher profits on the well-connected while discouraging small business entrepreneurship; and (3) promotes income and wealth inequality, with the greatest returns going to the wealthiest government cronies who know best how to play the Washington “rent seeking game.”  Unfortunately, crony capitalism has grown like Topsy during the Obama Administration.

Accordingly, I would counsel AAI to turn its scholarly gaze away from antitrust and toward the true source of the American competitive ailments it spotlights:  crony capitalism enabled by the growth of big government special interest programs and increasingly costly regulatory schemes.  Let’s see if AAI takes my advice.

There must have been a great gnashing of teeth in Chairman Wheeler’s office this morning as the FCC announced that it was pulling the Chairman’s latest modifications to the set-top box proposal from its voting agenda. This is surely but a bump in the road for the Chairman; he will undoubtedly press ever onward in his quest to “fix” a market that is flooded with competition and consumer choice. But, as we stop to take a breath for a moment while this latest FCC adventure is temporarily paused, there is a larger issue worth considering: the lack of transparency at the FCC.

Although the Commission has an unfortunate tradition of non-disclosure surrounding many of its regulatory proposals, the problem has seemingly been exacerbated by Chairman Wheeler’s aggressive agenda and his intransigence in the face of overwhelming and rigorous criticism.

Perhaps nowhere was this attitude more apparent than with his handling of the Open Internet Order, which was plagued with enough process problems to elicit a call for a delay of the Commission’s vote on the initial rules from Democratic Commissioner Rosenworcel, and a strong rebuke from the Chairman of the House Oversight Committee prior to the Commission’s vote on the final rules (which were not disclosed to the public until after the vote).

But the same cavalier dismissal of public and stakeholder input has plagued the Chairman’s beleaguered set-top box proposal, as well.

As Commissioner Pai noted before Congress in March:

The FCC continues to choose opacity over transparency. The decisions we make impact hundreds of millions of Americans and thousands of small businesses. And yet to the public, to Congress, and even to the Commissioners at the FCC, the agency’s work remains a black box.

Take this simple proposition: The public should be able to see what we’re voting on before we vote on it. That’s how Congress works, as you know. Anyone can look up any pending bill right now by going to And that’s how many state commissions work too. But not the FCC.

Exhibit A in Commissioner Pai’s lament was the set-top box proceeding:

Instead, the public gets to see only what the Chairman’s Office deigns to release, so controversial policy proposals can be (and typically are) hidden in a wave of media adulation. That happened just last month when the agency proposed changes to its set-top-box rules but tried to mislead content producers and the public about whether set-top box manufacturers would be permitted to insert their own advertisements into programming streams.

Now, although the Chairman’s initial proposal was eventually released, we have only a fact sheet and an op-ed by Chairman Wheeler on which to judge the purportedly substantial changes embodied in his latest version.

Even Democrats in Congress have recognized the process problems that have plagued this proceeding. As Senator Feinstein (D-CA) urged in a recent letter to Chairman Wheeler:

Given the significance of this proceeding, I ask that you make public the new proposal under consideration by the Commission, so that all interested stakeholders, members of Congress, copyright experts, and others can comment on the potential copyright implications of the new proposal before the Commission votes on it.

And as Senator Heller (R-NV) wrote in a letter to Chairman Wheeler this week:

I believe it is unacceptable that the FCC has not released the text of this proposal before Thursday’s vote. A three-page fact sheet does not provide enough details for Congress to conduct proper oversight of this rulemaking that will significantly impact both consumers and industry…. I encourage you to release the text immediately so that the American public has a full understanding of what is being considered by the Commission….

Of course, this isn’t a new problem at the FCC. In fact, before he supported Chairman Wheeler’s efforts to impose Open Internet rules without sufficient public disclosure, then-Senator Obama decried then-Chairman Martin’s efforts to enact new media ownership rules with insufficient process in 2007:

Repealing the cross ownership rules and retaining the rest of our existing regulations is not a proposal that has been put out for public comment; the proper process for vetting it is not in closed door meetings with lobbyists or in selective leaks to the New York Times.

Although such a proposal may pass the muster of a federal court, Congress and the public have the right to review any specific proposal and decide whether or not it constitutes sound policy. And the Commission has the responsibility to defend any new proposal in public discourse and debate.

And although you won’t find them complaining this time (because this time they want the excessive intervention that the NPRM seems to contemplate), regulatory advocates lamented exactly this sort of secrecy at the Commission when Chairman Genachowski proposed his media ownership rules in 2012. At that time Free Press angrily wrote:

[T]he Commission still has not made public its actual media ownership order…. Furthermore, it’s disingenuous for the FCC to suggest that its process now is more transparent than the one former Chairman Martin used to adopt similar rules. Genachowski’s FCC has yet to publish any details of its final proposal, offering only vague snippets in press releases… despite the president’s instruction to rulemaking agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.

As Free Press noted, President Obama did indeed instruct “agencies to conduct any significant business in open meetings with opportunities for members of the public to have their voices heard.” In his Memorandum on Transparency and Open Government, his first executive action, the president urged that:

Public engagement enhances the Government’s effectiveness and improves the quality of its decisions. Knowledge is widely dispersed in society, and public officials benefit from having access to that dispersed knowledge. Executive departments and agencies should offer Americans increased opportunities to participate in policymaking and to provide their Government with the benefits of their collective expertise and information.

The resulting Open Government Directive calls on executive agencies to

take prompt steps to expand access to information by making it available online in open formats. With respect to information, the presumption shall be in favor of openness….

The FCC is not an “executive agency,” and so is not directly subject to the Directive. But the Chairman’s willingness to stray so far from basic transparency is woefully inconsistent with the principles of good government and the ideals of openness claimed by this administration.

Imagine if you will… that a federal regulatory agency were to decide that the iPhone ecosystem was too constraining and too expensive; that consumers — who had otherwise voted for iPhones with their dollars — were being harmed by the fact that the platform was not “open” enough.

Such an agency might resolve (on the basis of a very generous reading of a statute), to force Apple to make its iOS software available to any hardware platform that wished to have it, in the process making all of the apps and user data accessible to the consumer via these new third parties, on terms set by the agency… for free.

Difficult as it may be to picture this ever happening, it is exactly the sort of Twilight Zone scenario that FCC Chairman Tom Wheeler is currently proposing with his new set-top box proposal.

Based on the limited information we have so far (a fact sheet and an op-ed), Chairman Wheeler’s new proposal does claw back some of the worst excesses of his initial draft (which we critiqued in our comments and reply comments to that proposal).

But it also appears to reinforce others — most notably the plan’s disregard for the right of content creators to control the distribution of their content. Wheeler continues to dismiss the complex business models, relationships, and licensing terms that have evolved over years of competition and innovation. Instead, he offers a one-size-fits-all “solution” to a “problem” that market participants are already falling over themselves to provide.

Plus ça change…

To begin with, Chairman Wheeler’s new proposal is based on the same faulty premise: that consumers pay too much for set-top boxes, and that the FCC is somehow both prescient enough and Congressionally ordained to “fix” this problem. As we wrote in our initial comments, however,

[a]lthough the Commission asserts that set-top boxes are too expensive, the history of overall MVPD prices tells a remarkably different story. Since 1994, per-channel cable prices including set-top box fees have fallen by 2 percent, while overall consumer prices have increased by 54 percent. After adjusting for inflation, this represents an impressive overall price decrease.

And the fact is that no one buys set-top boxes in isolation; rather, the price consumers pay for cable service includes the ability to access that service. Whether the set-top box fee is broken out on subscribers’ bills or not, the total price consumers pay is unlikely to change as a result of the Commission’s intervention.

As we have previously noted, the MVPD set-top box market is an aftermarket; no one buys set-top boxes without first (or simultaneously) buying MVPD service. And as economist Ben Klein (among others) has shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive:

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

Engineering the set-top box aftermarket to bring more direct competition to bear may redistribute profits, but it’s unlikely to change what consumers pay.

Stripped of its questionable claims regarding consumer prices and placed in the proper context — in which consumers enjoy more ways to access more video content than ever before — Wheeler’s initial proposal ultimately rested on its promise to “pave the way for a competitive marketplace for alternate navigation devices, and… end the need for multiple remote controls.” Weak sauce, indeed.

He now adds a new promise: that “integrated search” will be seamlessly available for consumers across the new platforms. But just as universal remotes and channel-specific apps on platforms like Apple TV have already made his “multiple remotes” promise a hollow one, so, too, have competitive pressures already begun to deliver integrated search.

Meanwhile, such marginal benefits come with a host of substantial costs, as others have pointed out. Do we really need the FCC to grant itself more powers and create a substantial and coercive new regulatory regime to mandate what the market is already poised to provide?

From ignoring copyright to obliterating copyright

Chairman Wheeler’s first proposal engendered fervent criticism for the impossible position in which it placed MVPDs — of having to disregard, even outright violate, their contractual obligations to content creators.

Commendably, the new proposal acknowledges that contractual relationships between MVPDs and content providers should remain “intact.” Thus, the proposal purports to enable programmers and MVPDs to maintain “their channel position, advertising and contracts… in place.” MVPDs will retain “end-to-end” control of the display of content through their apps, and all contractually guaranteed content protection mechanisms will remain, because the “pay-TV’s software will manage the full suite of linear and on-demand programming licensed by the pay-TV provider.”

But, improved as it is, the new proposal continues to operate in an imagined world where the incredibly intricate and complex process by which content is created and distributed can be reduced to the simplest of terms, dictated by a regulator and applied uniformly across all content and all providers.

According to the fact sheet, the new proposal would “[p]rotect[] copyrights and… [h]onor[] the sanctity of contracts” through a “standard license”:

The proposed final rules require the development of a standard license governing the process for placing an app on a device or platform. A standard license will give device manufacturers the certainty required to bring innovative products to market… The license will not affect the underlying contracts between programmers and pay-TV providers. The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.

But programming is distributed under a diverse range of contract terms. The only way a single, “standard license” could possibly honor these contracts is by forcing content providers to license all of their content under identical terms.

Leaving aside for a moment the fact that the FCC has no authority whatever to do this, for such a scheme to work, the agency would necessarily have to strip content holders of their right to govern the terms on which their content is accessed. After all, if MVPDs are legally bound to redistribute content on fixed terms, they have no room to permit content creators to freely exercise their rights to specify terms like windowing, online distribution restrictions, geographic restrictions, and the like.

In other words, the proposal simply cannot deliver on its promise that “[t]he license will not affect the underlying contracts between programmers and pay-TV providers.”

But fear not: According to the Fact Sheet, “[p]rogrammers will have a seat at the table to ensure that content remains protected.” Such largesse! One would be forgiven for assuming that the programmers’ (single?) seat will be surrounded by those of other participants — regulatory advocates, technology companies, and others — whose sole objective will be to minimize content companies’ ability to restrict the terms on which their content is accessed.

And we cannot ignore the ominous final portion of the Fact Sheet’s “Standard License” description: “The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.” Such an arrogation of ultimate authority by the FCC doesn’t bode well for that programmer’s “seat at the table” amounting to much.

Unfortunately, we can only imagine the contours of the final proposal that will describe the many ways by which distribution licenses can “harm the marketplace for competitive devices.” But an educated guess would venture that there will be precious little room for content creators and MVPDs to replicate a large swath of the contract terms they currently employ. “Any content owner can have its content painted any color that it wants, so long as it is black.”

At least we can take solace in the fact that the FCC has no authority to do what Wheeler wants it to do

And, of course, this all presumes that the FCC will be able to plausibly muster the legal authority in the Communications Act to create what amounts to a de facto compulsory licensing scheme.

A single license imposed upon all MVPDs, along with the necessary restrictions this will place upon content creators, does just as much as an overt compulsory license to undermine content owners’ statutory property rights. For every license agreement that would be different than the standard agreement, the proposed standard license would amount to a compulsory imposition of terms that the rights holders and MVPDs would not otherwise have agreed to. And if this sounds tedious and confusing, just wait until the Commission starts designing its multistakeholder Standard Licensing Oversight Process (“SLOP”)….

Unfortunately for Chairman Wheeler (but fortunately for the rest of us), the FCC has neither the legal authority, nor the requisite expertise, to enact such a regime.

Last month, the Copyright Office was clear on this score in its letter to Congress commenting on the Chairman’s original proposal:  

[I]t is important to remember that only Congress, through the exercise of its power under the Copyright Clause, and not the FCC or any other agency, has the constitutional authority to create exceptions and limitations in copyright law. While Congress has enacted compulsory licensing schemes, they have done so in response to demonstrated market failures, and in a carefully circumscribed manner.

Assuming that Section 629 of the Communications Act — the provision that otherwise empowers the Commission to promote a competitive set-top box market — fails to empower the FCC to rewrite copyright law (which is assuredly the case), the Commission will be on shaky ground for the inevitable torrent of lawsuits that will follow the revised proposal.

In fact, this new proposal feels more like an emergency pivot by a panicked Chairman than an actual, well-grounded legal recommendation. While the new proposal improves upon the original, it retains at its core the same ill-informed, ill-advised and illegal assertion of authority that plagued its predecessor.

Last week the International Center for Law & Economics and I filed an amicus brief in the DC Circuit in support of en banc review of the court’s decision to uphold the FCC’s 2015 Open Internet Order.

In our previous amicus brief before the panel that initially reviewed the OIO, we argued, among other things, that

In order to justify its Order, the Commission makes questionable use of important facts. For instance, the Order’s ban on paid prioritization ignores and mischaracterizes relevant record evidence and relies on irrelevant evidence. The Order also omits any substantial consideration of costs. The apparent necessity of the Commission’s aggressive treatment of the Order’s factual basis demonstrates the lengths to which the Commission must go in its attempt to fit the Order within its statutory authority.

Our brief supporting en banc review builds on these points to argue that

By reflexively affording substantial deference to the FCC in affirming the Open Internet Order (“OIO”), the panel majority’s opinion is in tension with recent Supreme Court precedent….

The panel majority need not have, and arguably should not have, afforded the FCC the level of deference that it did. The Supreme Court’s decisions in State Farm, Fox, and Encino all require a more thorough vetting of the reasons underlying an agency change in policy than is otherwise required under the familiar Chevron framework. Similarly, Brown & Williamson, Utility Air Regulatory Group, and King all indicate circumstances in which an agency construction of an otherwise ambiguous statute is not due deference, including when the agency interpretation is a departure from longstanding agency understandings of a statute or when the agency is not acting in an expert capacity (e.g., its decision is based on changing policy preferences, not changing factual or technical considerations).

In effect, the panel majority based its decision whether to afford the FCC deference upon deference to the agency’s poorly supported assertions that it was due deference. We argue that this is wholly inappropriate in light of recent Supreme Court cases.


The panel majority failed to appreciate the importance of granting Chevron deference to the FCC. That importance is most clearly seen at an aggregate level. In a large-scale study of every Court of Appeals decision between 2003 and 2013, Professors Kent Barnett and Christopher Walker found that a court’s decision to defer to agency action is uniquely determinative in cases where, as here, an agency is changing established policy.

Kent Barnett & Christopher J. Walker, Chevron In the Circuit Courts 61, Figure 14 (2016).

Figure 14 from Barnett & Walker, as reproduced in our brief.

As that study demonstrates,

agency decisions to change established policy tend to present serious, systematic defects — and [thus that] it is incumbent upon this court to review the panel majority’s decision to reflexively grant Chevron deference. Further, the data underscore the importance of the Supreme Court’s command in Fox and Encino that agencies show good reason for a change in policy; its recognition in Brown & Williamson and UARG that departures from existing policy may fall outside of the Chevron regime; and its command in King that policies not made by agencies acting in their capacity as technical experts may fall outside of the Chevron regime. In such cases, the Court essentially holds that reflexive application of Chevron deference may not be appropriate because these circumstances may tend toward agency action that is arbitrary, capricious, in excess of statutory authority, or otherwise not in accordance with law.

As we conclude:

The present case is a clear example where greater scrutiny of an agency’s decision-making process is both warranted and necessary. The panel majority all too readily afforded the FCC great deference, despite the clear and unaddressed evidence of serious flaws in the agency’s decision-making process. As we argued in our brief before the panel, and as Judge Williams recognized in his partial dissent, the OIO was based on factually inaccurate, contradicted, and irrelevant record evidence.

Read our full — and very short — amicus brief here.

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out an adequate case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research that demonstrates that ISPs, thanks to increasing encryption, do not have access to any better quality data, and probably less quality data, than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

In the wake of the recent OIO decision, separation of powers issues should be at the forefront of everyone’s mind. In reaching its decision, the DC Circuit relied upon Chevron to justify its extreme deference to the FCC. The court held, for instance, that

Our job is to ensure that an agency has acted “within the limits of [Congress’s] delegation” of authority… and that its action is not “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”… Critically, we do not “inquire as to whether the agency’s decision is wise as a policy matter; indeed, we are forbidden from substituting our judgment for that of the agency.”… Nor do we inquire whether “some or many economists would disapprove of the [agency’s] approach” because “we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgment by an agency acting pursuant to congressionally delegated authority.”

The DC Circuit’s decision takes a broad view of Chevron deference and, in so doing, ignores or dismisses some of the limits placed upon the doctrine by cases like Michigan v. EPA and UARG v. EPA (though Judge Williams does bring up UARG in dissent).

Whatever one thinks of the validity of the FCC’s approach to regulating the Internet, there is no question that it has, at best, a weak statutory foothold. Without prejudging the merits of the OIO, or the question of deference to agencies that find “[regulatory] elephants in [statutory] mouseholes,” such broad claims of authority, based on such limited statutory language, should give one pause. That the court upheld the FCC’s interpretation of the Act without expressing reservations, suggesting any limits, or admitting of any concrete basis for challenging the agency’s authority beyond circular references to “abuse of discretion” is deeply troubling.

Separation of powers is a fundamental feature of our democracy, and one that has undoubtedly contributed to the longevity of our system of self-governance. Not least among the important features of separation of powers is the ability of courts to review the lawfulness of legislation and executive action.

The founders presciently realized the dangers of allowing one part of the government to centralize power in itself. In Federalist 47, James Madison observed that

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self-appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. (emphasis added)

The modern administrative apparatus has become the sort of governmental body that the founders feared and that we have somehow grown to accept. The FCC is not alone in this: any member of the alphabet soup that constitutes our administrative state, whether “independent” or otherwise, is typically vested with great, essentially unreviewable authority over the economy and our daily lives.

As Justice Thomas so aptly put it in his must-read concurrence in Michigan v. EPA:

Perhaps there is some unique historical justification for deferring to federal agencies, but these cases reveal how paltry an effort we have made to understand it or to confine ourselves to its boundaries. Although we hold today that EPA exceeded even the extremely permissive limits on agency power set by our precedents, we should be alarmed that it felt sufficiently emboldened by those precedents to make the bid for deference that it did here. As in other areas of our jurisprudence concerning administrative agencies, we seem to be straying further and further from the Constitution without so much as pausing to ask why. We should stop to consider that document before blithely giving the force of law to any other agency “interpretations” of federal statutes.

Administrative discretion is fantastic — until it isn’t. If your party is the one in power, unlimited discretion gives your side the ability to run down a wish list, checking off controversial items that could never make it past a deliberative body like Congress. That same discretion, however, becomes a nightmare under extreme deference as political opponents, newly in power, roll back preferred policies. In the end, regulation tends toward the extremes, on both sides, and ultimately consumers and companies pay the price in the form of excessive regulatory burdens and extreme uncertainty.

In theory, it is (or should be) left to the courts to rein in agency overreach. Unfortunately, courts have been relatively unwilling to push back on the administrative state, leaving the task up to Congress. And Congress, too, has, over the years, found too much it likes in agency power to seriously take on the structural problems that give agencies effectively free rein. At least, until recently.

In March of this year, Representative Ratcliffe (R-TX) proposed HR 4768: the Separation of Powers Restoration Act (“SOPRA”). Arguably this is the first real effort to fix the underlying problem since the 1995 “Comprehensive Regulatory Reform Act” (although, it should be noted, SOPRA is far more targeted than was the CRRA). Under SOPRA, 5 U.S.C. § 706 — the enacted portion of the APA that deals with judicial review of agency actions — would be amended to read as follows (with the new language highlighted):

(a) To the extent necessary to decision and when presented, the reviewing court shall determine the meaning or applicability of the terms of an agency action and decide de novo all relevant questions of law, including the interpretation of constitutional and statutory provisions, and rules made by agencies. Notwithstanding any other provision of law, this subsection shall apply in any action for judicial review of agency action authorized under any provision of law. No law may exempt any such civil action from the application of this section except by specific reference to this section.

These changes to the scope of review would operate as a much-needed check on the unlimited discretion that agencies currently enjoy. They give courts the ability to review “de novo all relevant questions of law,” which includes agencies’ interpretations of their own rules.

The status quo has created a negative feedback cycle. The Chevron doctrine, as it has played out, gives outsized incentives to both federal agencies and courts to essentially disregard Congress’s intended meaning for particular statutes. Today an agency can write rules and make decisions safe in the knowledge that Chevron will likely insulate it from any truly serious probing by a district court with regard to how well the agency’s action actually matches up with congressional intent or with even rudimentary cost-benefit analysis.

Defenders of the administrative state may balk at changing this state of affairs, of course. But defending an institution that is almost entirely immune from judicial and legal review seems to be a particularly hard row to hoe.

Public Knowledge, for instance, claims that

Judicial deference to agency decision-making is critical in instances where Congress’ intent is unclear because it balances each branch of government’s appropriate role and acknowledges the realities of the modern regulatory state.

To quote Justice Scalia, an unfortunate champion of the Chevron doctrine, this is “pure applesauce.”

The very core of the problem that SOPRA addresses is that the administrative state is not a proper branch of government — it’s a shadow system of quasi-legislation and quasi-legal review. Congress can be chastened by popular vote. Judges who abuse discretion can be overturned (or impeached). The administrative agencies, on the other hand, are insulated through doctrines like Chevron and Auer, and their personnel are subject, more or less, to the political whims of the executive branch.

Even agencies directly under the control of the executive branch — let alone independent agencies — become petrified caricatures of their original design as layers of bureaucratic rule and custom accrue over years, eventually turning the organization into an entity that serves, more or less, to perpetuate its own existence.

Other supporters of the status quo actually identify the unreviewable see-saw of agency discretion as a feature, not a bug:

Even people who agree with the anti-government premises of the sponsors [of SOPRA] should recognize that a change in the APA standard of review is an inapt tool for advancing that agenda. It is shortsighted, because it ignores the fact that, over time, political administrations change. Sometimes the administration in office will generally be in favor of deregulation, and in these circumstances a more intrusive standard of judicial review would tend to undercut that administration’s policies just as surely as it may tend to undercut a more progressive administration’s policies when the latter holds power. The APA applies equally to affirmative regulation and to deregulation.

But presidential elections — far from justifying this extreme administrative deference — actually make the case for trimming the sails of the administrative state. Presidential campaigns have become, in important part, contests over how candidates will wield the immense regulatory power vested in the executive branch.

Thus, for example, as part of his presidential bid, Jeb Bush indicated he would use the EPA to roll back every policy that Obama had put into place. One of Donald Trump’s allies suggested that Trump “should turn off [CNN’s] FCC license” in order to punish the news network. And VP hopeful Elizabeth Warren has suggested using the FDIC to limit the growth of financial institutions, and using the FCC and FTC to tilt the markets to make it easier for small companies to get an advantage over the “big guys.”

Far from being neutral, technocratic administrators of complex social and economic matters, administrative agencies have become one more political weapon of majority parties as they make the case for how their candidates will use all the power at their disposal — and more — to work their will.

As Justice Thomas, again, noted in Michigan v. EPA:

In reality…, agencies “interpreting” ambiguous statutes typically are not engaged in acts of interpretation at all. Instead, as Chevron itself acknowledged, they are engaged in the “formulation of policy.” Statutory ambiguity thus becomes an implicit delegation of rulemaking authority, and that authority is used not to find the best meaning of the text, but to formulate legally binding rules to fill in gaps based on policy judgments made by the agency rather than Congress.

And this is just the thing: SOPRA would bring far-more-valuable predictability and longevity to our legal system by imposing a system of accountability on the agencies. Currently, commissions often believe they can act with impunity (until the next election at least), and even the intended constraints of the APA frequently won’t do much to tether their whims to statute or law if they’re intent on deviating. Having a known constraint (or, at least, a reliable process by which judicial constraint may be imposed) on their behavior will make them think twice about exactly how legally and economically sound proposed rules and other actions are.

The administrative state isn’t going away, even if SOPRA were passed; it will continue to be the source of the majority of the rules under which our economy operates. We have long believed that a benefit of our judicial system is its consistency and relative lack of politicization. If this is a benefit for interpreting laws when agencies aren’t involved, it should also be a benefit when they are involved. Particularly as more and more law emanates from agencies rather than Congress, the oversight of largely neutral judicial arbiters is an essential check on the administrative apparatus’ “accumulation of all powers.”

The interest of judges tends to include a respect for the development of precedent that yields consistent and transparent rules for all future litigants and, more broadly, for economic actors and consumers making decisions in the shadow of the law. This is markedly distinct from agencies, which, more often than not, promote the particular, shifting, and often-narrow political sentiments of the day.

Whether a Republican- or a Democrat-appointed district judge reviews an agency action, that judge will be bound (more or less) by the precedent that came before, regardless of the judge’s individual political preferences. Contrast this with the FCC’s decision to reclassify broadband as a Title II service, for example, where previously it had been committed to the idea that broadband was an information service, subject to an entirely different — and far less onerous — regulatory regime. Of course, the next FCC Chairman may feel differently, and nothing would stop another regulatory shift back to the pre-OIO status quo. Perhaps more troublingly, the enormous discretion afforded by courts under current standards of review would permit the agency to endlessly tweak its rules — forbearing from some regulations but not others, un-forbearing, re-interpreting, etc., with precious few judicial standards available to bring certainty to the rules or to ensure their fealty to the statute or the sound economics that is supposed to undergird administrative decisionmaking.

SOPRA, or a bill like it, would have required the Commission to actually be accountable for its past regulatory shifts, and would have forced it to undergo at least rudimentary economic analysis to justify its actions. This form of accountability can only be to the good.

The genius of our system is its (potential) respect for the rule of law. This is an issue that both sides of the aisle should be able to get behind: minority status is always just one election cycle away. We should all hope to see SOPRA — or some bill like it — gain traction, rooted in long-overdue reflection on just how comfortable we are as a polity with a bureaucratic system increasingly driven by unaccountable discretion.

As regulatory review of the merger between Aetna and Humana hits the homestretch, merger critics have become increasingly vocal in their opposition to the deal. This is particularly true of a subset of healthcare providers concerned about losing bargaining power over insurers.

Fortunately for consumers, the merger appears to be well on its way to approval. Of the 20 state insurance commissions that will eventually review the merger, California recently became the 16th to approve it. The U.S. Department of Justice is currently reviewing the merger and may issue its determination as early as July.

Only Missouri has issued a preliminary opinion that the merger might lead to competitive harm. But Missouri is almost certain to remain an outlier, and its analysis simply doesn’t hold up to scrutiny.

The Missouri opinion echoed the Missouri Hospital Association’s (MHA) concerns about the effect of the merger on Medicare Advantage (MA) plans. It’s important to remember, however, that hospital associations like the MHA are not consumer advocacy groups. They are trade organizations whose primary function is to protect the interests of their member hospitals.

In fact, the American Hospital Association (AHA) has mounted continuous opposition to the deal. This is itself a good indication that the merger will benefit consumers, in part by reducing hospital reimbursement costs under MA plans.

More generally, critics have argued that history proves that health insurance mergers lead to higher premiums, without any countervailing benefits. Merger opponents place great stock in a study by economist Leemore Dafny and co-authors that purports to show that insurance mergers have historically led to seven percent higher premiums.

But that study, which looked at a pre-Affordable Care Act (ACA) deal and assessed its effects only on premiums for traditional employer-provided plans, has little relevance today.

The Dafny study first performed a straightforward statistical analysis of overall changes in concentration (that is, the number of insurers in a given market) and price, and concluded that “there is no significant association between concentration levels and premium growth.” Critics never mention this finding.

The study’s secondary, more speculative, analysis took the observed effects of a single merger — the 1999 merger between Prudential and Aetna — and extrapolated them to all changes in concentration and price over an eight-year period. It concluded that, on average, seven percent of the cumulative increase in premium prices between 1998 and 2006 was the result of a reduction in the number of insurers.

But what critics fail to mention is that when the authors looked at the actual consequences of the 1999 Prudential/Aetna merger, they found effects lasting only two years — and an average price increase of only one half of one percent. And these negligible effects were restricted to premiums paid under plans purchased by large employers, a critical limitation of the study’s relevance to today’s proposed mergers.

Moreover, as the study notes in passing, over the same eight-year period, average premium prices increased in total by 54 percent. Yet the study offers no insights into what was driving the vast bulk of premium price increases — or whether those factors are still present today.  

Few sectors of the economy have changed more radically in the past few decades than healthcare has. While extrapolated effects drawn from 17-year-old data may grab headlines, they really don’t tell us much of anything about the likely effects of a particular merger today.

Indeed, the ACA and current trends in healthcare policy have dramatically altered the way health insurance markets work. Among other things, the advent of new technologies and the move to “value-based” care are redefining the relationship between insurers and healthcare providers. Nowhere is this more evident than in the Medicare and Medicare Advantage market at the heart of the Aetna/Humana merger.

In an effort to stop the merger on antitrust grounds, critics claim that Medicare and MA are distinct products, in distinct markets. But it is simply incorrect to claim that Medicare Advantage and traditional Medicare aren’t “genuine alternatives.”

In fact, as the Office of Insurance Regulation in Florida — a bellwether state for healthcare policy — concluded in approving the merger: “Medicare Advantage, the private market product, competes directly with Traditional Medicare.”

Consumers who search for plans are presented with a direct comparison between traditional Medicare and available MA plans. And the evidence suggests that they regularly switch between the two. Today, almost a third of eligible Medicare recipients choose MA plans, and the majority of current MA enrollees switched to MA from traditional Medicare.

True, Medicare and MA plans are not identical. But for antitrust purposes, substitutes need not be perfect to exert pricing discipline on each other. Take HMOs and PPOs, for example. No one disputes that they are substitutes, and that prices for one constrain prices for the other. But as anyone who has considered switching between an HMO and a PPO knows, price is not the only variable that influences consumers’ decisions.

The same is true for MA and traditional Medicare. For many consumers, Medicare’s standard benefits, more-expensive supplemental benefits, and wider range of provider options present a viable alternative to MA’s lower-cost expanded benefits and narrower, managed provider network.

The move away from a traditional fee-for-service model changes how insurers do business. It requires larger investments in technology, better tracking of preventive care and health outcomes, and more-holistic supervision of patient care by insurers. Arguably, all of this may be accomplished most efficiently by larger insurers with more resources and a greater ability to work with larger, more integrated providers.

This is exactly why many hospitals, which continue to profit from traditional, fee-for-service systems, are opposed to a merger that promises to expand these value-based plans. Significantly, healthcare providers like Encompass Medical Group, which have done the most to transition their services to the value-based care model, have offered letters of support for the merger.

Regardless of their rhetoric — whether about market definition or historic precedent — the most vocal merger critics are opposed to the deal for a very simple reason: They stand to lose money if the merger is approved. That may be a good reason for some hospitals to wish the merger would go away, but it is a terrible reason to actually stop it.

[This post was first published on June 27, 2016 in The Hill as “Don’t believe the critics, Aetna-Humana merger a good deal for consumers”]

Yesterday the Heritage Foundation published a Legal Memorandum in which I explain the need to reform U.S. Food and Drug Administration (FDA) regulation in order to promote path-breaking biopharmaceutical innovation.  Highlights of this Legal Memorandum are set forth below.

In recent decades, U.S. and foreign biopharmaceutical companies (makers of drugs that are based on chemical compounds or biological materials, such as vaccines) and medical device manufacturers have been responsible for many cures and advances in treatment that have benefited patients’ lives.  New cancer treatments, medical devices, and other medical discoveries are being made at a rapid pace.

The biopharmaceutical industry is also a major generator of American economic growth and a high-technology leader.  The U.S. biopharmaceutical sector directly employs over 810,000 workers, supports 3.4 million American jobs across the country, contributed almost one-fourth of all domestic research and development (R&D) funded by U.S. businesses in 2013—more than any other single sector—and contributes roughly $790 billion a year to the American economy, according to one study.   American biopharmaceutical firms collaborate with hospitals, universities, and research institutions around the country to provide clinical trials and treatments and to create new jobs.  Their products also boost workplace productivity by treating medical conditions, thereby reducing absenteeism and disability leave.

Properly tailored and limited regulation of biopharmaceutical products and medical devices helps to promote public safety, but FDA regulations as currently designed hinder and slow the innovation process and retard the diffusion of medical improvements.  Specifically, research indicates that current regulatory norms and the delays they engender unnecessarily bloat costs, discourage research and development, slow the pace of health improvements for millions of Americans, and harm the American economy.  These factors should be kept in mind by Congress and the Administration as they study how best to reform (and, where appropriate, eliminate) FDA regulation of drugs and medical devices.  (One particular reform that appears to be unequivocally beneficial and thus worthy of immediate consideration is the prohibition of any FDA restrictions on truthful speech concerning off-label drug uses—speech that benefits consumers and enjoys First Amendment protection.)  Reducing the burdens imposed on inventors by the FDA would allow more drugs to get to the market more quickly so that patients could pursue new and potentially lifesaving treatments.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six to eight week course of treatment for toxoplasma gondii infections.

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world for very cheap prices. The per tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples, using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny generics, even after their drugs have won approval, access to the shared REMS systems they need in order to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn’t the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, in 2014 Valeant Pharmaceuticals attempted a hostile takeover of Allergan Pharmaceuticals with the help of the hedge fund Pershing Square. Although that bid failed, Valeant continued to acquire drug portfolios, adopt restricted distribution programs, and raise prices on off-patent drugs substantially. In 2015 it raised the prices of two life-saving heart drugs by 212% and 525% respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS-compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The House bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are well crafted to deter rent-seeking behavior without overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most heavily those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and, if it is unreasonably blocked, to file a civil action that would transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons feeling comfortable supporting it.