Archives For regulation

It’s fitting that FCC Chairman Ajit Pai recently compared his predecessor’s jettisoning of the FCC’s light-touch framework for Internet access regulation, without hard evidence, to the Oklahoma City Thunder’s James Harden trade. That infamous 2012 deal broke up a young nucleus of three of the best players in the NBA because keeping all three might someday create salary cap concerns. What few saw coming was a new TV deal, announced in 2014, that sent the salary cap soaring.

If it’s hard to predict how the market will evolve in the closed world of professional basketball, predictions about the path of Internet innovation are an order of magnitude harder — especially for those making crucial decisions with a lot of money at stake.

The FCC’s answer for what it considered to be the dangerous unpredictability of Internet innovation was to write itself a blank check of authority to regulate ISPs in the 2015 Open Internet Order (OIO), embodied in what is referred to as the “Internet conduct standard.” This standard expanded the scope of Internet access regulation well beyond the core principle of preserving openness (i.e., ensuring that any legal content can be accessed by all users) by granting the FCC the unbounded, discretionary authority to define and address “new and novel threats to the Internet.”

When asked (not long after writing it) what the standard meant, former Chairman Tom Wheeler replied,

We don’t really know. We don’t know where things will go next. We have created a playing field where there are known rules, and the FCC will sit there as a referee and will throw the flag.

Somehow, former Chairman Wheeler would have us believe that an amorphous standard that means whatever the agency (or its Enforcement Bureau) says it means created a playing field with “known rules.” But claiming such broad authority is hardly the light-touch approach marketed to the public. Instead, this ill-conceived standard allows the FCC to wade as deeply as it chooses into how an ISP organizes its business and how it manages its network traffic.

Such an approach is destined to undermine, rather than further, the objectives of Internet openness, as embodied in the FCC’s 2005 Internet Policy Statement:

To foster creation, adoption and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition.

Instead, the Internet conduct standard is emblematic of how an off-the-rails quest to heavily regulate one specific component of the complex Internet ecosystem results in arbitrary regulatory imbalances — e.g., between ISPs and over-the-top (OTT) or edge providers that offer similar services such as video streaming or voice calling.

As Boston College law professor Dan Lyons puts it:

While many might assume that, in theory, what’s good for Netflix is good for consumers, the reality is more complex. To protect innovation at the edge of the Internet ecosystem, the Commission’s sweeping rules reduce the opportunity for consumer-friendly innovation elsewhere, namely by facilities-based broadband providers.

This is no recipe for innovation, nor does it coherently distinguish between practices that might impede competition and innovation on the Internet and those that are merely politically disfavored, for any reason or no reason at all.

Free data madness

The Internet conduct standard’s unholy combination of unfettered discretion and the impulse to micromanage can (and will) be deployed without credible justification to the detriment of consumers and innovation. Nowhere has this been more evident than in the confusion surrounding the regulation of “free data.”

Free data, like T-Mobile’s Binge On program, is data consumed by a user but subsidized by a mobile operator or a content provider. The vertical arrangements between operators and content providers that create free data offerings provide many benefits to consumers: they enable subscribers to consume more data (or, for low-income users, to consume data in the first place); they facilitate product differentiation by mobile operators offering a variety of free data plans (including by allowing smaller operators the chance to get a leg up on competitors by assembling a market-share-winning plan); they increase the overall consumption of content; and they reduce users’ cost of obtaining information. Free data is also fundamentally about experimentation. As the International Center for Law & Economics (ICLE) recently explained:

Offering some services at subsidized or zero prices frees up resources (and, where applicable, data under a user’s data cap) enabling users to experiment with new, less-familiar alternatives. Where a user might not find it worthwhile to spend his marginal dollar on an unfamiliar or less-preferred service, differentiated pricing loosens the user’s budget constraint, and may make him more, not less, likely to use alternative services.
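ICLE’s budget-constraint point can be made precise with a bit of notation (the formalization is mine, not ICLE’s). Suppose a subscriber divides a monthly data cap $D$ between a familiar service, $x_f$, and a novel one, $x_n$:

$$
x_f + x_n \le D \quad \text{(no free data)} \qquad \longrightarrow \qquad x_n \le D \quad \text{(familiar service zero-rated)}.
$$

Zero-rating the familiar service drops $x_f$ from the constraint entirely: every megabyte the subscriber would have spent on the familiar service is freed for experimentation, so the feasible set expands and consumption of the less-familiar alternative can only rise or stay the same.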

In December 2015 then-Chairman Tom Wheeler used his newfound discretion to launch a 13-month “inquiry” into free data practices before preliminarily finding some to be in violation of the standard. Without identifying any actual harm, Wheeler concluded that free data plans “may raise” economic and public policy issues that “may harm consumers and competition.”

After assuming the reins at the FCC, Chairman Pai swiftly put an end to that nonsense, saying that the Commission had better things to do (like removing barriers to broadband deployment) than denying consumers free data plans that expand Internet access and are immensely popular, especially among low-income Americans.

The global morass of free data regulation

But as long as the Internet conduct standard remains on the books, it implicitly grants the US’s imprimatur to harmful policies and regulatory capriciousness in other countries that look to the US for persuasive authority. While Chairman Pai’s decisive intervention resolved the free data debate in the US (at least for now), other countries are still grappling with whether to prohibit the practice, allow it, or allow it with various restrictions.

In Europe, the EU’s 2016 net neutrality guidelines (issued by BEREC) left the decision of whether to allow the practice in the hands of national regulators. Consequently, some regulators — in Hungary, Sweden, and the Netherlands (although there the ban was recently overturned in court) — have banned free data practices, while others — in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine — have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs, a state of affairs that is compounded by a lack of data on the consequences of various approaches to their regulation.

In Canada this year, the CRTC issued a decision adopting restrictive criteria for evaluating free data plans. The criteria include the degree to which the treatment of data is agnostic, whether the free data offer is exclusive to certain customers or certain content providers, the impact on Internet openness and innovation, and whether financial compensation is involved. The standard is open-ended, and free data plans as they are offered in the US would, in the CRTC’s words, “likely raise concerns.”

Other regulators are contributing to the confusion through ambiguously framed rules, such as those of the Chilean regulator, Subtel. In a 2014 decision, it found that a free data offer of specific social network apps was in breach of Chile’s Internet rules. Contrary to what is commonly reported, however, Subtel did not ban free data. Instead, it required mobile operators to change how they promote such services, requiring them to state that access to Facebook, Twitter, and WhatsApp was offered “without discounting the user’s balance” instead of “at no cost.” It also required them to disclose the amount of time the offer would be available, but imposed no mandatory limit.

In addition to this confusing regulatory make-work governing how operators market free data plans, the Chilean measures also require that mobile operators offer free data only in conjunction with a paid data plan, to ensure that free data isn’t the only option users have to access the Internet.

The result is that in Chile today free data plans are widely offered by Movistar, Claro, and Entel, and include access to apps such as Facebook, WhatsApp, Twitter, Instagram, Pokémon Go, Waze, Snapchat, Apple Music, Spotify, Netflix, and YouTube — even though Subtel has nominally declared such plans to be in violation of Chile’s net neutrality rules.

Other regulators are searching for palatable alternatives that let them flex their regulatory muscle over Internet access while simultaneously making free data work. The Indian regulator, TRAI, famously banned free data in February 2016. But the story doesn’t end there. After seeing the potential value of free data in unserved and underserved, low-income areas, TRAI proposed implementing government-sanctioned free data. The proposed scheme would provide rural subscribers with 100 MB of free data per month, funded through the country’s universal service fund. To ensure that there would be no vertical agreements between content providers and mobile operators, TRAI recommended introducing third parties, referred to as “aggregators,” that would facilitate mobile-operator-agnostic arrangements.

The result is a nonsensical, if vaguely well-intentioned, threading of the needle between the perceived need to (over-)regulate access providers and the determination to expand access. In other words, notwithstanding the Indian government’s awareness that free data will help to close the digital divide and enhance Internet access, it nonetheless banned private markets from employing private capital to achieve that very result, preferring instead non-market processes that are unlikely to be nearly as nimble or as effective — and that still ultimately offer “non-neutral” options for consumers.

Thinking globally, acting locally (by ditching the Internet conduct standard)

Where it is permitted, free data is undergoing explosive adoption among mobile operators. Currently in the US, for example, all major mobile operators offer some form of free data or unlimited plan to subscribers. And, as a result, free data is proving itself as a business model that supports users’ early-stage experimentation with, and adoption of, augmented reality, virtual reality, and other cutting-edge technologies that represent the Internet’s next wave — but that also use vast amounts of data. Were the US to cut free data off at the knees under the OIO, absent hard evidence of harm, it would substantially undermine this innovation.

The application of the nebulous Internet conduct standard to free data is a microcosm of the current incoherence: a rule rife with uncertainties, aimed at merely theoretical problems, that needlessly saddles companies with enforcement risk, all in the name of preserving and promoting innovation and openness. As even some of the staunchest proponents of net neutrality have recognized, only companies that can afford years of litigation can be expected to thrive in such an environment.

In the face of confusion and uncertainty globally, the US is now poised to provide leadership grounded in sound policy that promotes innovation. As ICLE noted last month, Chairman Pai took a crucial step toward re-imposing economic rigor and the rule of law at the FCC by questioning the unprecedented and ill-supported expansion of FCC authority that undergirds the OIO in general and the Internet conduct standard in particular. Today the agency will take the next step by voting on Chairman Pai’s proposed rulemaking. Wherever the new proceeding leads, it’s a welcome opportunity to analyze the issues with a degree of rigor that has thus far been appallingly absent.

And we should not forget that there’s a direct solution to these ambiguities that would avoid the undulations of subsequent FCC policy fights: Congress could (and should) pass legislation implementing a regulatory framework grounded in sound economics and empirical evidence, one that allows consumers to benefit from the vast number of procompetitive vertical agreements (such as free data plans), while still providing a means of policing conduct that may actually harm consumers.

The Golden State Warriors are the heavy odds-on favorite to win another NBA Championship this summer, led by former OKC player Kevin Durant. And James Harden is a contender for league MVP. We can’t always turn back the clock on a terrible decision, hastily made before enough evidence has been gathered, but Chairman Pai’s efforts present a rare opportunity to do so.

Today, the International Center for Law & Economics (ICLE) released a study updating our 2014 analysis of the economic effects of the Durbin Amendment to the Dodd-Frank Act.

The new paper, Unreasonable and Disproportionate: How the Durbin Amendment Harms Poorer Americans and Small Businesses, by ICLE scholars Todd J. Zywicki, Geoffrey A. Manne, and Julian Morris, can be found here; a Fact Sheet highlighting the paper’s key findings is available here.

Introduced as part of the Dodd-Frank Act in 2010, the Durbin Amendment sought to reduce the interchange fees assessed by large banks on debit card transactions. In the words of its primary sponsor, Sen. Richard Durbin, the Amendment aspired to help “every single Main Street business that accepts debit cards keep more of their money, which is a savings they can pass on to their consumers.”

Unfortunately, although the Durbin Amendment did generate benefits for big-box retailers, ICLE’s 2014 analysis found that it had actually harmed many other merchants and imposed substantial net costs on the majority of consumers, especially those from lower-income households.

In the current study, we analyze a welter of new evidence and arguments to assess whether time has ameliorated or exacerbated the Amendment’s effects. Our findings in this report expand upon and reinforce our findings from 2014:

Relative to the period before the Durbin Amendment, almost every segment of the interrelated retail, banking, and consumer finance markets has been made worse off as a result of the Amendment.

Predictably, the removal of billions of dollars in interchange fee revenue has led to the imposition of higher bank fees and reduced services for banking consumers.

In fact, millions of households, regardless of income level, have been adversely affected by the Durbin Amendment through higher overdraft fees, increased minimum balances, reduced access to free checking, higher ATM fees, and lost debit card rewards, among other things.

Nor is there any evidence that merchants have lowered prices for retail consumers; for many small-ticket items, in fact, prices have been driven up.

Contrary to Sen. Durbin’s promises, in other words, increased banking costs have not been offset by lower retail prices.

At the same time, although large merchants continue to reap a Durbin Amendment windfall, there remains no evidence that small merchants have realized any interchange cost savings — indeed, many have suffered cost increases.

And all of these effects fall hardest on the poor. Hundreds of thousands of low-income households have chosen (or been forced) to exit the banking system, with the result that they face higher costs, difficulty obtaining credit, and complications receiving and making payments — all without offset in the form of lower retail prices.

Finally, the 2017 study also details a new trend that was not apparent when we examined the data three years ago: Contrary to our findings then, the two-tier system of interchange fee regulation (which exempts issuing banks with under $10 billion in assets) no longer appears to be protecting smaller banks from the Durbin Amendment’s adverse effects.

This week the House begins consideration of the Amendment’s repeal as part of Rep. Hensarling’s CHOICE Act. Our study makes clear that the Durbin price-control experiment has proven a failure, and that repeal is, indeed, the only responsible option.

Click on the following links to read:

Full Paper

Fact Sheet

Summary

On February 22, 2017, an all-star panel at the Heritage Foundation discussed “Reawakening the Congressional Review Act” – a statute that gives Congress sixty legislative days to disapprove a proposed federal rule (subject to presidential veto), under an expedited review process not subject to Senate filibuster.  Until very recently, the CRA was believed to apply only to newly promulgated regulations.  Thus, according to conventional wisdom, while the CRA might prove useful in blocking some non-cost-beneficial Obama Administration midnight regulations, it could not be invoked to attack serious regulatory agency overreach dating back many years.

Last week’s panel, however, demonstrated that conventional wisdom is no match for the careful textual analysis of laws – the sort of analysis that too often is given short shrift by commentators.  Applying straightforward statutory construction techniques, my Heritage colleague Paul Larkin argued persuasively that the CRA actually reaches back over 20 years, authorizing congressional assessment of regulations that were never properly submitted to Congress.  Paul’s short February 15 article on the CRA (reprinted from The Daily Signal), intended for general public consumption, lays it all out and merits being reproduced in its entirety:

In Washington, there is a saying that regulators never met a rule they didn’t like.  Federal agencies, commonly referred to these days as the “fourth branch of government,” have been binding the hands of the American people for decades with overreaching regulations. 

All the while, Congress sat idly by and let these agencies assume their new legislative role.  What if Congress could not only reverse this trend, but undo years of burdensome regulations dating as far back as the mid-1990s?  It turns out it can, with the Congressional Review Act. 

The Congressional Review Act is Congress’ most recent effort to trim the excesses of the modern administrative state.  Passed into law in 1996, the Congressional Review Act allows Congress to invalidate an agency rule by passing a joint resolution of disapproval, not subject to a Senate filibuster, that the president signs into law. 

Under the Congressional Review Act, Congress is given 60 legislative days to disapprove a rule and obtain the president’s signature; if it does not act within that window, the rule goes into effect.  But the review act also sets forth a specific procedure for submitting new rules to Congress that executive agencies must carefully follow.

If they fail to follow these specific steps, Congress can vote to disapprove the rule even if it has long since been published in the Federal Register.  In other words, if the agency failed to follow its obligations under the Congressional Review Act, the 60-day legislative window never officially started, and the rule remains subject to congressional disapproval.

The legal basis for this becomes clear when we read the text of the Congressional Review Act. 

According to the statute, the period that Congress has to review a rule does not commence until the later of two events: either (1) the date when an agency publishes the rule in the Federal Register, or (2) the date when the agency submits the rule to Congress.

This means that if a currently published rule was never submitted to Congress, then the nonexistent “submission” qualifies as “the later” event, and the rule remains subject to congressional review.
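The “later of two events” logic is simple enough to sketch in a few lines of code (a purely illustrative toy; the function name and its simplifications are mine):

```python
from datetime import date
from typing import Optional

def review_clock_start(published: Optional[date],
                       submitted: Optional[date]) -> Optional[date]:
    """Start of the CRA review window: the LATER of (1) publication of
    the rule in the Federal Register and (2) submission of the rule to
    Congress. (Simplification: the real statute counts legislative days,
    not calendar dates.)"""
    if published is None or submitted is None:
        # If either event never occurred, there is no "later" event,
        # so the clock never started.
        return None
    return max(published, submitted)

# A rule published in 1996 but never submitted to Congress:
print(review_clock_start(published=date(1996, 7, 1), submitted=None))
# -> None: the window never opened, so it can never have expired.
```

The punchline is the `None` branch: for a rule that was never submitted, the review window never opened, and a window that never opened can never have closed.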

This places dozens of rules going back to 1996 in the congressional crosshairs.

The definition of “rule” under the Congressional Review Act is quite broad—it includes not only the “junior varsity” statutes that an agency can adopt as regulations, but also the agency’s interpretations of those laws. This is vital because federal agencies often use a wide range of documents to strong-arm regulated parties.

The review act reaches regulations, guidance documents, “Dear Colleague” letters, and anything similar.

The Congressional Review Act is especially powerful because once Congress passes a joint resolution of disapproval and the president signs it into law, the rule is nullified and the agency cannot adopt a “substantially similar” rule absent an intervening act of Congress.

This prevents federal agencies from finding backdoor ways of re-imposing the same regulations.

The Congressional Review Act gives Congress ample room to void rules that it finds are mistaken.  Congress may find it to be an indispensable tool in its efforts to rein in government overreach.

Now that Congress has a president who is favorable to deregulation, lawmakers should seize this opportunity to find some of the most egregious regulations going back to 1996 that, under the Congressional Review Act, still remain subject to congressional disapproval.

In the coming days, my colleagues will provide some specific regulations that Congress should target.

For a fuller exposition of the CRA’s coverage, see Paul’s February 8 Heritage Foundation Legal Memorandum, “The Reach of the Congressional Review Act.”  Hopefully, Congress and the Trump Administration will take advantage of this newly discovered legal weapon as they explore the most efficacious means to reduce the daunting economic burden of federal overregulation (for a subject-matter-specific exploration of the nature and size of that burden, see the most recent Heritage Foundation “Red Tape Rising” report, here).

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policy Makers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
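A small numerical example makes the over-production logic concrete (the numbers are mine, not the book’s). Suppose each successive unit of a polluting activity yields the actor a marginal benefit of $10 - q$, costs him $2$ to produce, and imposes an external cost of $4$ on bystanders:

$$
\underbrace{10 - q = 2}_{\text{private calculus}} \;\Rightarrow\; q_{\text{private}} = 8,
\qquad
\underbrace{10 - q = 2 + 4}_{\text{social calculus}} \;\Rightarrow\; q_{\text{efficient}} = 4.
$$

Units five through eight each cost society $6$ but generate less than $6$ in benefit; the actor produces them anyway because $4$ of each unit’s cost lands on someone else.  Reverse the sign of the spillover and the same arithmetic yields under-production of activities, like landscaping, that confer benefits on others.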

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The internet isn’t that exceptional.

Read the whole thing!

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate directly with manufacturers the prices of drugs covered under Medicare Part D.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budget for fiscal years 2016 and 2017 included a provision that would have allowed the HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices and the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and channeling enrollees to the formulary drugs with lower cost-sharing incentives. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to cover a minimum of only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers through the credible threat of excluding their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high-cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.
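A stylized numerical illustration may help show why the exclusion threat is the source of the plans’ bargaining power (the numbers are mine, for intuition only). Suppose two therapeutically similar drugs each list at \$100, and a plan will credibly cover only one of them. Each manufacturer then bids a rebate $r$ for the formulary slot, and competition for the slot pushes the net price $100 - r$ down toward cost. If instead the plan must cover “all or substantially all” drugs in the class, a rebate buys no additional market share, so the equilibrium rebate collapses:

$$
\text{exclusion possible: } r^* > 0 \;\Rightarrow\; p_{\text{net}} < 100;
\qquad
\text{protected class: } r^* = 0 \;\Rightarrow\; p_{\text{net}} = 100.
$$

The same mechanism explains the VA figures discussed below: a stingier formulary is precisely what generates the discounts.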

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability not only to negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage, drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions. Indeed, estimates suggest that if the Medicare Part D formulary were restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, narrowing the net gain to roughly $105 per beneficiary.

History has shown that consumers don’t like having their access to drugs reduced.  In 2014, Medicare proposed to take antidepressant, antipsychotic and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.

Yesterday the Chairman and Ranking Member of the House Judiciary Committee issued the first set of policy proposals following their long-running copyright review process. These proposals were principally aimed at ensuring that the IT demands of the Copyright Office were properly met so that it could perform its assigned functions, and to provide adequate authority for it to adapt its policies and practices to the evolving needs of the digital age.

In response to these modest proposals, Public Knowledge issued a telling statement, calling for enhanced scrutiny of these proposals related to an agency “with a documented history of regulatory capture.”

The entirety of this “documented history,” however, is a paper published by Public Knowledge itself alleging regulatory capture—as evidenced by the fact that 13 people had either gone from the Copyright Office to copyright industries or vice versa over the past 20+ years. The original document was brilliantly skewered by David Newhoff in a post on the indispensable blog, Illusion of More:

To support its premise, Public Knowledge, with McCarthy-like righteousness, presents a list—a table of thirteen former or current employees of the Copyright Office who either have worked for private-sector, rights-holding organizations prior to working at the Office or who are now working for these private entities after their terms at the Office. That thirteen copyright attorneys over a 22-year period might be employed in some capacity for copyright owners is a rather unremarkable observation, but PK seems to think it’s a smoking gun…. Or, as one of the named thirteen, Steven Tepp, observes in his response, PK also didn’t bother to list the many other Copyright Office employees who, “went to Internet and tech companies, the Smithsonian, the FCC, and other places that no one would mistake for copyright industries.” One might almost get the idea that experienced copyright attorneys pursue various career paths or something.

Not content to rest on the laurels of its groundbreaking report of Original Sin, Public Knowledge has now doubled down on its audacity, using its own previous advocacy as the sole basis to essentially impugn an entire agency, without more. But, as advocacy goes, that’s pretty specious. Some will argue that there is an element of disingenuousness in all advocacy, even if it is as benign as failing to identify the weaknesses of one’s arguments—and perhaps that’s true. (We all cite our own work at one time or another, don’t we?) But that’s not the situation we have before us. Instead, Public Knowledge creates its own echo chamber, effectively citing only its own idiosyncratic policy preferences as the “documented” basis for new constraints on the Copyright Office. Even in a world of moral relativism, bubbles of information, and competing narratives about the truth, this should be recognizable as thin gruel.

So why would Public Knowledge expose itself in this manner? What is to be gained by seeking to impugn the integrity of the Copyright Office? There the answer is relatively transparent: PK hopes to capitalize on the opportunity to itself capture Copyright Office policy-making by limiting the discretion of the Copyright Office, and by turning it into an “objective referee” rather than the nation’s steward for ensuring the proper functioning of the copyright system.

PK claims that the Copyright Office should not be involved in making copyright policy, other than perhaps technically transcribing the agreements reached by other parties. Thus, in its “indictment” of the Copyright Office (which it now risibly refers to as the Copyright Office’s “documented history of capture”), PK wrote that:

These statements reflect the many specific examples, detailed in Section II, in which the Copyright Office has acted more as an advocate for rightsholder interests than an objective referee of copyright debates.

Essentially, PK seems to believe that copyright policy should be the province of self-proclaimed “consumer advocates” like PK itself—and under no circumstances the employees of the Copyright Office who might actually deign to promote the interests of the creative community. After all, it is staffed by a veritable cornucopia of copyright industry shills: According to PK’s report, fully 1 of its 400 employees has either left the office to work in the copyright industry or joined the office from industry every 1.5 years, on average, over the past two decades! For reference (not that PK thinks to mention it), some 325 Google employees have worked in government offices in just the past 15 years. And Google is hardly alone in this. Good people get good jobs, whether in government, industry, or both. It’s hardly revelatory.

And never mind that the stated mission of the Copyright Office “is to promote creativity by administering and sustaining an effective national copyright system,” and that “the purpose of the copyright system has always been to promote creativity in society.” And never mind that Congress imbued the Office with the authority to make regulations (subject to approval by the Librarian of Congress) and directed the Copyright Office to engage in a number of policy-related functions, including:

  1. Advising Congress on national and international issues relating to copyright;
  2. Providing information and assistance to Federal departments and agencies and the Judiciary on national and international issues relating to copyright;
  3. Participating in meetings of international intergovernmental organizations and meetings with foreign government officials relating to copyright; and
  4. Conducting studies and programs regarding copyright.

No, according to Public Knowledge the Copyright Office is to do none of these things, unless it does so as an “objective referee of copyright debates.” But nowhere in the legislation creating the Office or amending its functions—nor anywhere else—is that limitation to be found; it’s just created out of whole cloth by PK.

The Copyright Office’s mission is not that of a content neutral referee. Rather, the Copyright Office is charged with promoting effective copyright protection. PK is welcome to solicit Congress to change the Copyright Act and the Office’s mandate. But impugning the agency for doing what it’s supposed to do is a deceptive way of going about it. PK effectively indicts and then convicts the Copyright Office for following its mission appropriately, suggesting that doing so could only have been the result of undue influence from copyright owners. But that’s manifestly false, given its purpose.

And make no mistake why: For its narrative to work, PK needs to define the Copyright Office as a neutral party, and show that its neutrality has been unduly compromised. Only then can Public Knowledge justify overhauling the office in its own image, under the guise of magnanimously returning it to its “proper,” neutral role.

Public Knowledge’s implication that it is a better defender of the “public” interest than those who actually serve in the public sector is a subterfuge, masking its real objective of transforming the nature of copyright law in its own, benighted image. A questionable means to a noble end, PK might argue. Not in our book. This story always turns out badly.

I just posted a new ICLE white paper, co-authored with former ICLE Associate Director Ben Sperry:

When Past Is Not Prologue: The Weakness of the Economic Evidence Against Health Insurance Mergers.

Yesterday the hearing in the DOJ’s challenge to stop the Aetna-Humana merger got underway, and last week phase 1 of the Cigna-Anthem merger trial came to a close.

The DOJ’s challenge in both cases is fundamentally rooted in a timeworn structural analysis: More consolidation in the market (where “the market” is a hotly-contested issue, of course) means less competition and higher premiums for consumers.

Following the traditional structural playbook, the DOJ argues that the Aetna-Humana merger (to pick one) would result in presumptively anticompetitive levels of concentration, and that neither new entry nor divestiture would suffice to introduce sufficient competition. It does not (in its pretrial brief, at least) consider other market dynamics (including especially the complex and evolving regulatory environment) that would constrain the merged firm’s ability to charge supracompetitive prices.

Aetna & Humana, for their part, contend that things are a bit more complicated than the government suggests, that the government defines the relevant market incorrectly, and that

the evidence will show that there is no correlation between the number of [Medicare Advantage organizations] in a county (or their shares) and Medicare Advantage pricing—a fundamental fact that the Government’s theories of harm cannot overcome.

The trial will, of course, feature expert economic evidence from both sides. But until we see that evidence, or read the inevitable papers derived from it, we are stuck evaluating the basic outlines of the economic arguments based on the existing literature.

A host of antitrust commentators, politicians, and other interested parties have determined that the literature condemns the mergers, based largely on a small set of papers purporting to demonstrate that an increase of premiums, without corresponding benefit, inexorably follows health insurance “consolidation.” In fact, virtually all of these critics base their claims on a 2012 case study of a 1999 merger (between Aetna and Prudential) by economists Leemore Dafny, Mark Duggan, and Subramaniam Ramanarayanan, Paying a Premium on Your Premium? Consolidation in the U.S. Health Insurance Industry, as well as associated testimony by Prof. Dafny, along with a small number of other papers by her (and a couple others).

Our paper challenges these claims. As we summarize:

This white paper counsels extreme caution in the use of past statistical studies of the purported effects of health insurance company mergers to infer that today’s proposed mergers—between Aetna/Humana and Anthem/Cigna—will likely have similar effects. Focusing on one influential study—Paying a Premium on Your Premium…—as a jumping off point, we highlight some of the many reasons that past is not prologue.

In short: extrapolated, long-term, cumulative, average effects drawn from 17-year-old data may grab headlines, but they really don’t tell us much of anything about the likely effects of a particular merger today, or about the effects of increased concentration in any particular product or geographic market.

While our analysis doesn’t necessarily undermine the paper’s limited, historical conclusions, it does counsel extreme caution against inferring the study’s applicability to today’s proposed mergers.

By way of reference, Dafny, et al. found average premium price increases from the 1999 Aetna/Prudential merger of only 0.25 percent per year for two years following the merger in the geographic markets they studied. “Health Insurance Mergers May Lead to 0.25 Percent Price Increases!” isn’t quite as compelling a claim as what critics have been saying, but it’s arguably more accurate (and more relevant) than the 7 percent price increase purportedly based on the paper that merger critics like to throw around.

Moreover, different markets and a changed regulatory environment alone aren’t the only things suggesting that past is not prologue. When we delve into the paper more closely we find even more significant limitations on the paper’s support for the claims made in its name, and its relevance to the current proposed mergers.

The full paper is available here.

On November 9, pharmaceutical stocks soared as Donald Trump’s election victory eased concerns about government intervention in drug pricing. Shares of Pfizer rose 8.5%, Allergan PLC was up 8%, and biotech Celgene jumped 10.4%. Drug distributors also gained, with McKesson up 6.4% and Express Scripts climbing 3.4%. Throughout the campaign, Clinton had vowed to take on the pharmaceutical industry and proposed various reforms to rein in drug prices, from levying fines on drug companies that imposed unjustified price increases to capping patients’ annual expenditures on drugs. Pharmaceutical stocks had generally underperformed this year as the market, like much of America, anticipated a Clinton victory.

In contrast, Trump generally had less to say on the subject of drug pricing, hence the market’s favorable response to his unexpected victory. Yet, as the end of the first post-election month draws near, we are still uncertain whether Trump is friend or foe to the pharmaceutical industry. Trump’s only proposal that directly impacts the industry would allow the government to negotiate the prices of Medicare Part D drugs with drug makers. Although this proposal would likely have little impact on prices because existing Part D plans already negotiate prices with drug makers, there is a risk that this “negotiation” could ultimately lead to price controls imposed on the industry. And as I have previously discussed, price controls—whether direct or indirect—are a bad idea for prescription drugs: they lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage, drug shortages in certain markets, and reduced incentives for innovation.

Several of Trump’s other health proposals have mixed implications for the industry. For example, a repeal or overhaul of the Affordable Care Act could eliminate the current tax on drug makers and loosen requirements for Medicaid drug rebates and Medicare part D discounts. On the other hand, if repealing the ACA reduces the number of people insured, spending on pharmaceuticals would fall. Similarly, if Trump renegotiates international trade deals, pharmaceutical firms could benefit from stronger markets or longer patent exclusivity rights, or they could suffer if foreign countries abandon trade agreements altogether or retaliate with disadvantageous terms.

Yet, with drug spending up 8.5 percent last year and recent pricing scandals sparked by price increases of 500 percent or more on individual drugs (e.g., Martin Shkreli, Valeant Pharmaceuticals, Mylan), the current debate over drug pricing is unlikely to fade. Even a Republican-led Congress and White House are likely to heed the public outcry and do something about drug prices.

Drug makers would be wise to stave off any government-imposed price restrictions by voluntarily limiting price increases on important drugs. Major pharmaceutical company Allergan has recently done just this by issuing a “social contract with patients” that made several drug pricing commitments to its customers. Among other assurances, Allergan has promised to limit price increases to single-digit percentages and to abandon the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry. Last year, across the pharmaceutical industry, the prices of the most commonly used brand drugs increased by over 16 percent and, in the last two years before patent expiry, drug makers increased the list prices of drugs by an average of 35 percent. Against that baseline, Allergan’s commitment will produce significant savings over the life of a product, creating hundreds of millions of dollars in savings for health plans, patients, and the health care system.

If Allergan can make this commitment for its entire drug inventory—more than 80 drugs—why haven’t other companies done the same? Similar commitments by other drug makers might be enough to prevent lawmakers from turning to market-distorting reforms, such as price controls, that could end up doing more harm than good for consumers, the pharmaceutical industry, and long-term innovation.