On February 22, 2017, an all-star panel at the Heritage Foundation discussed “Reawakening the Congressional Review Act” – a statute that gives Congress sixty legislative days to disapprove a federal rule (subject to presidential veto), under an expedited review process not subject to Senate filibuster.  Until recently, the CRA was believed to apply only to newly promulgated regulations.  Thus, according to conventional wisdom, while the CRA might prove useful in blocking some non-cost-beneficial Obama Administration midnight regulations, it could not be invoked to attack serious regulatory agency overreach dating back many years.

Last week’s panel, however, demonstrated that conventional wisdom is no match for the careful textual analysis of laws – the sort of analysis that too often is given short shrift by commentators.  Applying straightforward statutory construction techniques, my Heritage colleague Paul Larkin argued persuasively that the CRA actually reaches back over 20 years to authorize congressional assessment of regulations that were not properly submitted to Congress.  Paul’s short February 15 article on the CRA (reprinted from The Daily Signal), intended for general public consumption, lays it all out, and merits being reproduced in its entirety:

In Washington, there is a saying that regulators never met a rule they didn’t like.  Federal agencies, commonly referred to these days as the “fourth branch of government,” have been binding the hands of the American people for decades with overreaching regulations. 

All the while, Congress sat idly by and let these agencies assume their new legislative role.  What if Congress could not only reverse this trend, but undo years of burdensome regulations dating as far back as the mid-1990s?  It turns out it can, with the Congressional Review Act. 

The Congressional Review Act is Congress’ most recent effort to trim the excesses of the modern administrative state.  Passed into law in 1996, the Congressional Review Act allows Congress to invalidate an agency rule by passing a joint resolution of disapproval, not subject to a Senate filibuster, that the president signs into law. 

Under the Congressional Review Act, Congress is given 60 legislative days to pass a resolution of disapproval and obtain the president’s signature; if no such resolution is enacted within that window, the rule goes into effect.  But the review act also sets forth a specific procedure for submitting new rules to Congress that executive agencies must carefully follow.

If they fail to follow these specific steps, Congress can vote to disapprove the rule even if it has long been published in the Federal Register.  In other words, if the agency failed to follow its obligations under the Congressional Review Act, the 60-day legislative window never officially started, and the rule remains subject to congressional disapproval.

The legal basis for this becomes clear when we read the text of the Congressional Review Act. 

According to the statute, the period that Congress has to review a rule does not commence until the later of two events: either (1) the date when an agency publishes the rule in the Federal Register, or (2) the date when the agency submits the rule to Congress.

This means that if a currently published rule was never submitted to Congress, then the nonexistent “submission” qualifies as “the later” event, and the rule remains subject to congressional review.
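For readers who like to see logic made explicit, the statute’s “later of” trigger can be sketched as a toy function (purely illustrative, and no substitute for legal analysis – among other things, the actual computation of the 60 “legislative days” is far more intricate than any calendar arithmetic):

```python
from datetime import date
from typing import Optional

def review_window_opens(published: Optional[date],
                        submitted: Optional[date]) -> Optional[date]:
    """Stylized model of the CRA trigger: the review period begins on the
    LATER of (1) publication in the Federal Register and (2) submission
    of the rule to Congress.  If the rule was never submitted, the later
    event never occurred, so the window never opened."""
    if published is None or submitted is None:
        return None  # a missing trigger event means the clock never started
    return max(published, submitted)

# A rule published in 1998 but never submitted to Congress:
print(review_window_opens(date(1998, 3, 1), None))              # None
# A rule published, then submitted to Congress a month later:
print(review_window_opens(date(2020, 1, 1), date(2020, 2, 1)))  # 2020-02-01
```

On this reading, a never-submitted rule simply has no start date for its review period, which is the whole of Paul Larkin’s textual point.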

This places dozens of rules going back to 1996 in the congressional crosshairs.

The definition of “rule” under the Congressional Review Act is quite broad—it includes not only the “junior varsity” statutes that an agency can adopt as regulations, but also the agency’s interpretations of those laws. This is vital because federal agencies often use a wide range of documents to strong-arm regulated parties.

The review act reaches regulations, guidance documents, “Dear Colleague” letters, and anything similar.

The Congressional Review Act is especially powerful because once Congress passes a joint resolution of disapproval and the president signs it into law, the rule is nullified and the agency cannot adopt a “substantially similar” rule absent an intervening act of Congress.

This prevents federal agencies from finding backdoor ways of re-imposing the same regulations.

The Congressional Review Act gives Congress ample room to void rules that it finds are mistaken.  Congress may find it to be an indispensable tool in its efforts to rein in government overreach.

Now that Congress has a president who is favorable to deregulation, lawmakers should seize this opportunity to find some of the most egregious regulations going back to 1996 that, under the Congressional Review Act, still remain subject to congressional disapproval.

In the coming days, my colleagues will provide some specific regulations that Congress should target.

For a fuller exposition of the CRA’s coverage, see Paul’s February 8 Heritage Foundation Legal Memorandum, “The Reach of the Congressional Review Act.”  Hopefully, Congress and the Trump Administration will take advantage of this newly discovered legal weapon as they explore the most efficacious means to reduce the daunting economic burden of federal overregulation (for a subject matter-specific exploration of the nature and size of that burden, see the most recent Heritage Foundation “Red Tape Rising” report, here).

  1. Background

Some of the most pernicious and welfare-inimical anticompetitive activity stems from the efforts of firms to use governmental regulation to raise rivals’ costs or totally exclude them from the market (see, for example, here).  The surest cure for such economic harm is, of course, the elimination or reform of anticompetitive government laws and regulations, but that is hard to do, given the existence of well-entrenched interest groups that lobby to protect their special legally bestowed privileges.

A somewhat different potential limitation on effective competition associated with government arises from the invocation of governmental processes – in particular, judicial and regulatory filings and petitions – to harm competitors and maintain a protected position in the marketplace.  Dealing effectively with this problem presents its own set of difficulties.  Protecting the right to seek governmental redress consistent with existing rules is a key part of our system of limited government and the rule of law.  Indeed, the First Amendment to the U.S. Constitution specifically protects “the right of the people . . . to petition the Government for a redress of grievances”, indicating that government must tread carefully indeed before taking any action that could be deemed a curtailment of such petitioning.  This has particular salience for antitrust, as Scalia Law School Professor David Bernstein has explained in The Heritage Guide to the Constitution:

[T]he right to petition . . . continues to have some independent weight.  Most importantly, under the Noerr-Pennington doctrine, an effort to influence the exercise of government power, even for the purpose of gaining an anticompetitive advantage, does not create liability under the antitrust laws.  Eastern Railroad Presidents Conference v. Noerr Motor Freight, Inc. (1961); United Mine Workers of America v. Pennington (1965). The Supreme Court initially adopted this doctrine under the guise of freedom of speech, but it more precisely finds its constitutional home in the right to petition. Unlike speech, which can often be punished in the antitrust context, as when corporate officers verbally agree to collude, the right to petition confers absolute immunity on efforts to influence government policy in a noncorrupt way.

The Noerr-Pennington doctrine does not, however, totally preclude antitrust enforcers from scrutinizing filings designed to undermine competition.  If a private party is using petitioning as a mere “sham” to impose harm on competitors, without regard to the merits of its claims, Noerr immunity does not apply.  In California Motor Transport v. Trucking Unlimited, 404 U.S. 508 (1972), the Supreme Court held that access to the courts and administrative agencies is an aspect of the right to petition, and hence Noerr’s protection generally extends to administrative and judicial proceedings, as well as to efforts to influence legislative and executive action.  Nevertheless, in so holding, the California Motor Transport Court determined that Noerr did not shield defendants’ intervention in licensing proceedings involving their competitors, because the intervention did not stem from a good faith effort to enforce the law, but rather was solely aimed at imposing costs on and harassing the competitors.  Subsequently, however, in Professional Real Estate Investors v. Columbia Pictures Industries, 508 U.S. 49 (1993) (PRE), the Supreme Court clarified that a high hurdle must be surmounted to demonstrate that petitioning through litigation is a “sham,” namely that (1) the lawsuit in question is “objectively baseless” (“no reasonable litigant could realistically expect success on the merits”) and (2) the suit must reflect a subjective intent to use the governmental process – as opposed to the outcome of that process – as an anticompetitive weapon.

In 2006, the U.S. Federal Trade Commission (FTC) issued a staff report on how to maximize the competition values embodied in the antitrust laws, while fully respecting the core values identified in Noerr, when analyzing three types of conduct:  filings that seek only a ministerial government response, material misrepresentations, and repetitive petitioning.  More specifically, the report recommended that the FTC seek appropriate opportunities, in litigation or amicus curiae filings, to:  (1) clarify that Noerr does not protect filings, outside of the political arena, that seek no more than a ministerial government act; (2) clarify that Noerr does not protect material misrepresentations, outside of the political arena, made to government bodies in the regulatory context (in government standard setting and drug approval proceedings, for example); and (3) clarify that Noerr does not protect patterns of repetitive petitioning, outside of the political arena, filed without regard to merit, that employ government processes, rather than the outcomes of those processes, to harm competitors in an attempt to suppress competition.

Since the issuance of the 2006 staff report, however, the FTC has not aggressively pursued litigation to narrow the scope of the Noerr doctrine (perhaps reflecting, at least in part, the difficulty of bringing good cases in light of PRE’s requirements).  Rather, the Commission’s efforts to curb antitrust immunity have centered primarily on constraining the reach of the “state action” doctrine (anticompetitive conduct flying under the color of state authority), an area in which it has achieved some notable successes (see, for example, here).

  2. The FTC’s February 2017 Shire ViroPharma Injunctive Action

There is at least one indication, however, that the FTC may be turning anew to the problem of anticompetitive petitioning.  On February 7, 2017, the Commission filed a complaint in federal district court charging Shire ViroPharma Inc. (ViroPharma) with violating the antitrust laws by abusing government processes to delay generic competition to its branded prescription drug, Vancocin HCl Capsules.  The complaint alleges that because of ViroPharma’s actions, consumers and other purchasers paid hundreds of millions of dollars more for their medication.

Vancocin Capsules are used to treat C. difficile-associated diarrhea, or CDAD, a sometimes life-threatening bacterial infection. According to the complaint, Vancocin Capsules are not reasonably interchangeable with any other medications used to treat CDAD, and no other medication constrained ViroPharma’s pricing of Vancocin Capsules. After ViroPharma acquired the rights to Vancocin Capsules in 2004, it raised the price of the drug significantly and continued to do so through 2011.

The FTC alleges that, to maintain its monopoly, ViroPharma waged a campaign of serial, repetitive, and unsupported filings with the U.S. Food and Drug Administration (FDA) and the courts to delay the FDA’s approval of generic Vancocin Capsules and to exclude competition. According to the FTC, ViroPharma submitted 43 filings with the FDA and filed three lawsuits against the FDA between 2006 and 2012. The FTC asserts that the number and frequency of ViroPharma’s petitioning at the FDA far exceeded that of any other drug company with respect to any other drug.  The FTC further claims that ViroPharma knew that it was the FDA’s practice to refrain from approving any generic applications until it resolved all pending relevant “citizen petition” filings.  According to the FTC, ViroPharma intended its serial filings to delay the approval of generics, and thus to forestall competition and price reductions.

The FTC seeks a court order permanently prohibiting ViroPharma from submitting repetitive and baseless filings with the FDA and the courts, and from similar and related conduct as well as any other necessary equitable relief, including restitution and disgorgement.

  3. Conclusion

Win or lose, the FTC is to be commended for seeking a federal court clarification of what constitutes “baseless” petitioning for purposes of Noerr.  As numerous scholars have pointed out, the Noerr “petitioning” doctrine is riddled with confusion (see, for example, here), and the topic may once again be ripe for Supreme Court attention.  The most cost-effective way to reduce the economic burden of anticompetitive petitioning, however, may be not through litigation, which is time-consuming and uncertain (although it may play a useful role), but rather through regulatory reform that reduces the opportunities for manipulating overly complex regulatory systems in an anticompetitive fashion.  Stay tuned.

 

My colleague, Neil Turkewitz, begins his fine post for Fair Use Week (read: crashing Fair Use Week) by noting that

Many of the organizations celebrating fair use would have you believe, because it suits their analysis, that copyright protection and the public interest are diametrically opposed. This is merely a rhetorical device, and is a complete fallacy.

If I weren’t a recovering law professor, I would just end there: that about sums it up, and “the rest is commentary,” as they say. Alas….  

All else equal, creators would like as many people to license their works as possible; there’s no inherent incompatibility between “incentives and access” (which is just another version of the fallacious “copyright protection versus the public interest” trope). Everybody wants as much access as possible. Sure, consumers want to pay as little as possible for it, and creators want to be paid as much as possible. That’s a conflict, and at the margin it can seem like a conflict between access and incentives. But it’s not a fundamental, philosophical, and irreconcilable difference — it’s the last 15 minutes of negotiation before the contract is signed.

Reframing what amounts to a fundamental agreement into a pitched battle for society’s soul is indeed a purely rhetorical device — and a mendacious one, at that.

The devil is in the details, of course, and there are still disputes on the margin, as I said. But it helps to know what they’re really about, and why they are so far from the fanciful debates the copyright scolds wish we were having.

First, price is, in fact, a big deal. For the creative industries it can be the difference between, say, making one movie or a hundred, and for artists it can be the difference between earning a livelihood writing songs or packing it in for a desk job.

But despite their occasional lip service to the existence of trade-offs, many “fair-users” see price — i.e., licensing agreements — as nothing less than a threat to social welfare. After all, the logic runs, if copies can be made at (essentially) zero marginal cost, a positive price is just extortion. They say, “more access!,” but they don’t mean, “more access at an agreed-upon price;” they mean “zero-price access, and nothing less.” These aren’t the same thing, and when “fair use” is a stand-in for “zero-price use,” fair-users are moving the goalposts — and being disingenuous about it.

The other, related problem, of course, is piracy. Sometimes rightsholders’ objections to the expansion of fair use are about limiting access. But typically that’s true only where fine-tuned contracting isn’t feasible, and where the only realistic choice they’re given is between no access for some people, and pervasive (and often unstoppable) piracy. There are any number of instances where rightsholders have no realistic prospect of efficiently negotiating licensing terms and receiving compensation, and would welcome greater access to their works even without a license — as long as the result isn’t also (or only) excessive piracy. The key thing is that, in such cases, opposition to fair use isn’t opposition to reasonable access, even free access. It’s opposition to piracy.

Time-shifting with VCRs and space-shifting with portable mp3 players (to take two contentious historical examples) fall into this category (even if they are held up — as they often are — by the fair-users as totems of their fanciful battle). At least at the time of the Sony and Diamond Rio cases, when there was really no feasible way to enforce licenses or charge differential prices for such uses, the choice rightsholders faced was effectively all-or-nothing, and they had to pick one. I’m pretty sure, all else equal, they would have supported such uses, even without licenses and differential compensation — except that the piracy risk was so significant that it swamped the likely benefits, tilting the scale toward “nothing” instead of “all.”

Again, the reality is that creators and rightsholders were confronted with a choice between two imperfect options; neither was likely “right,” and they went with the lesser evil. But one can’t infer from that constrained decision an inherent antipathy to fair use. Sadly, such decisions have to be made in the real world, not law reviews and EFF blog posts. As economists Benjamin Klein, Andres Lerner and Kevin Murphy put it regarding the Diamond Rio case:

[R]ather than representing an attempt by copyright-holders to increase their profits by controlling legally established “fair uses,”… the obvious record-company motivation is to reduce the illegal piracy that is encouraged by the technology. Eliminating a “fair use” [more accurately, “opposing an expansion of fair use” -ed.] is not a benefit to the record companies; it is an unfortunate cost they have to bear to solve the much larger problem of infringing uses. The record companies face competitive pressure to avoid these costs by developing technologies that distinguish infringing from non-infringing copying.

This last point is important, too. Fair-users don’t like technological protection measures, either, even if they actually facilitate licensing and broader access to copyrighted content. But that really just helps to reveal the poverty of their position. They should welcome technology that expands access, even if it also means that it enables rightsholders to fine-tune their licenses and charge a positive price. Put differently: Why do they hate Spotify!?

I’m just hazarding a guess here, but I suspect that the antipathy to technological solutions goes well beyond the short-term limits on some current use of content that copyright minimalists think shouldn’t be limited. If technology, instead of fair use, is truly determinative of the extent of zero-price access, then their ability to seriously influence (read: rein in) the scope of copyright is diminished. Fair use is amorphous. They can bring cases, they can lobby Congress, they can pen strongly worded blog posts, and they can stage protests. But they can’t do much to stop technological progress. Of course, technology does at least as much to limit the enforceability of licenses and create new situations where zero-price access is the norm. But still, R&D is a lot harder than PR.

What’s more, if technology were truly determinative, it would frequently mean that former fair uses could become infringing at some point (or vice versa, of course). Frankly, there’s no reason for time-shifting of TV content to continue to be considered a fair use today. We now have the technology to both enable time shifting and to efficiently license content for the purpose, charge a differential price for it, and enforce the terms. In fact, all of that is so pervasive today that most users do pay for time-shifting technologies, under license terms that presumably define the scope of their right to do so; they just may not have read the contract. Where time-shifting as a fair use rears its ugly head today is in debates over new, infringing technology where, in truth, the fair use argument is really a malleable pretext to advocate for a restriction on the scope of copyright (e.g., Aereo).

In any case, as the success of business models like Spotify and Netflix (to say nothing of Comcast’s X1 interface and new Xfinity Stream app) attests, technology has enabled users to legitimately engage in what once seemed conceivable only under fair use. Yes, at a price — one that millions of people are willing to pay. It is surely the case that rightsholders’ licensing of technologies like these has made content more accessible, to more people, and with higher-quality service, than a regime of expansive unlicensed use could ever have done.

At the same time, let’s not forget that, often, even when they could efficiently distribute content only at a positive price, creators offer up scads of content for free, in myriad ways. Sure, the objective is to maximize revenue overall by increasing exposure, price discriminating, or enhancing the quality of paid-for content in some way — but so what? More content is more content, and easier access is easier access. All of that uncompensated distribution isn’t rightsholders nodding toward the copyright scolds’ arguments; it’s perfectly consistent with licensing. Obviously, the vast majority of music, for example, is listened to subject to license agreements, not because of fair use exceptions or rightsholders’ largesse.

For the vast majority of creators, users and uses, licensed access works, and gets us massive amounts of content and near ubiquitous access. The fair use disputes we do have aren’t really about ensuring broad access; that’s already happening. Rather, those disputes are either niggling over the relatively few ambiguous margins on the one hand, or, on the other, fighting the fair-users’ manufactured, existential fight over whether copyright exceptions will subsume the rule. The former is to be expected: Copyright boundaries will always be imperfect, and courts will always be asked to make the close calls. The latter, however, is simply a drain on resources that could be used to create more content, improve its quality, distribute it more broadly, or lower prices.

Copyright law has always been, and always will be, operating in the shadow of technology — technology both for distribution and novel uses, as well as for pirating content. The irony is that, as digital distribution expands, it has dramatically increased the risk of piracy, even as copyright minimalists argue that the low costs of digital access justify a more expansive interpretation of fair use — which would, in turn, further increase the risk of piracy.

Creators’ opposition to this expansion has nothing to do with opposition to broad access to content, and everything to do with ensuring that piracy doesn’t overwhelm their ability to get paid, and to produce content in the first place.

Even were fair use to somehow disappear tomorrow, there would be more and higher-quality content, available to more people in more places, than ever before. But creators have no interest in seeing fair use disappear. What they do have is an interest in licensing their content as broadly as possible when doing so is efficient, and in minimizing piracy. Sometimes legitimate fair-use questions get caught in the middle. We could and should have a reasonable debate over the precise contours of fair use in such cases. But the false dichotomy of creators against users makes that extremely difficult. Until the disingenuous rhetoric is clawed back, we’re stuck with needless fights that don’t benefit either users or creators — although they do benefit the policy scolds, academics, wonks and businesses that foment them.

The Legatum Institute (Legatum) is “an international think tank based in London and a registered UK charity [that] . . . focuses on understanding, measuring, and explaining the journey from poverty to prosperity for individuals, communities, and nations.”  Legatum’s annual “Legatum Prosperity Index . . . measure[s] and track[s] the performance of 149 countries of the world across multiple categories including health, education, the economy, social capital, and more.”

Among other major Legatum initiatives is a “Special Trade Commission” (STC) created in the wake of the United Kingdom’s (UK) vote to leave the European Union (Brexit).  According to Legatum, “the STC aims to present a roadmap for the many trade negotiations which the UK will need to undertake now.  It seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  STC Commissioners (I am one of them) include former international trade negotiators and academic experts from Australia, New Zealand, Singapore, Switzerland, Canada, Mexico, the United Kingdom and the United States (see here).  The Commissioners serve in their private capacities, representing their personal viewpoints.  Since last summer, the STC has released (and will continue to release) a variety of papers on the specific legal and economic implications of Brexit negotiations, available on Legatum’s website (see here, here, here, here, and here).

From February 6-8 I participated in the inaugural STC Conference in London, summarized by Legatum as follows:

During the Conference the[] [STC Commissioners] began to outline a vision for Britain’s exit from the European Union and the many trade negotiations that the UK will need to undertake. They discussed the state of transatlantic trade, the likely impact of the Trump administration on those ties as well as the NAFTA [North American Free Trade Agreement among the United States, Canada, and Mexico] renegotiation, the prospects for TTIP [Transatlantic Trade and Investment Partnership negotiations between the United States and the European Union, no longer actively being pursued] and the resurrection of TPP [Trans-Pacific Partnership negotiations between the United States and certain Pacific Rim nations, U.S. participation withdrawn by President Trump], the future of the WTO [World Trade Organization], and the opportunities for Britain to pursue unilateral, plurilateral and multilateral liberalisation. A future Prosperity Zone between like-minded countries was repeatedly highlighted as a key opportunity for post-Brexit Britain to engage in a high-standards, growth-creating trade agreement.

The Commissioners spoke publicly to a joint meeting attended by the House of Commons and the House of Lords as well as the International Trade Committee in the House of Commons and at a public event hosted at the Legatum Institute where they shared their expertise and recommendations for the UK’s exit strategy.

The broad theme of the STC Commissioners’ presentations was that the Brexit process, if handled appropriately, can set the stage for greater economic liberalization, international trade expansion, and heightened economic growth and prosperity, in the United Kingdom and elsewhere.  In particular, the STC recommended that the UK Government pursue four different paths simultaneously over the next several years, in connection with its withdrawal from the European Union:

  1. Work to further lower UK trade barriers beyond the levels set by the UK’s current World Trade Organization (WTO) commitments, by pledging to apply tariffs for some products at levels below its WTO “bound” tariff rate commitments and well below the “Common External Tariff” rates the UK currently applies to non-EU imports as an EU member; and by unilaterally liberalizing other aspects of its trade policy, in areas such as government procurement.
  2. Propose plurilateral free trade agreements between the UK and a few like-minded nations that have among the world’s most free and open economies, such as Australia, New Zealand, and Singapore; and work to further liberalize global technical standards through active participation in such organizations as the Basel Convention (cross-boundary hazardous waste disposal) and IOSCO (international securities regulation).
  3. Propose bilateral free trade agreements between the UK and the United States, Switzerland, and perhaps other countries, designed to expand commerce with key UK trading partners, as well as securing a comprehensive free trade agreement with the EU.
  4. Unilaterally reduce UK regulatory burdens without regard to trade negotiations as part of a domestic “competitiveness agenda,” involving procompetitive regulatory reform and the elimination of tariffs to the greatest extent feasible; a UK Government productivity commission employing cost-benefit analysis could be established to carry out this program (beginning in the late 1980s, the Australian Government reduced its regulatory burdens and spurred economic growth, with the assistance of a national productivity commission).

These “four pillars” of trade-liberalizing reform are complementary and self-reinforcing.  The reduction of UK trade barriers should encourage other countries to liberalize and consider joining plurilateral free trade agreements already negotiated with the UK, or perhaps consider exploring their own bilateral trade arrangements with the UK.  Furthermore, individual nations’ incentives to gain greater access to the UK market through trade negotiations should be further enhanced by the unilateral reduction of UK regulatory constraints.

As trade barriers drop, UK consumers (including poorer consumers) should perceive a direct benefit from economic liberalization, providing political support for continued liberalization.  And the economic growth and innovation spurred by this virtuous cycle should encourage the European Union and its member states to “join the club” by paring back common external tariffs and by loosening regulatory impediments to international competition, such as restrictive standards and licensing schemes.  In short, the four paths provide the outlines for a “win-win” strategy that would be beneficial to the UK and its trading partners, both within and outside of the EU.

Admittedly, the STC’s proposals may have to overcome opposition from well-organized interest groups that would be harmed by liberalization, and may be viewed with some skepticism by risk-averse government officials and politicians.  The task of the STC will be to continue to work with the UK Government and outside stakeholders to convince them that Brexit strategies centered on bilateral and plurilateral trade liberalization, in tandem with regulatory relief, provide a way forward that will prove mutually beneficial to producers and consumers in the UK – and in other nations as well.

Stay tuned.

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policy Makers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
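The over- and under-production logic described above can be made concrete with a toy numerical model (the figures here are my own hypothetical illustration, not drawn from the book): a producer who ignores a per-unit external cost chooses a higher level of output than the level that maximizes total social surplus.

```python
# Hypothetical illustration of allocative inefficiency from a negative
# externality.  A factory chooses output q to maximize its private profit,
# ignoring a per-unit external cost borne by bystanders.

def private_profit(q, price=10.0, unit_cost=2.0):
    # Total private cost rises with output: unit_cost*q + 0.5*q**2,
    # so marginal private cost is increasing in q.
    return price * q - (unit_cost * q + 0.5 * q ** 2)

def social_surplus(q, price=10.0, unit_cost=2.0, external_cost=3.0):
    # Society also bears external_cost per unit of output.
    return private_profit(q, price, unit_cost) - external_cost * q

# Search a grid of output levels for the maximizing quantity.
grid = [i / 100 for i in range(0, 1001)]
q_private = max(grid, key=private_profit)
q_social = max(grid, key=social_surplus)

print(q_private)  # 8.0 — the factory's privately optimal output
print(q_social)   # 5.0 — the socially optimal output is lower
```

In this sketch, forcing the producer to bear the $3 external cost (for instance, through a per-unit charge) would bring the private and social optima into alignment, which previews the remedies discussed below.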

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The Internet isn’t that exceptional.

Read the whole thing!

The American Bar Association Antitrust Section’s Presidential Transition Report (“Report”), released on January 24, provides a helpful practitioners’ perspective on the state of federal antitrust and consumer protection enforcement, and propounds a variety of useful recommendations for marginal improvements in agency practices, particularly with respect to improving enforcement transparency and reducing enforcement-related costs.  It also makes several good observations on the interplay of antitrust and regulation, and commendably notes the importance of promoting U.S. leadership in international antitrust policy.  This is all well and good.  Nevertheless, the Report’s discussion of various substantive topics raises a number of concerns that seriously detract from its utility, which I summarize below.  Accordingly, I recommend that the new Administration accord respectful attention to the Report’s discussion of process improvements and international developments, but ignore the Report’s discussion of novel substantive antitrust theories, vertical restraints, and intellectual property.

1.  The Big Picture: Too Much Attention Paid to Antitrust “Possibility Theorems”

In discussing substance, the Report trots out all the theoretical stories of possible anticompetitive harm raised over the last decade or so, such as “product hopping” (“minor” pharmaceutical improvements based on new patents that are portrayed as exclusionary devices), “contracts that reference rivals” (discount schemes that purportedly harm competition by limiting sourcing from a supplier’s rivals), “hold-ups” by patentees (demands by patentees for “overly high” royalties on their legitimate property rights), and so forth.  What the Report ignores are the costs that these new theories impose on the competitive system, and, in particular, on incentives to innovate.  These new theories often are directed at novel business practices that may have the potential to confer substantial efficiency benefits – including enhanced innovation and economic growth – on the American economy.  Unproven theories of harm may disincentivize such practices and impose a hidden drag on the economy.  (One is reminded of Nobel Laureate Ronald Coase’s lament (see here) that “[i]f an economist finds something . . . that he does not understand, he looks for a monopoly explanation. And as in this field we are rather ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on monopoly explanations frequent.”)  Although the Report generally avoids taking a position on these novel theories, the lip service it gives them implicitly encourages federal antitrust agency investigations designed to deploy these shiny new antitrust toys.  This in turn leads to a misallocation of resources (unequivocally harmful activity, especially hard core cartel conduct, merits the highest priority) and generates potentially high error and administrative costs, at odds with a sensible decision-theoretic approach to antitrust administration (see here and here).
In sum, the Trump Administration should pay no attention to the Report’s commentary on new substantive antitrust theories.

2.  Vertical Contractual Restraints

The Report inappropriately (and, in my view, amazingly) suggests that antitrust enforcers should give serious attention to vertical contractual restraints:

Recognizing that the current state of RPM law in both minimum and maximum price contexts requires sophisticated balancing of pro- and anti-competitive tendencies, the dearth of guidance from the Agencies in the form of either guidelines or litigated cases leaves open important questions in an area of law that can have a direct and substantial impact on consumers. For example, it would be beneficial for the Agencies to provide guidance on how they think about balancing asserted quality and service benefits that can flow from maintaining minimum prices for certain types of products against the potential that RPM reduces competition to the detriment of consumers. Perhaps equally important, the Agencies should provide guidance on how they would analyze the vigor of interbrand competition in markets where some producers have restricted intrabrand competition among distributors of their products.    

The U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) largely have avoided bringing pure contractual vertical restraints cases in recent decades, and for good reason.  Although vertical restraints theoretically might be used to facilitate horizontal collusion (say, to enforce a distributors’ cartel) or anticompetitive exclusion (say, to enable a dominant manufacturer to deny rivals access to efficient distribution), such cases appear exceedingly rare.  Real-world empirical research suggests vertical restraints generally are procompetitive (see, for example, here).  What’s more, a robust theoretical literature supports efficiency-based explanations for vertical restraints (see, for example, here), as recognized by the U.S. Supreme Court in its 2007 Leegin decision.  An aggressive approach to vertical restraints enforcement would ignore this economic learning, likely yield high error costs, and dissuade businesses from considering efficient vertical contracts, to the detriment of social welfare.  Moreover, antitrust prosecutorial resources are limited, and optimal policy indicates they should be directed to the most serious competitive problems.  The Report’s references to “open important questions” and the need for “guidance” on vertical restraints appear oblivious to these realities.  Furthermore, the Report’s mention of “balancing” interbrand versus intrabrand effects reflects a legalistic approach to vertical contracts that is at odds with modern economic analysis.

In short, the Report’s discussion of vertical restraints should be accorded no weight by new enforcers, and antitrust prosecutors would be well advised not to include vertical restraints investigations on their list of priorities.

3.  IP Issues

The Report recommends that the DOJ and FTC (“Agencies”) devote substantial attention to issues related to the unilateral exercise of patent rights, “holdup” and “holdout”:

We . . . recommend that the Agencies gather reliable and credible information on—and propose a framework for evaluating—holdup and holdout, and the circumstances in which either may be anticompetitive. The Agencies are particularly well-suited to gather evidence and assess competitive implications of such practices, which could then inform policymaking, advocacy, and potential cases. The Agencies’ perspectives could contribute valuable insights to the larger antitrust community.

Gathering information with an eye to bringing potential antitrust cases involving the unilateral exercise of patent rights through straightforward patent licensing involves a misapplication of resources.  As Professor Josh Wright and Judge Douglas Ginsburg, among others, have pointed out, antitrust is not well-suited to dealing with disputes between patentees and licensees over licensing rates – private law remedies are best designed to handle such contractual controversies (see, for example, here).  Furthermore, using antitrust law to depress returns to unilateral patent licenses threatens to reduce dynamic efficiency and create disincentives for innovation (see FTC Commissioner (and currently Acting Chairman) Maureen Ohlhausen’s thoughtful article, here).  The Report regrettably ignores this important research.  The Report instead should have called upon the FTC and DOJ to drop their ill-conceived recent emphasis on unilateral patent exploitation, and to focus instead on problems of collusion among holders of competing patented technologies.

That is not all.  The Report’s “suggest[ion] that the [federal antitrust] Agencies consider offering guidance to the ITC [International Trade Commission] about potential SEP holdup and holdout” is a recipe for weakening legitimate U.S. patent rights that are threatened by foreign infringers.  American patentees already face challenges from over a decade’s worth of Supreme Court decisions that have constrained the value of their holdings.  As I have explained elsewhere, efforts to limit the ability of the ITC to issue exclusion orders in the face of infringement overseas further diminishes the value of American patents and disincentivizes innovation (see here).  What’s worse, the Report is not only oblivious of this reality, it goes out of its way to “put a heavy thumb on the scale” in favor of patent infringers, stating (footnote omitted):

If the ITC were to issue exclusion orders to SEP owners under circumstances in which injunctions would not be appropriate under the [Supreme Court’s] eBay standard [for patent litigation], the inconsistency could induce SEP owners to strategically use the ITC in an effort to achieve settlements of patent disputes on terms that might require payment of supracompetitive royalties.  Though it is not clear how likely this is or whether the risk has led to supracompetitive prices in the past, this dynamic could lead to holdup by SEP owners and unconscionably higher royalties.

This commentary on the possibility of “unconscionable” royalties reads like a press release authored by patent infringers.  In fact, there is a dearth of evidence of hold-up, let alone hold-up-related “unconscionable” royalties.  Moreover, it is most decidedly not the role of antitrust enforcers to rule on the “unconscionability” of the unilateral pricing decision of a patent holder (apparently the Report writers forgot to consult Justice Scalia’s Trinko opinion, which emphasizes the right of a monopolist to charge a monopoly price).  Furthermore, not only is this discussion wrong-headed, it flies in the face of concerns expressed elsewhere in the Report regarding ill-advised mandates imposed by foreign antitrust enforcement authorities.  (Recently certain foreign enforcers have shown themselves all too willing to countenance “excessive” patent royalty claims in cases involving American companies).

Finally, other IP-related references in the Report similarly show a lack of regulatory humility.  The Report’s invocation of theoretical harms from the disaggregation of complementary patents, and from “product hopping” patents (see above), among other novel practices, implicitly encourages the FTC and DOJ (not to mention private parties) to consider bringing cases based on expansive theories of liability, without regard to the costs of the antitrust system as a whole (including the chilling of innovative business activity).  Such cases might benefit the antitrust bar, but prioritizing them would be at odds with the key policy objective of antitrust, the promotion of consumer welfare.

 

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

The Federal Trade Commission’s (FTC) regrettable January 17 filing of a federal court injunctive action against Qualcomm, in the waning days of the Obama Administration, is a blow to its institutional integrity and well-earned reputation as a top-notch competition agency.

Stripping away the semantic gloss, the heart of the FTC’s complaint is that Qualcomm is charging smartphone makers “too much” for licenses needed to practice standardized cellular communications technologies – technologies that Qualcomm developed. This complaint flies in the face of the Supreme Court’s teaching in Verizon v. Trinko that a monopolist has every right to charge monopoly prices and thereby enjoy the full fruits of its legitimately obtained monopoly. But the Qualcomm case is more than just one exceptionally ill-advised example of prosecutorial overreach that (hopefully) will fail and end up on the scrapheap of unsound federal antitrust initiatives. The Qualcomm complaint undoubtedly will be cited by aggressive foreign competition authorities as showing that American antitrust enforcement now recognizes mere “excessive pricing” as a form of “monopoly abuse” – thereby justifying “excessive pricing” cases that are growing like Topsy abroad, especially in East Asia.

Particularly unfortunate is the fact that the Commission chose to authorize the filing by a 2-1 vote, which ignored Commissioner Maureen Ohlhausen’s pithy dissent – a rarity in cases involving the filing of federal lawsuits. Commissioner Ohlhausen’s analysis skewers the legal and economic basis for the FTC’s complaint, and her summary, which includes an outstanding statement of basic antitrust enforcement principles, is well worth noting (footnote omitted):

My practice is not to write dissenting statements when the Commission, against my vote, authorizes litigation. That policy reflects several principles. It preserves the integrity of the agency’s mission, recognizes that reasonable minds can differ, and supports the FTC’s staff, who litigate demanding cases for consumers’ benefit. On the rare occasion when I do write, it has been to avoid implying that I disagree with the complaint’s theory of liability.

I do not depart from that policy lightly. Yet, in the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections.

Let us hope that President Trump makes it an early and high priority to name Commissioner Ohlhausen Acting Chairman of the FTC. The FTC simply cannot afford any more embarrassing and ill-reasoned antitrust initiatives that undermine basic principles of American antitrust enforcement and may be used by foreign competition authorities to justify unwarranted actions against American firms. Maureen Ohlhausen can be counted upon to provide needed leadership in moving the Commission in a sounder direction.

P.S. I have previously published a commentary at this site regarding an unwarranted competition law Statement of Objections directed at Google by the European Commission, a matter which did not involve patent licensing. And for a more general critique of European competition policy along these lines, see here.

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate directly with manufacturers the prices of drugs covered under Medicare Part D.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budget for fiscal years 2016 and 2017 included a provision that would have allowed the HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices and the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and channeling enrollees to the formulary drugs with lower cost-sharing incentives. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to provide only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers by threatening to exclude their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability to not only negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to also establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage,  drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions.  Indeed, estimates suggest that if the Medicare Part D formulary were restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, greatly narrowing the net gains.
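The trade-off in the estimates above is simple arithmetic, and worth making explicit (the dollar figures are the per-beneficiary estimates cited in the text):

```python
# Back-of-the-envelope check of the VA-style formulary trade-off
# described above (per-beneficiary dollar estimates cited in the text).
savings_per_beneficiary = 510   # estimated Medicare Part D savings
lost_plan_value = 405           # estimated reduction in value to enrollees

net_gain = savings_per_beneficiary - lost_plan_value
offset_share = lost_plan_value / savings_per_beneficiary

print(net_gain)                # 105
print(round(offset_share, 2))  # 0.79
```

On these figures, roughly 80 percent of the gross savings would be clawed back in lost plan value, leaving a modest net gain of $105 per beneficiary.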

History has shown that consumers don’t like their access to drugs reduced.  In 2014, Medicare proposed to take antidepressant, antipsychotic, and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.