The Legatum Institute (Legatum) is “an international think tank based in London and a registered UK charity [that] . . . focuses on understanding, measuring, and explaining the journey from poverty to prosperity for individuals, communities, and nations.”  Legatum’s annual “Legatum Prosperity Index . . . measure[s] and track[s] the performance of 149 countries of the world across multiple categories including health, education, the economy, social capital, and more.”

Among other major Legatum initiatives is a “Special Trade Commission” (STC) created in the wake of the United Kingdom’s (UK) vote to leave the European Union (Brexit).  According to Legatum, “the STC aims to present a roadmap for the many trade negotiations which the UK will need to undertake now.  It seeks to re-focus the public discussion on Brexit to a positive conversation on opportunities, rather than challenges, while presenting empirical evidence of the dangers of not following an expansive trade negotiating path.”  STC Commissioners (I am one of them) include former international trade negotiators and academic experts from Australia, New Zealand, Singapore, Switzerland, Canada, Mexico, the United Kingdom and the United States (see here).  The Commissioners serve in their private capacities, representing their personal viewpoints.  Since last summer, the STC has released (and will continue to release) a variety of papers on the specific legal and economic implications of Brexit negotiations, available on Legatum’s website (see here, here, here, here, and here).

From February 6-8 I participated in the inaugural STC Conference in London, summarized by Legatum as follows:

During the Conference the [STC Commissioners] began to outline a vision for Britain’s exit from the European Union and the many trade negotiations that the UK will need to undertake. They discussed the state of transatlantic trade, the likely impact of the Trump administration on those ties as well as the NAFTA [North American Free Trade Agreement among the United States, Canada, and Mexico] renegotiation, the prospects for TTIP [Transatlantic Trade and Investment Partnership negotiations between the United States and the European Union, no longer actively being pursued] and the resurrection of TPP [Trans-Pacific Partnership negotiations between the United States and certain Pacific Rim nations, U.S. participation withdrawn by President Trump], the future of the WTO [World Trade Organization] and the opportunities for Britain to pursue unilateral, plurilateral and multilateral liberalisation. A future Prosperity Zone between like-minded countries was repeatedly highlighted as a key opportunity for post-Brexit Britain to engage in a high-standards, growth-creating trade agreement.

The Commissioners spoke publicly to a joint meeting attended by the House of Commons and the House of Lords as well as the International Trade Committee in the House of Commons and at a public event hosted at the Legatum Institute where they shared their expertise and recommendations for the UK’s exit strategy.

The broad theme of the STC Commissioners’ presentations was that the Brexit process, if handled appropriately, can set the stage for greater economic liberalization, international trade expansion, and heightened economic growth and prosperity, in the United Kingdom and elsewhere.  In particular, the STC recommended that the UK Government pursue four different paths simultaneously over the next several years, in connection with its withdrawal from the European Union:

  1. Work to lower UK trade barriers further, beyond the levels set by the UK’s current World Trade Organization (WTO) commitments: by pledging to apply tariffs on some products at levels below the UK’s WTO “bound” tariff rate commitments, and well below the “Common External Tariff” rates the UK currently applies to non-EU imports as an EU member; and by unilaterally liberalizing other aspects of its trade policy, in areas such as government procurement, for example.
  2. Propose plurilateral free trade agreements between the UK and a few like-minded nations whose economies are among the world’s most free and open, such as Australia, New Zealand, and Singapore; and work to further liberalize global technical standards through active participation in such organizations as the Basel Convention (transboundary hazardous waste disposal) and IOSCO (international securities regulation).
  3. Propose bilateral free trade agreements between the UK and the United States, Switzerland, and perhaps other countries, designed to expand commerce with key UK trading partners, while also securing a comprehensive free trade agreement with the EU.
  4. Unilaterally reduce UK regulatory burdens, without regard to trade negotiations, as part of a domestic “competitiveness agenda” involving procompetitive regulatory reform and the elimination of tariffs to the greatest extent feasible; a UK Government productivity commission employing cost-benefit analysis could be established to carry out this program (beginning in the late 1980s, the Australian Government reduced its regulatory burdens and spurred economic growth with the assistance of a national productivity commission).

These “four pillars” of trade-liberalizing reform are complementary and self-reinforcing.  The reduction of UK trade barriers should encourage other countries to liberalize, to consider joining plurilateral free trade agreements already negotiated with the UK, or to explore their own bilateral trade arrangements with the UK.  In addition, individual nations’ incentives to gain greater access to the UK market through trade negotiations should be enhanced by the unilateral reduction of UK regulatory constraints.

As trade barriers drop, UK consumers (including poorer consumers) should perceive a direct benefit from economic liberalization, providing political support for continued liberalization.  And the economic growth and innovation spurred by this virtuous cycle should encourage the European Union and its member states to “join the club” by paring back common external tariffs and by loosening regulatory impediments to international competition, such as restrictive standards and licensing schemes.  In short, the four paths provide the outlines for a “win-win” strategy that would be beneficial to the UK and its trading partners, both within and outside of the EU.

Admittedly, the STC’s proposals may have to overcome opposition from well-organized interest groups that would be harmed by liberalization, and may be viewed with skepticism by risk-averse government officials and politicians.  The task of the STC will be to continue to work with the UK Government and outside stakeholders to convince them that Brexit strategies centered on bilateral and plurilateral trade liberalization, in tandem with regulatory relief, provide a way forward that will prove mutually beneficial to producers and consumers in the UK – and in other nations as well.

Stay tuned.

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policy Makers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
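
For readers who want the logic in symbols, here is a minimal textbook-style sketch (the notation is mine, not drawn from the book): let q denote the level of the activity, B(q) the actor’s own benefit, C(q) the actor’s own cost, and E(q) the cost imposed on bystanders, with the usual assumptions of diminishing marginal benefit and rising marginal cost.

```latex
% Private choice: the actor maximizes B(q) - C(q) and ignores E(q).
%   First-order condition:  B'(q_p) = C'(q_p)
% Social optimum: maximize B(q) - C(q) - E(q).
%   First-order condition:  B'(q_s) = C'(q_s) + E'(q_s)
\[
  B'(q_p) = C'(q_p)
  \qquad\text{versus}\qquad
  B'(q_s) = C'(q_s) + E'(q_s).
\]
% Because E'(q) > 0 for a negative externality, the social optimum demands a
% higher marginal benefit, which (given diminishing marginal benefit) means
% q_s < q_p: the privately chosen level of the activity is too high.  Swap the
% external cost for an external benefit and the inequality flips (q_s > q_p):
% too little of the activity gets done.
```

Read this way, remedies for cost spillovers can be understood as different ways of confronting the actor with some portion of E(q).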

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The Internet isn’t that exceptional.

Read the whole thing!

The American Bar Association Antitrust Section’s Presidential Transition Report (“Report”), released on January 24, provides a helpful practitioners’ perspective on the state of federal antitrust and consumer protection enforcement, and propounds a variety of useful recommendations for marginal improvements in agency practices, particularly with respect to improving enforcement transparency and reducing enforcement-related costs.  It also makes several good observations on the interplay of antitrust and regulation, and commendably notes the importance of promoting U.S. leadership in international antitrust policy.  This is all well and good.  Nevertheless, the Report’s discussion of various substantive topics raises a number of concerns, summarized below, that seriously detract from its utility.  Accordingly, I recommend that the new Administration accord respectful attention to the Report’s discussion of process improvements and international developments, but ignore the Report’s discussion of novel substantive antitrust theories, vertical restraints, and intellectual property.

1.  The Big Picture: Too Much Attention Paid to Antitrust “Possibility Theorems”

In discussing substance, the Report trots out all the theoretical stories of possible anticompetitive harm raised over the last decade or so, such as “product hopping” (“minor” pharmaceutical improvements based on new patents that are portrayed as exclusionary devices), “contracts that reference rivals” (discount schemes that purportedly harm competition by limiting sourcing from a supplier’s rivals), “hold-ups” by patentees (demands by patentees for “overly high” royalties on their legitimate property rights), and so forth.  What the Report ignores are the costs that these new theories impose on the competitive system, and, in particular, on incentives to innovate.  These new theories often are directed at novel business practices that may have the potential to confer substantial efficiency benefits – including enhanced innovation and economic growth – on the American economy.  Unproven theories of harm may disincentivize such practices and impose a hidden drag on the economy.  (One is reminded of Nobel Laureate Ronald Coase’s lament (see here) that “[i]f an economist finds something . . . that he does not understand, he looks for a monopoly explanation. And as in this field we are rather ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on monopoly explanations frequent.”)  Although the Report generally avoids taking a position on these novel theories, the lip service it pays them implicitly encourages federal antitrust agency investigations designed to deploy these shiny new antitrust toys.  This in turn leads to a misallocation of resources (unequivocally harmful activity, especially hard-core cartel conduct, merits the highest priority) and generates potentially high error and administrative costs, at odds with a sensible decision-theoretic approach to antitrust administration (see here and here).  In sum, the Trump Administration should pay no attention to the Report’s commentary on new substantive antitrust theories.

2.  Vertical Contractual Restraints

The Report inappropriately (and, in my view, amazingly) suggests that antitrust enforcers should give serious attention to vertical contractual restraints:

Recognizing that the current state of RPM [resale price maintenance] law in both minimum and maximum price contexts requires sophisticated balancing of pro- and anti-competitive tendencies, the dearth of guidance from the Agencies in the form of either guidelines or litigated cases leaves open important questions in an area of law that can have a direct and substantial impact on consumers. For example, it would be beneficial for the Agencies to provide guidance on how they think about balancing asserted quality and service benefits that can flow from maintaining minimum prices for certain types of products against the potential that RPM reduces competition to the detriment of consumers. Perhaps equally important, the Agencies should provide guidance on how they would analyze the vigor of interbrand competition in markets where some producers have restricted intrabrand competition among distributors of their products.

The U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) largely have avoided bringing pure contractual vertical restraints cases in recent decades, and for good reason.  Although vertical restraints theoretically might be used to facilitate horizontal collusion (say, to enforce a distributors’ cartel) or anticompetitive exclusion (say, to enable a dominant manufacturer to deny rivals access to efficient distribution), such cases appear exceedingly rare.  Real-world empirical research suggests vertical restraints generally are procompetitive (see, for example, here).  What’s more, a robust theoretical literature supports efficiency-based explanations for vertical restraints (see, for example, here), as recognized by the U.S. Supreme Court in its 2007 Leegin decision.  An aggressive approach to vertical restraints enforcement would ignore this economic learning, likely yield high error costs, and dissuade businesses from considering efficient vertical contracts, to the detriment of social welfare.  Moreover, antitrust prosecutorial resources are limited, and optimal policy indicates they should be directed to the most serious competitive problems.  The Report’s references to “open important questions” and the need for “guidance” on vertical restraints appear oblivious to these realities.  Furthermore, the Report’s mention of “balancing” interbrand versus intrabrand effects reflects a legalistic approach to vertical contracts that is at odds with modern economic analysis.

In short, the Report’s discussion of vertical restraints should be accorded no weight by new enforcers, and antitrust prosecutors would be well advised not to include vertical restraints investigations on their list of priorities.

3.  IP Issues

The Report recommends that the DOJ and FTC (“Agencies”) devote substantial attention to issues related to the unilateral exercise of patent rights, “holdup” and “holdout”:

We . . . recommend that the Agencies gather reliable and credible information on—and propose a framework for evaluating—holdup and holdout, and the circumstances in which either may be anticompetitive. The Agencies are particularly well-suited to gather evidence and assess competitive implications of such practices, which could then inform policymaking, advocacy, and potential cases. The Agencies’ perspectives could contribute valuable insights to the larger antitrust community.

Gathering information with an eye to bringing potential antitrust cases involving the unilateral exercise of patent rights through straightforward patent licensing involves a misapplication of resources.  As Professor Josh Wright and Judge Douglas Ginsburg, among others, have pointed out, antitrust is not well-suited to dealing with disputes between patentees and licensees over licensing rates – private law remedies are best designed to handle such contractual controversies (see, for example, here).  Furthermore, using antitrust law to depress returns to unilateral patent licenses threatens to reduce dynamic efficiency and create disincentives for innovation (see FTC Commissioner (and currently Acting Chairman) Maureen Ohlhausen’s thoughtful article, here).  The Report regrettably ignores this important research.  The Report instead should have called upon the FTC and DOJ to drop their ill-conceived recent emphasis on unilateral patent exploitation, and to focus instead on problems of collusion among holders of competing patented technologies.

That is not all.  The Report’s “suggest[ion] that the [federal antitrust] Agencies consider offering guidance to the ITC [International Trade Commission] about potential SEP [standard-essential patent] holdup and holdout” is a recipe for weakening legitimate U.S. patent rights that are threatened by foreign infringers.  American patentees already face challenges from over a decade’s worth of Supreme Court decisions that have constrained the value of their holdings.  As I have explained elsewhere, efforts to limit the ability of the ITC to issue exclusion orders in the face of infringement overseas further diminish the value of American patents and disincentivize innovation (see here).  What’s worse, the Report is not only oblivious to this reality, it goes out of its way to “put a heavy thumb on the scale” in favor of patent infringers, stating (footnote omitted):

If the ITC were to issue exclusion orders to SEP owners under circumstances in which injunctions would not be appropriate under the [Supreme Court’s] eBay standard [for patent litigation], the inconsistency could induce SEP owners to strategically use the ITC in an effort to achieve settlements of patent disputes on terms that might require payment of supracompetitive royalties.  Though it is not [clear] how likely this is or whether the risk has led to supracompetitive prices in the past, this dynamic could lead to holdup by SEP owners and unconscionably higher royalties.

This commentary on the possibility of “unconscionable” royalties reads like a press release authored by patent infringers.  In fact, there is a dearth of evidence of holdup, let alone holdup-related “unconscionable” royalties.  Moreover, it is most decidedly not the role of antitrust enforcers to rule on the “unconscionability” of the unilateral pricing decision of a patent holder (apparently the Report writers forgot to consult Justice Scalia’s Trinko opinion, which emphasizes the right of a monopolist to charge a monopoly price).  Furthermore, not only is this discussion wrong-headed, it flies in the face of concerns expressed elsewhere in the Report regarding ill-advised mandates imposed by foreign antitrust enforcement authorities.  (Recently, certain foreign enforcers have shown themselves all too willing to entertain claims of “excessive” patent royalties in cases involving American companies.)

Finally, other IP-related references in the Report similarly show a lack of regulatory humility.  By invoking theoretical harms from the disaggregation of complementary patents and from “product hopping” (see above), among other novel practices, the Report implicitly encourages the FTC and DOJ (not to mention private parties) to consider bringing cases based on expansive theories of liability, without regard to the costs of the antitrust system as a whole (including the chilling of innovative business activity).  Such cases might benefit the antitrust bar, but prioritizing them would be at odds with the key policy objective of antitrust, the promotion of consumer welfare.

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

The Federal Trade Commission’s (FTC) regrettable January 17 filing of a federal court injunctive action against Qualcomm, in the waning days of the Obama Administration, is a blow to its institutional integrity and well-earned reputation as a top-notch competition agency.

Stripping away the semantic gloss, the heart of the FTC’s complaint is that Qualcomm is charging smartphone makers “too much” for licenses needed to practice standardized cellular communications technologies – technologies that Qualcomm developed. This complaint flies in the face of the Supreme Court’s teaching in Verizon v. Trinko that a monopolist has every right to charge monopoly prices and thereby enjoy the full fruits of its legitimately obtained monopoly. But the Qualcomm action is more than just one exceptionally ill-advised example of prosecutorial overreach that (hopefully) will fail and end up on the scrapheap of unsound federal antitrust initiatives. The Qualcomm complaint undoubtedly will be cited by aggressive foreign competition authorities as showing that American antitrust enforcement now recognizes mere “excessive pricing” as a form of “monopoly abuse” – thereby justifying the “excessive pricing” cases that are growing like Topsy abroad, especially in East Asia.

Particularly unfortunate is the fact that the Commission chose to authorize the filing by a 2-1 vote, over Commissioner Maureen Ohlhausen’s pithy dissent – itself a rarity in cases involving the filing of federal lawsuits.  Commissioner Ohlhausen’s analysis skewers the legal and economic basis for the FTC’s complaint, and her summary, which includes an outstanding statement of basic antitrust enforcement principles, is well worth noting (footnote omitted):

My practice is not to write dissenting statements when the Commission, against my vote, authorizes litigation. That policy reflects several principles. It preserves the integrity of the agency’s mission, recognizes that reasonable minds can differ, and supports the FTC’s staff, who litigate demanding cases for consumers’ benefit. On the rare occasion when I do write, it has been to avoid implying that I disagree with the complaint’s theory of liability.

I do not depart from that policy lightly. Yet, in the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections.

Let us hope that President Trump makes it an early and high priority to name Commissioner Ohlhausen Acting Chairman of the FTC. The FTC simply cannot afford any more embarrassing and ill-reasoned antitrust initiatives that undermine basic principles of American antitrust enforcement and may be used by foreign competition authorities to justify unwarranted actions against American firms. Maureen Ohlhausen can be counted upon to provide needed leadership in moving the Commission in a sounder direction.

P.S. I have previously published a commentary at this site regarding an unwarranted competition law Statement of Objections directed at Google by the European Commission, a matter which did not involve patent licensing. And for a more general critique of European competition policy along these lines, see here.

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate the prices of drugs covered under Medicare Part D directly with manufacturers.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budget for fiscal years 2016 and 2017 included a provision that would have allowed HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP [prescription drug plan] sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices, or the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and by channeling enrollees toward formulary drugs through lower cost-sharing. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to cover only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers by threatening to exclude their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high-cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability to not only negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to also establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage,  drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions. Indeed, estimates suggest that if the Medicare Part D formulary were restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, greatly narrowing the net gains.
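
Taking the cited estimates at face value, the arithmetic behind “greatly narrowing the net gains” is simply the difference between the two per-person figures (a back-of-the-envelope comparison, not a full welfare calculation):

```latex
\[
  \text{net gain per enrollee} \;\approx\;
  \underbrace{\$510}_{\text{estimated savings}} \;-\;
  \underbrace{\$405}_{\text{lost plan value}} \;=\; \$105.
\]
```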

History has shown that consumers don’t like having their access to drugs reduced.  In 2014, Medicare proposed to take antidepressant, antipsychotic, and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.

During 2016 it became fashionable in certain circles to decry “lax” merger enforcement and to call for a more aggressive merger enforcement policy (see, for example, the American Antitrust Institute’s September 2016 paper on competition policy, critiqued by me in this blog post).  Interventionists promoting “tougher” merger enforcement have cited Professor John Kwoka’s 2015 book, Mergers, Merger Control, and Remedies, in support of the proposition that U.S. antitrust enforcers have been “excessively tolerant” in analyzing proposed mergers.

In that regard, a recent paper by two outstanding FTC economists (Michael Vita and David Osinski) is well worth noting.  It makes a strong (and, in my view, persuasive) case that Kwoka’s research is fatally flawed.  The following excerpt, drawn from the introduction and conclusion of the paper (Mergers, Merger Control, and Remedies:  A Critical Review), merits close attention:

John Kwoka’s recently published Mergers, Merger Control, and Remedies (2015) has received considerable attention from both antitrust practitioners and academics. The book features a meta-analysis of retrospective studies of consummated mergers, joint ventures, and other horizontal arrangements. Based on summary statistics derived from these studies, Kwoka concludes that domestic antitrust agencies are excessively tolerant in their merger enforcement; that merger remedies are ineffective at mitigating market power; and that merger enforcement has become increasingly lax over time. We review both his evidence and his empirical methods, and conclude that serious deficiencies in both undermine the basis for these conclusions. . . .

We sympathize with the goal of using retrospective analyses to assess the performance of the antitrust agencies and to identify possible improvements. Unfortunately, Kwoka has drawn inferences and reached conclusions about contemporary merger enforcement policy that are unjustified by his data and his methods. His critique of negotiated remedies in merger cases relies on a small number of transactions; a close reading reveals that a number of them are silent on the effectiveness of the associated remedies. His data sample lacks diversity, relying heavily on a small number of studies conducted on a small and unrepresentative set of industries. His statistical methodology departs from well-established techniques for conducting meta-analyses, making it impossible for readers to assess the strength of his evidence using standard statistical tools. His conclusions about the growing permissiveness of enforcement policies lack substantiation. Overall, we are unpersuaded that his evidence can support such broad and general policy conclusions.

Hopefully, the new leadership at the Federal Trade Commission and at the Justice Department’s Antitrust Division will carefully scrutinize this and other recent research on mergers in devising their merger enforcement policy.  Additional research on the effects of mergers, including an evaluation of their static and dynamic efficiencies, is highly warranted.  Enforcers should not lose sight of the fact that disincentivizing efficient mergers could undermine a vibrant market for corporate control in general, as well as preclude the net creation of economic surplus in specific cases.

During a presidential transition, it is an old Washington parlor game to discuss public policy tweaks and personnel changes, with speculation often focusing on former political appointees who are linked to the new President.  But with the election of Donald Trump, who has not previously served in government, many pundits’ crystal balls may be a bit cloudier than normal.  Well, help is on the way – at least for antitrust policy mavens.

On January 24, the Heritage Foundation will bring together an all-star cast of current and former top government officials to try to burn away the mists of uncertainty as it hosts its third annual antitrust policy conference (moderated by me).  The cast, which includes former antitrust chiefs at the Justice Department and Federal Trade Commission and a current FTC Commissioner, will turn its attention to both domestic and international antitrust matters.  Antitrust is now a matter of global economic policy concern, and the Trump Administration’s reaction to antitrust developments around the world (including concerns about due process and industrial policy abuses overseas) may prove particularly important for American firms and the U.S. economy.

All antitrust fans are urged to attend the conference, which will be held at Heritage’s Lehrman Auditorium from 10 a.m. to 4:30 p.m. on the 24th.  You can register online to attend in person, or follow the conference’s webcast at Heritage.org.

I hope to see you there!

Yesterday the Chairman and Ranking Member of the House Judiciary Committee issued the first set of policy proposals following their long-running copyright review process. These proposals were principally aimed at ensuring that the IT demands of the Copyright Office were properly met so that it could perform its assigned functions, and to provide adequate authority for it to adapt its policies and practices to the evolving needs of the digital age.

In response to these modest proposals, Public Knowledge issued a telling statement, calling for enhanced scrutiny of these proposals related to an agency “with a documented history of regulatory capture.”

The entirety of this “documented history,” however, is a paper published by Public Knowledge itself alleging regulatory capture—as evidenced by the fact that 13 people had either gone from the Copyright Office to copyright industries or vice versa over the past 20+ years. The original document was brilliantly skewered by David Newhoff in a post on the indispensable blog, Illusion of More:

To support its premise, Public Knowledge, with McCarthy-like righteousness, presents a list—a table of thirteen former or current employees of the Copyright Office who either have worked for private-sector, rights-holding organizations prior to working at the Office or who are now working for these private entities after their terms at the Office. That thirteen copyright attorneys over a 22-year period might be employed in some capacity for copyright owners is a rather unremarkable observation, but PK seems to think it’s a smoking gun…. Or, as one of the named thirteen, Steven Tepp, observes in his response, PK also didn’t bother to list the many other Copyright Office employees who “went to Internet and tech companies, the Smithsonian, the FCC, and other places that no one would mistake for copyright industries.” One might almost get the idea that experienced copyright attorneys pursue various career paths or something.

Not content to rest on the laurels of its groundbreaking report of Original Sin, Public Knowledge has now doubled down on its audacity, using its own previous advocacy as the sole basis to essentially impugn an entire agency, without more. But, as advocacy goes, that’s pretty specious. Some will argue that there is an element of disingenuousness in all advocacy, even if it is as benign as failing to identify the weaknesses of one’s arguments—and perhaps that’s true. (We all cite our own work at one time or another, don’t we?) But that’s not the situation we have before us. Instead, Public Knowledge creates its own echo chamber, effectively citing only its own idiosyncratic policy preferences as the “documented” basis for new constraints on the Copyright Office. Even in a world of moral relativism, bubbles of information, and competing narratives about the truth, this should be recognizable as thin gruel.

So why would Public Knowledge expose itself in this manner? What is to be gained by seeking to impugn the integrity of the Copyright Office? There the answer is relatively transparent: PK hopes to capitalize on the opportunity to itself capture Copyright Office policy-making by limiting the discretion of the Copyright Office, and by turning it into an “objective referee” rather than the nation’s steward for ensuring the proper functioning of the copyright system.

PK claims that the Copyright Office should not be involved in making copyright policy, other than perhaps technically transcribing the agreements reached by other parties. Thus, in its “indictment” of the Copyright Office (which it now risibly refers to as the Copyright Office’s “documented history of capture”), PK wrote:

These statements reflect the many specific examples, detailed in Section II, in which the Copyright Office has acted more as an advocate for rightsholder interests than an objective referee of copyright debates.

Essentially, PK seems to believe that copyright policy should be the province of self-proclaimed “consumer advocates” like PK itself—and under no circumstances the employees of the Copyright Office who might actually deign to promote the interests of the creative community. After all, it is staffed by a veritable cornucopia of copyright industry shills: According to PK’s report, roughly one of its 400 employees has either left the Office to work in the copyright industry or joined the Office from industry every year and a half or so over the past two decades! For reference (not that PK thinks to mention it), some 325 Google employees have worked in government offices in just the past 15 years. And Google is hardly alone in this. Good people get good jobs, whether in government, industry, or both. It’s hardly revelatory.
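
For what it is worth, the arithmetic behind that jab uses only the figures already cited in this post and in PK’s own report (13 moves over a window of twenty-odd years, and a staff of roughly 400):

```latex
\[
  \frac{20\ \text{years}}{13\ \text{moves}} \;\approx\; 1.5\ \text{years per move,}
  \qquad\text{i.e., about one move every year and a half, out of roughly } 400 \text{ employees.}
\]
```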

And never mind that the stated mission of the Copyright Office “is to promote creativity by administering and sustaining an effective national copyright system,” and that “the purpose of the copyright system has always been to promote creativity in society.” And never mind that Congress imbued the Office with the authority to make regulations (subject to approval by the Librarian of Congress) and directed the Copyright Office to engage in a number of policy-related functions, including:

  1. Advising Congress on national and international issues relating to copyright;
  2. Providing information and assistance to Federal departments and agencies and the Judiciary on national and international issues relating to copyright;
  3. Participating in meetings of international intergovernmental organizations and meetings with foreign government officials relating to copyright; and
  4. Conducting studies and programs regarding copyright.

No, according to Public Knowledge the Copyright Office is to do none of these things, unless it does so as an “objective referee of copyright debates.” But nowhere in the legislation creating the Office or amending its functions—nor anywhere else—is that limitation to be found; it’s just created out of whole cloth by PK.

The Copyright Office’s mission is not that of a content-neutral referee. Rather, the Copyright Office is charged with promoting effective copyright protection. PK is welcome to solicit Congress to change the Copyright Act and the Office’s mandate. But impugning the agency for doing what it’s supposed to do is a deceptive way of going about it. PK effectively indicts and then convicts the Copyright Office for following its mission appropriately, suggesting that doing so could only have been the result of undue influence from copyright owners. But that suggestion is manifestly false, given the Office’s purpose.

And make no mistake about why: For its narrative to work, PK needs to define the Copyright Office as a neutral party, and show that its neutrality has been unduly compromised. Only then can Public Knowledge justify overhauling the Office in its own image, under the guise of magnanimously returning it to its “proper,” neutral role.

Public Knowledge’s implication that it is a better defender of the “public” interest than those who actually serve in the public sector is a subterfuge, masking its real objective of transforming the nature of copyright law in its own, benighted image. A questionable means to a noble end, PK might argue. Not in our book. This story always turns out badly.