1. Background: The Murr v. Wisconsin Case

On June 23, in a 5-3 decision written by Justice Anthony Kennedy (joined by Justices Ruth Bader Ginsburg, Stephen Breyer, Sonia Sotomayor, and Elena Kagan; Justice Neil Gorsuch did not participate), the U.S. Supreme Court upheld the Wisconsin Court of Appeals’ ruling that two waterfront lots should be treated as a single unit in a “regulatory takings” case.  The Murrs are siblings who inherited two adjacent waterfront properties from their parents, and they wanted to sell one of the lots and develop the other.  Unfortunately for the Murrs, the lots had been merged under local zoning regulations, and the local county board of adjustment denied the Murrs’ request for a zoning variance to allow their plan to proceed.

The Murrs challenged this in state court, arguing that the state had effectively taken their second property by depriving them of practically all use without paying just compensation, as required by the Takings Clause of the Fifth Amendment.  Affirming a lower state court, the Wisconsin Court of Appeals held that the takings analysis properly focused on the two lots together and that, under that framework, the merger regulations did not effectuate a taking.

The U.S. Supreme Court granted the Murrs’ petition for a writ of certiorari.  The Supreme Court found that in determining the relevant unit of property, courts must ask whether the owner would have a reasonable expectation that the property would be treated as a single unit or as separate units.  The Court held that in regulatory takings assessments courts must give substantial weight to how state and local law treat the property, evaluate the property’s physical characteristics, and assess the property’s value under the challenged regulation.  The majority concluded that with regard to the Murrs’ property, there was a valid merger under state law, the terrain and shape of the lots made it clear that the merged lot’s use might be limited, and the second lot brought prospective value to the first.  Thus, the lots should be treated as one parcel, and the Murrs did not suffer a compensable taking, since they were not deprived of all economically beneficial use of the property.

Chief Justice John Roberts dissented (joined by Justices Clarence Thomas and Samuel Alito), noting that the Takings Clause protects private property rights “as state law creates and defines them” and that the majority’s “malleable definition of ‘private property’…undermines that protection.”  Thus, “[s]tate law defines the boundaries of distinct parcels of land, and those boundaries should determine the ‘private property’ at issue in regulatory takings cases.  Whether a regulation effects a taking of that property is a separate question, one in which common ownership of adjacent property may be taken into account.”

The always thoughtful Justice Thomas penned a separate dissent, suggesting that the Court should reconsider its regulatory takings jurisprudence to see “whether it can be grounded in the original public meaning” of the relevant constitutional provisions.

  2. The Supreme Court Should Reject the Confusing Dichotomy Between Physical and Regulatory Takings and Apply a Simpler Uniform Standard, One that Better Protects the Property Interests Safeguarded by the Fifth Amendment’s Takings Clause

Unfortunately, far from clarifying regulatory takings analysis, the Murr decision further muddies the doctrinal waters in this area.  Justice Kennedy’s majority opinion creates a new, inherently ambiguous balancing test that gives localities substantial leeway to adjust regulatory demarcations and property line divisions without paying compensation to harmed property owners.

Although the three-Justice dissent sets forth a more full-throated paean to property rights, it does little to clarify how to determine when a regulatory taking occurs.  Instead, it approvingly cites prior, less-than-helpful Supreme Court pronouncements on the topic:

Governments can infringe private property interests for public use not only through [direct] appropriations, but through regulations as well. . . .  Our regulatory takings decisions . . . have recognized that, “while property may be regulated to a certain extent, if regulation goes too far it will be recognized as a taking.”  This rule strikes a balance between property owners’ rights and the government’s authority to advance the common good. Owners can rest assured that they will be compensated for particularly onerous regulatory actions, while governments maintain the freedom to adjust the benefits and burdens of property ownership without incurring crippling costs from each alteration. . . .  For the vast array of regulations that [do not deny all economically beneficial or productive use of land and thus automatically constitute a taking,] . . . a flexible approach is more fitting.  The factors to consider are wide ranging, and include the economic impact of the regulation, the owner’s investment-backed expectations, and the character of the government action.  The ultimate question is whether the government’s imposition on a property has forced the owner “to bear public burdens which, in all fairness and justice, should be borne by the public as a whole.”

Such a weighing of “wide-ranging factors” to determine whether or not a taking has occurred is inherently subjective and prone to manipulation by local authorities.  It enables them to marshal a list of Court-approved phrases to explain why a regulation does not go “too far” and take property – even though it may substantially destroy property value.

What is missing from the opinions in Murr is the recognition that any substantial net reduction in the value of a piece of property (subdivided or not) takes a certain property interest.  It is black letter law that there is not a single undivided property right inhering in an item of property, but, rather, multiple property interests – a “bundle of sticks” – that can be taken in whole or in part.  Under current Supreme Court jurisprudence, if the government directly seizes (or physically occupies) a particular stick, compensation is owed for the reduction in overall property value stemming from that stick’s loss.  This is the case of a physical “per se” taking.  But if the government instead enacts a rule preventing that stick from being sold or embellished by the bundle’s owner (think of the Murrs’ plan to sell one plot and develop the other), the owner likewise suffers a similar reduction in overall property value due to the restrictions on the stick.  Under existing Supreme Court case law, however, the loss in value in the second case, unlike the first, may well not be compensable, because the owner has not been deprived “of all beneficial use” of the overall property.  Supreme Court case law indicates that a taking may exist in the second case, depending upon the regulation’s economic impact, its interference with investment-backed expectations, and the character of the government action.  As a practical matter, this infelicitous, indeterminate balancing test very seldom results in a taking being found.  As a result, government is incentivized to invade property rights through regulations, rather than physical appropriations, thereby undermining the Takings Clause’s requirement that “private property [not] be taken for public use, without just compensation.”
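To make the point concrete, consider a purely hypothetical illustration (the numbers are invented for exposition and are not drawn from the Murr record).  Suppose a bundle of property rights is worth $500,000 when the owner may sell one lot and develop the other, but only $400,000 once a merger regulation forbids that plan.  Had the government physically seized a strip of the land and thereby caused the same $100,000 loss, compensation would plainly be owed.  Under current regulatory takings doctrine, however, the identical $100,000 loss may go uncompensated, because the owner retains some economically beneficial use of the whole.  The economic loss is the same in both cases; only the government’s chosen instrument differs.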

There is a far better way to deal with the problem of government regulatory intrusions on private property rights, one that recognizes that regulatory deprivation of any stick in the bundle should be compensable.  Professor Richard Epstein, a distinguished property law scholar, points the way in his very recent article, posted at the NYU Journal of Law and Liberty blog 18 days before Murr was handed down.  While Professor Epstein’s brilliant essay merits a close read, his key points are as follows:

I have used the occasion of yet another takings case before the Supreme Court, Murr v. Wisconsin, to comment on the structure of the takings law as it is, and as it ought to be.  On the former count, it is quite clear that the entire structure of the modern law of physical and regulatory takings tends to fixate on the ratio of the value of property rights taken to the value of the full bundle of rights before the regulation was put into place.  But there is no explanation as to why this ratio has any significance in light of the standard rule in physical-takings cases that the fair market value of the rights taken affords the correct measure of compensation so long as the taking is for a public use when no police-power justification is available.  Within this peculiar framework, it is a mistake to make the right of compensation for the loss of development rights under the Wisconsin ordinance turn on the technicalities of the chain of title to a particular plot.  This seems a uniquely inappropriate reason to deny compensation for the loss of development rights.

Any analysis of Murr is inherently messy, and it leaves open the endless challenge of reconciling this case with a wide range of other cases that cannot decide whether two contiguous parcels held by different titles can be a collective denominator in takings cases.  [But] . . . the muddle and confusion of the current law is largely obviated by the simple proposition that, prima facie, the more the government takes, the more it pays.  That rule applies to the outright taking of any given parcel of land or to the taking of a divided interest in property. In all of these cases, the shifts in what is taken do not create odd and indefensible discontinuities, but only raise valuation questions as to the size of the loss, taking into account any return benefits that a property owner may receive when the taking is part of some comprehensive scheme. But those issues are routinely encountered in all physical-takings cases. In all instances, police-power justifications, tied closely to the law of nuisance, may be invoked, and in cases of comprehensive regulation, courts must be alert to determine whether the scheme that takes rights away also affords compensation in-kind from the parallel restrictions on others in the scheme. Under this view, the full range of divided interests, be they air rights, mineral rights, liens, covenants, or easements, are fully compensable. The untenable discontinuities under current doctrine disappear.

Let us hope that in the future, the Supreme Court will take to heart Justice Thomas’s recommendation that the Court return to first principles, and, in so doing, seriously consider the economically and jurisprudentially sophisticated analysis adumbrated in Professor Epstein’s inspired essay.                  

  1. Background

On June 19, in Matal v. Tam, the U.S. Supreme Court (Justice Gorsuch did not participate in the case) affirmed the Federal Circuit’s ruling that the Lanham Act’s “disparagement clause” is unconstitutional under the First Amendment’s free speech clause.  The Patent and Trademark Office had denied federal trademark registration to the Slants, an Asian-American rock group, relying on the Lanham Act’s prohibition on trademarks “which may disparage . . . persons, living or dead, institutions, beliefs, or national symbols, or bring them into contempt, or disrepute.”  The Court held that trademarks are not government speech, pointing out that the government “does not dream up these marks.”  With the exception of marks scrutinized under the disparagement clause, trademarks are not reviewed for compliance with government policies.  Writing for the Court, Justice Samuel Alito (joined by Chief Justice John Roberts, Justice Clarence Thomas, and Justice Stephen Breyer) found unpersuasive the government’s argument that trademarks are analogous to subsidized speech.  The Alito opinion also determined that it is unnecessary to decide whether trademarks are commercial speech (subject to lesser scrutiny), because the disparagement clause cannot survive the Supreme Court’s test for such speech enunciated in Central Hudson Gas & Electric Corp. v. Public Service Commission (1980).  Justice Anthony Kennedy, joined by Justices Ruth Bader Ginsburg, Sonia Sotomayor, and Elena Kagan, concurred in the judgment.  The Kennedy opinion agreed that the disparagement clause constitutes viewpoint discrimination because it reflects the government’s disapproval of certain speech, and that heightened scrutiny should apply whether or not trademarks are commercial speech.

The Tam decision continues the trend of Supreme Court cases extending First Amendment protection for offensive speech.  Perhaps less likely to be noted, however, is that this decision also promotes free market principles by enhancing the effectiveness of legal protection for a key intellectual property right.  To understand this point, a brief primer on the law and economics of federal trademark protection is in order.

  2. The Law and Economics of Federal Trademark Protection in a Nutshell

A trademark (called a service mark in the case of a service) is an intellectual property right that identifies the source of a particular producer’s goods or services.  Trademarks reduce transaction costs by making it easier for consumers to identify and patronize particular goods and services whose attributes they associate with a mark.  This enhances market efficiency by lowering information costs in the market and by encouraging competing firms to develop unique attributes that they can signal to consumers.

By robustly protecting federally registered trademarks, the federal Lanham Act (see here for Lanham Act trademark infringement remedies) creates strong incentives for each trademark holder to invest in (and promote through advertising and other means) the quality of the trademarked goods or services it produces.  Strong trademark remedies are key because they assure trademark holders that their individual property rights will be protected.  As one scholar puts it, “[i]t is generally accepted that [federal trademark] infringement actions protect both the goodwill of mark owners and competition by preventing confusion.”

Shielded by firm legal protection, the trademark holder will tend not to allow the quality of its trademark-protected offerings to slip, knowing that consumers will quickly and easily associate the reduced quality with its mark and stop patronizing the trademarked product or service.  Absent strong trademark protection, however, producers of competing products and services will be tempted to “free ride” by using a competing business’s registered trademark without authorization.  This sharply reduces the original trademark owner’s incentive to invest in and continue to promote quality, because it knows that free riders will seek to attract customers by using the trademark to sell less costly, lower-quality fare.  Quality suffers overall, to the detriment of consumers.  Allowing free riding on distinctive trademarks also (and relatedly) sows confusion as to the identity of sellers and the attributes covered by a particular trademark, weakening the trademark system’s role as a source identifier and as a spur to attribute-based competition.

In short, federal trademark protection, embodied in the Lanham Act, enhances free market competitive processes by protecting a trademark’s role in identifying suppliers (reducing transaction costs); incentivizing investment in the enhancement and preservation of product quality; and spurring attribute-based competition.

  3. The Demise of the Lanham Act’s Disparagement Clause Enhances Trademark Rights and Promotes Free Market Principles

The disparagement clause denied federal legal protection to a broad class of trademarks, based merely on the highly subjective determination by federal bureaucrats that the marks in question “disparaged” particular individuals or institutions.  This denial undermined private parties’ incentives to invest in “disparaging” marks, and to compete vigorously by signaling to consumers the existence of novel products and services that they might find appealing.

By “constitutionally expunging” the disparagement clause, the Supreme Court in Tam has opened the gateway to more robust competition by spurring the vigorous investment in and promotion of a larger number of marks.  Consumers in the marketplace, not bureaucrats, will decide whether the products or services identified by particular marks are “problematic” and therefore not worthy of patronage.  In other words, by enhancing legal protection for a wider variety of trademarks, the Tam decision has paved the way for the expansion of mutually-beneficial marketplace transactions, to the benefit of consumers and producers alike.

To conclude, in promoting First Amendment free speech interests, the Tam Court also gave a shot in the arm to welfare-enhancing competition in markets for goods and services.  It turns out that competition in the marketplace of ideas goes hand-in-hand with competition in the commercial marketplace.

Too much ink has been spilled in attempts to gin up antitrust controversies over efforts by holders of “standard-essential patents” (SEPs, patents covering technologies adopted as part of technical standards relied upon by manufacturers) to obtain reasonable returns on their property. Antitrust theories typically revolve around claims that SEP owners engage in monopolistic “hold-up” when they threaten injunctions or seek “excessive” royalties (or other “improperly onerous” terms) from potential licensees in patent licensing negotiations, in violation of pledges (sometimes imposed by standard-setting organizations) to license on “fair, reasonable, and non-discriminatory” (FRAND) terms. As Professors Joshua Wright and Douglas Ginsburg, among others, have explained, contract law, tort law, and patent law are far better placed than antitrust law to handle FRAND-related SEP disputes. Adding antitrust to the litigation mix generates unnecessary costs and inefficiently devalues legitimate private property rights.

Concerns by antitrust mavens that other areas of law are insufficient to cope adequately with SEP-FRAND disputes are misplaced. A fascinating draft law review article by Koren Wong-Ervin, Director of the Scalia Law School’s Global Antitrust Institute, and Anne Layne-Farrar, Vice President of Charles River Associates, does an admirable job of summarizing key decisions by U.S. and foreign courts involved in determining FRAND rates in SEP litigation, and in highlighting key economic concepts underlying these holdings. As explained in the article’s abstract:

In the last several years, courts around the world, including in China, the European Union, India, and the United States, have ruled on appropriate methodologies for calculating either a reasonable royalty rate or reasonable royalty damages on standard-essential patents (SEPs) upon which a patent holder has made an assurance to license on fair, reasonable and nondiscriminatory (FRAND) terms. Included in these decisions are determinations about patent holdup, licensee holdout, the seeking of injunctive relief, royalty stacking, the incremental value rule, reliance on comparable licenses, the appropriate revenue base for royalty calculations, and the use of worldwide portfolio licensing. This article provides an economic and comparative analysis of the case law to date, including the landmark 2013 FRAND-royalty determination issued by the Shenzhen Intermediate People’s Court (and affirmed by the Guangdong Province High People’s Court) in Huawei v. InterDigital; numerous U.S. district court decisions; recent seminal decisions from the United States Court of Appeals for the Federal Circuit in Ericsson v. D-Link and CSIRO v. Cisco; the six recent decisions involving Ericsson issued by the Delhi High Court; the European Court of Justice decision in Huawei v. ZTE; and numerous post-Huawei v. ZTE decisions by European Union member states. While this article focuses on court decisions, discussions of the various agency decisions from around the world are also included throughout.

To whet the reader’s appetite, key economic policy and factual “takeaways” from the article, which are reflected implicitly in a variety of U.S. and foreign judicial holdings, are as follows:

  • Holdup of any form requires lock-in, i.e., standard-implementing companies with asset-specific investments locked in to the technologies defining the standard or SEP holders locked in to licensing in the context of a standard because of standard-specific research and development (R&D) leading to standard-specific patented technologies.
  • Lock-in is a necessary condition for holdup, but it is not sufficient. For holdup in any guise to actually occur, there also must be an exploitative action taken by the relevant party once lock-in has happened. As a result, the mere fact that a license agreement was signed after a patent was included in a standard is not enough to establish that the patent holder is practicing holdup—there must also be evidence that the SEP holder took advantage of the licensee’s lock-in, for example by charging supra-FRAND royalties that it could not otherwise have charged but for the lock-in.
  • Despite coming after a particular standard is published, the vast majority of SEP licenses are concluded in arm’s length, bilateral negotiations with no allegations of holdup or opportunistic behavior. This follows because market mechanisms impose a number of constraints that militate against acting on the opportunity for holdup.
  • In order to support holdup claims, an expert must establish that the terms and conditions in an SEP licensing agreement generate payments that exceed the value conveyed by the patented technology to the licensor that signed the agreement.
  • The threat of seeking injunctive relief, on its own, cannot lead to holdup unless that threat is both credible and actionable. Indeed, the in terrorem effect of filing for an injunction depends on the likelihood of its being granted. Empirical evidence shows a significant decline in the number of injunctions sought as well as in the actual rate of injunctions granted in the United States following the Supreme Court’s 2006 decision in eBay v. MercExchange LLC, which ended the prior nearly automatic granting of injunctions to patentees and instead required courts to apply a traditional four-part equitable test for granting injunctive relief.
  • The Federal Circuit has recognized that an SEP holder’s ability to seek injunctive relief is an important safeguard to help prevent potential licensee holdout, whereby an SEP infringer unilaterally refuses a FRAND royalty or unreasonably delays negotiations to the same effect.
  • Related to the previous point, seeking an injunction against a licensee who is delaying or not negotiating in good faith need not actually result in an injunction. The fact that a court finds a licensee is holding out and/or not engaging in good faith licensing discussions can be enough to spur a license agreement as opposed to a permanent injunction.
  • FRAND rates should reflect the value of the SEPs at issue, so it makes no economic sense to estimate an aggregate rate for a standard by assuming that all SEP holders would charge the same rate as the one being challenged in the current lawsuit (a hypothetical illustration follows this list).
  • Moreover, as the U.S. Court of Appeals for the Federal Circuit has held, allegations of “royalty stacking” – the allegedly “excessive” aggregate burden of high licensing fees stemming from multiple patents that cover a single product – should be backed by case-specific evidence.
  • Most importantly, when a judicial FRAND assessment is focused on the value that the SEP portfolio at issue has contributed to the standard and products embodying the standard, the resulting rates and terms will necessarily avoid both patent holdup and royalty stacking.
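To illustrate the aggregate-rate point with purely hypothetical numbers (invented for exposition, not drawn from the article or any case): suppose a standard reads on three SEP portfolios contributing roughly 60, 30, and 10 percent of the standard’s patented value, with value-proportional FRAND rates of 3%, 1.5%, and 0.5% of the relevant royalty base. The true aggregate royalty burden is 5%. Assuming instead that every holder would charge the 3% rate being litigated yields a fictitious 9% “stack,” nearly double the actual burden, which is why courts have demanded case-specific evidence of stacking rather than arithmetic extrapolation from a single challenged rate.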

In sum, the Wong-Ervin and Layne-Farrar article highlights economic insights that are reflected in the sounder judicial opinions dealing with the determination of FRAND royalties.  The article points the way toward methodologies that provide SEP holders sufficient returns on their intellectual property to reward innovation and maintain incentives to invest in technologies that enhance the value of standards.  Read it and learn.

Today, the Senate Committee on Health, Education, Labor, and Pensions (HELP) enters the drug pricing debate with a hearing on “The Cost of Prescription Drugs: How the Drug Delivery System Affects What Patients Pay.”  By questioning the role of the drug delivery system in pricing, the hearing goes beyond the narrower focus of recent hearings that have explored how drug companies set prices.  Instead, today’s hearing will explore how pharmacy benefit managers, insurers, providers, and others influence the amounts that patients pay.

In 2016, net U.S. drug spending increased by 4.8% to $323 billion (after adjusting for rebates and off-invoice discounts).  That growth rate is roughly half the rates of 2014 and 2015, when net drug spending grew by 10% and 8.9%, respectively.  Yet despite this slowdown in spending growth, the public outcry over the cost of prescription drugs continues.

In today’s hearing, there will be testimony both on the various causes of drug spending increases and on various proposals that could reduce the cost of drugs.  Several of the proposals will focus on ways to increase competition in the pharmaceutical industry and, in turn, reduce drug prices.  I have previously explained several ways that the government could reduce prices through enhanced competition, including reducing the backlog of generic drugs awaiting FDA approval and expediting the approval and acceptance of biosimilars.  Other proposals today will likely call for regulatory reforms to enable innovative contractual arrangements that allow for outcome- or indication-based pricing and other novel reimbursement designs.

However, some proposals will undoubtedly return to the familiar call for more government negotiation of drug prices, especially drugs covered under Medicare Part D.  As I’ve discussed in a previous post, in order for government negotiation to significantly lower drug prices, the government must be able to put pressure on drug makers to secure price concessions. This could be achieved if the government could set prices administratively, penalize manufacturers that don’t offer price reductions, or establish a formulary.  Setting prices or penalizing drug makers that don’t reduce prices would produce the same disastrous effects as price controls: drug shortages in certain markets, increased prices for non-Medicare patients, and reduced incentives for innovation. A government formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would mean that many patients could no longer access some of their optimal drugs.

As lawmakers seriously consider changes that would produce these negative consequences, industry would do well to voluntarily constrain prices.  Indeed, in the last year, many drug makers have pledged to limit price increases to keep drug spending under control.  Allergan was first, with its “social contract” introduced last September that promised to keep price increases below 10 percent. Since then, Novo Nordisk, AbbVie, and Takeda have also voluntarily committed to single-digit price increases.

So far, the evidence shows that drug makers are sticking to their promises. Allergan has raised the price of U.S. branded products by an average of 6.7% in 2017, and no drug’s list price has increased by more than single digits.  In contrast, Pfizer, which has made no pricing commitment, has raised the price of many of its drugs by 20%.

If more drug makers brought about meaningful change by committing to voluntary pricing restraints, the industry could prevent the market-distorting consequences of government intervention while helping patients afford the drugs they need.   Moreover, avoiding intrusive government mandates and price controls would preserve drug innovation that has brought life-saving and life-enhancing drugs to millions of Americans.


R Street’s Sasha Moss recently posted a piece on TechDirt describing the alleged shortcomings of the Register of Copyrights Selection and Accountability Act of 2017 (RCSAA) — proposed legislative adjustments to the Copyright Office, recently passed in the House and introduced in the Senate last month (with identical language).

Many of the article’s points are well taken. Nevertheless, they support neither the article’s call for the Senate to “jettison [the bill] entirely” nor the assertion that “[a]s currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.”

R Street’s main complaint with the legislation is that it doesn’t include other proposals in a House Judiciary Committee whitepaper on Copyright Office modernization. But condemning the RCSAA simply for failing to incorporate all conceivable Copyright Office improvements fails to adequately take account of the political realities confronting Congress — in other words, it lets the perfect be the enemy of the good. It also undermines R Street’s own stated preference for Copyright Office modernization effected through “targeted and immediately implementable solutions.”

Everyone — even R Street — acknowledges that we need to modernize the Copyright Office. But none of the arguments in favor of a theoretical, “better” bill is undermined or impeded by passing this bill first. While there is certainly more that Congress can do on this front, the RCSAA is a sensible, targeted piece of legislation that begins to build the new foundation for a twenty-first century Copyright Office.

Process over politics

The proposed bill is simple: It would make the Register of Copyrights a nominated and confirmed position. For reasons almost forgotten over the last century and a half, the head of the Copyright Office is currently selected at the sole discretion of the Librarian of Congress. The Copyright Office was placed in the Library merely as a way to grow the Library’s collection with copies of copyrighted works.

More than 100 years later, most everyone acknowledges that the Copyright Office has lagged behind the times. And many think the problem lies with the Office’s placement within the Library, which is plagued with information technology and other problems, and has a distinctly different mission than the Copyright Office. The only real question is what to do about it.

Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization. And yet, somewhat inexplicably, R Street claims that the bill

amounts largely to a partisan battle over who will have the power to select the next Register: [Current Librarian of Congress] Hayden, who was appointed by Barack Obama, or President Donald Trump.

But this is a pretty farfetched characterization.

First, the House passed the bill 378-48, with 145 Democrats joining 233 Republicans in support. That’s more than three-quarters of the Democratic caucus.

Moreover, legislation to make the Register a nominated and confirmed position has been under discussion for more than four years — long before either Dr. Hayden was nominated or anyone knew that Donald Trump (or any Republican at all, for that matter) would be president.

R Street also claims that the legislation

will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, [and that] the nomination process could delay modernization efforts [because of Trump’s] confirmation backlog.

But precisely the opposite seems far more likely — as Sasha herself has previously recognized:

Clarifying the office’s lines of authority does have the benefit of making it more politically accountable…. The [House] bill takes a positive step forward in promoting accountability.

As far as I’m aware, no one claims that Dr. Hayden was “politicized” or that Librarians are vulnerable to capture because they are nominated and confirmed. And a Senate confirmation process will be more transparent than unilateral appointment by the Librarian, and will give the electorate a (nominal) voice in the Register’s selection. Surely unilateral selection of the Register by the Librarian is more susceptible to undue influence.

With respect to the modernization process, we should also not forget that the Copyright Office currently has an Acting Register in Karyn Temple Claggett, who is perfectly capable of moving the modernization process forward. And any limits on her ability to do so would arise from the very tenuousness of her position that the RCSAA is intended to address.

Modernizing the Copyright Office one piece at a time

It’s certainly true, as the article notes, that the legislation doesn’t include a number of other sensible proposals for Copyright Office modernization. In particular, it points to ideas like forming a stakeholder advisory board, creating new chief economist and technologist positions, upgrading the Office’s information technology systems, and creating a small claims court.

To be sure, these could be beneficial reforms, as ICLE (and many others) have noted. But I would take some advice from R Street’s own “pragmatic approach” to promoting efficient government “with the full realization that progress on the ground tends to be made one inch at a time.”

R Street acknowledges that the legislation’s authors have indicated that this is but a first step and that they plan to tackle the other issues in due course. At a time when passage of any legislation on any topic is a challenge, it seems appropriate to defer to those in Congress who affirmatively want more modernization on the question of how big a bill to start with.

In any event, it seems perfectly sensible to address the Register selection process before tackling the other issues, which may require more detailed discussions of policy and cost. And with the Copyright Office currently lacking a permanent Register and discussions underway about finding a new one, addressing any changes Congress deems necessary in the selection process seems like the most pressing issue, if they are to be resolved prior to the next pick being made.

Further, because the Register would presumably be deeply involved in the selection and operation of any new advisory board, chief economist and technologist, IT system, or small claims process, Congress can also be forgiven for wanting to address the Register issue first. Moreover, a Register who can be summarily dismissed by the Librarian likely doesn’t have the needed autonomy to fully and effectively implement the other proposals from the whitepaper. Why build a house on a shaky foundation when you can fix the foundation first?

Process over substance

All of which leaves the question why R Street opposes a bill that was passed by a bipartisan supermajority in the House; that effects precisely the kind of targeted, incremental reform that R Street promotes; and that implements a specific reform that R Street favors.

The legislation has widespread support beyond Congress, although the TechDirt piece gives this support short shrift. Instead, it notes that “some” in the content industry support the legislation, but lists only the Motion Picture Association of America. There is a subtle undercurrent of the typical substantive copyright debate, in which “enlightened” thinking on copyright is set against the presumptively malicious overreach of the movie studios. But the piece neglects to mention the support of more than 70 large and small content creators, technology companies, labor unions, and free market and civil rights groups, among others.

Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates. But that rancor is difficult to escape. Copyright minimalists are skeptical of a more effectual Copyright Office if it is more likely to promote policies that reinforce robust copyright, even if they support sensible process reforms and more-accountable government in the abstract. And, to be fair, copyright proponents are thrilled when their substantive positions might be bolstered by promotion of sensible process reforms.

But the truth is that no one really knows how an independent and accountable Copyright Office will act with respect to contentious, substantive issues. Perhaps most likely, increased accountability via nomination and confirmation will introduce more variance in its positions. In other words, on substance, the best guess is that greater Copyright Office accountability and modernization will be a wash — leaving only process itself as a sensible basis on which to assess reform. And on that basis, there is really no reason to oppose this widely supported, incremental step toward a modern US Copyright Office.

I’ll be participating in two excellent antitrust/consumer protection events next week in DC, both of which may be of interest to our readers:

5th Annual Public Policy Conference on the Law & Economics of Privacy and Data Security

hosted by the GMU Law & Economics Center’s Program on Economics & Privacy, in partnership with the Future of Privacy Forum, and the Journal of Law, Economics & Policy.

Conference Description:

Data flows are central to an increasingly large share of the economy. A wide array of products and business models—from the sharing economy and artificial intelligence to autonomous vehicles and embedded medical devices—rely on personal data. Consequently, privacy regulation leaves a large economic footprint. As with any regulatory enterprise, the key to sound data policy is striking a balance between competing interests and norms that leaves consumers better off; finding an approach that addresses privacy concerns, but also supports the benefits of technology is an increasingly complex challenge. Not only is technology continuously advancing, but individual attitudes, expectations, and participation vary greatly. New ideas and approaches to privacy must be identified and developed at the same pace and with the same focus as the technologies they address.

This year’s symposium will include panels on Unfairness under Section 5: Unpacking “Substantial Injury”, Conceptualizing the Benefits and Costs from Data Flows, and The Law and Economics of Data Security.

I will be presenting a draft paper, co-authored with Kristian Stout, on the FTC’s reasonableness standard in data security cases following the Commission’s decision in LabMD, entitled When “Reasonable” Isn’t: The FTC’s Standard-less Data Security Standard.

Conference Details:

  • Thursday, June 8, 2017
  • 8:00 am to 3:40 pm
  • at George Mason University, Founders Hall (next door to the Law School)
    • 3351 Fairfax Drive, Arlington, VA 22201

Register here

View the full agenda here


The State of Antitrust Enforcement

hosted by the Federalist Society.

Panel Description:

Antitrust policy during much of the Obama Administration was a continuation of the Bush Administration’s minimal involvement in the market. However, at the end of President Obama’s term, there was a significant pivot to investigations and blocks of high-profile mergers such as Halliburton-Baker Hughes, Comcast-Time Warner Cable, Staples-Office Depot, Sysco-US Foods, Aetna-Humana, and Anthem-Cigna. How will or should the new Administration analyze proposed mergers, including certain high-profile deals like Walgreens-Rite Aid, AT&T-Time Warner, Inc., and DraftKings-FanDuel?

Join us for a lively luncheon panel discussion that will cover these topics and the anticipated future of antitrust enforcement.

Speakers:

  • Albert A. Foer, Founder and Senior Fellow, American Antitrust Institute
  • Professor Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Honorable Joshua D. Wright, Professor of Law, George Mason University School of Law
  • Moderator: Honorable Ronald A. Cass, Dean Emeritus, Boston University School of Law and President, Cass & Associates, PC

Panel Details:

  • Friday, June 09, 2017
  • 12:00 pm to 2:00 pm
  • at the National Press Club, MWL Conference Rooms
    • 529 14th Street, NW, Washington, DC 20045

Register here

Hope to see everyone at both events!

  1. Introduction

The International Competition Network (ICN), a “virtual” organization composed of most of the world’s competition (antitrust) agencies and expert non-governmental advisors (NGAs), held its Sixteenth Annual Conference in Porto, Portugal, from May 10-12. (I attended this Conference as an NGA.) Now that the ICN has turned “sweet sixteen,” a stocktaking is appropriate. The ICN can point to some significant accomplishments, but it faces major future challenges. After describing those challenges, I advance four recommendations for U.S.-led initiatives to enhance the future effectiveness of the ICN.

  2. ICN Background and Successes

The ICN, whose key objective is to promote “soft convergence” among competition law regimes, has much to celebrate. It has gone from a small core of competition authorities focused on a limited set of issues to a collection of 135 agencies from 122 far-flung jurisdictions, plus a large cadre of NGA lawyers and economists who provide practical and theoretical advice. The ICN’s nature and initiatives are concisely summarized on its website:

The ICN provides competition authorities with a specialized yet informal venue for maintaining regular contacts and addressing practical competition concerns. This allows for a dynamic dialogue that serves to build consensus and convergence towards sound competition policy principles across the global antitrust community.

The ICN is unique as it is the only international body devoted exclusively to competition law enforcement and its members represent national and multinational competition authorities. Members produce work products through their involvement in flexible project-oriented and results-based working groups. Working group members work together largely by Internet, telephone, teleseminars and webinars.

Annual conferences and workshops provide opportunities to discuss working group projects and their implications for enforcement. The ICN does not exercise any rule-making function. Where the ICN reaches consensus on recommendations, or “best practices”, arising from the projects, individual competition authorities decide whether and how to implement the recommendations, through unilateral, bilateral or multilateral arrangements, as appropriate.

The Porto Conference highlighted the extent of the ICN’s influence. Representatives from key international organizations that focus on economic growth and development (and at one time were viewed as ICN “rivals”), including the OECD, the World Bank, and UNCTAD, participated in the Conference. A feature of recent years, the one-day “Pre-ICN” Forum jointly sponsored by the World Bank, the International Chamber of Commerce, and the International Bar Association this year shared the spotlight with other “sidebar” events (for example, an antitrust symposium cosponsored by UNCTAD and the Japan Fair Trade Commission, an “African Competition Forum,” and a roundtable of former senior officials and academics sponsored by a journal). The Porto Conference formally adopted an impressive array of documents generated over the past year by the ICN’s various Working Groups (the Advocacy, Agency Effectiveness, Cartel, Merger, and Unilateral Conduct Working Groups) (see here and here). This work product focuses on offering practical advice to agencies, rather than theoretical academic speculation. If recent history is any indication, a substantial portion of this advice will be incorporated into national laws, agencies’ guidance documents, and strategic plans.

In sum, the ICN is an increasingly influential organization. More importantly, it has, on balance, been a force for the promotion of sound policies on such issues as pre-merger notifications and cartel enforcement – policies that reduce transaction costs for the private sector and tend to improve the quality of antitrust enforcement. It has produced valuable training materials for agencies. Furthermore, the ICN’s Advocacy Working Group, buoyed by a growing amount of academic research (some of it supported by the World Bank), increasingly has highlighted the costs of anticompetitive government laws and regulations, and provided a template for assessing and critiquing regulatory schemes that undermine the competitive process. Most recently, the revised chapter on the “analytical framework for evaluating unilateral exclusionary conduct” issued at the 2017 Porto Conference did a solid job of describing the nature of harm to the competitive process and the need to consider error costs in evaluating such conduct. Other examples of welfare-enhancing ICN proposals abound.

  3. Grounds for Caution Going Forward

Nevertheless, despite its generally good record, one must be cautious in evaluating the ICN’s long-term prospects, for at least five reasons.

First, as the ICN tackles increasingly contentious issues (such as the assessment of vertical restraints, which are part of the 2017-2018 ICN Work Plan, and “dominant” single firm “platforms,” cited specifically by ICN Chairman Andreas Mundt in Porto), the possibility for controversy and difficulty in crafting recommendations rises.

Second, most ICN members have adopted heavily administrative competition law frameworks that draw upon an inquisitorial civil law model, as opposed to the common law adversarial legal system in which independent courts conduct full legal reviews of agency conclusions. Public choice analysis (not to mention casual empiricism and common sense) indicates that as they become established, administrative agencies will have a strong incentive to “do something” in order to expand their authority. Generally speaking, sound economic analysis (bolstered by large staffs of economists) that stresses consumer welfare has been incorporated into U.S. federal antitrust enforcement decisions and federal antitrust jurisprudence – but that is not the case in large parts of the world. As its newer member agencies grow in size and influence, the ICN may be challenged by those authorities to address “novel” practices that stray beyond well-understood competition law categories. As a result, welfare-enhancing business innovations could be given unwarranted scrutiny and thereby discouraged.

Third, as various informed commentators in Porto noted, many competition laws explicitly permit consideration of goals unrelated to economic welfare, such as “industrial policy” (including promotion of “national champion” competitors), “fairness,” and general “public policy.” Such ill-defined statutory goals allow competition agencies (and, of course, politicians who may exercise influence over those agencies) to apply competition statutes in an unpredictable manner that has nothing to do with (indeed, may be antithetical to) promotion of a vigorous competitive process and consumer welfare. With the proliferation of international commerce, the costly uncertainty injected into business decision-making by malleable antitrust statutes becomes increasingly significant. The ICN, which issues non-binding recommendations and advice and relies on voluntary interagency cooperation, may have little practical ability to fend off such welfare-inimical politicization of antitrust.

Fourth, for nearly a decade United States antitrust agencies have expressed concern in international forums about the lack of due process in competition enforcement. Commendably, in 2015 the ICN did issue guidance regarding “key investigative principles and practices important to effective and fair investigative process,” but this guidance did not address administrative hearings and enforcement actions, which remain particularly serious concerns. The ICN’s ability to drive a “due process improvements” agenda may be inherently limited, due to differences among ICN members’ legal systems and sensitivities regarding the second-guessing of national enforcement norms associated with the concept of “due process.”

Fifth, there is “the elephant outside the room.” One major jurisdiction, China, still has not joined the ICN. Given China’s size, importance in the global economy, and vigorous enforcement of its competition law, China’s absence from “the table” is a significant limitation on the ICN’s ability to promote economically meaningful global policy convergence. (Since Hong Kong, a “special administrative region” of China, has joined the ICN, one may hope that China itself will consider opting for ICN membership in the not too distant future.)

  4. What Should the U.S. Antitrust Agencies Do?

Despite these notes of caution regarding the ICN’s future initiatives and effectiveness, the ICN will remain for the foreseeable future a useful forum for “nudging” members toward improvements in their competition law systems, particularly in key areas such as cartel enforcement, merger review, and agency effectiveness (internal improvements in agency management may improve the quality of enforcement and advocacy initiatives). Thus, the U.S. federal antitrust agencies, the Justice Department’s Antitrust Division (DOJ) and the Federal Trade Commission (FTC), should (and undoubtedly will) remain fully engaged with the ICN. Beyond participating in the ICN’s Working Groups, DOJ and the FTC should develop a strategy for minimizing the negative effects of the ICN’s limitations and capitalizing on its strengths. What should such a strategy entail? Four key elements come to mind.

First, the FTC and DOJ should strongly advocate against an ICN focus on expansive theories of liability for unilateral conduct (particularly involving such areas as popular Internet “platforms” (e.g., Google, Facebook, and Amazon) and vertical restraints) that are not tied to showings of harm to the competitive process. The proliferation of cases based on such theories could chill economically desirable business innovations. In countering such novel and expansive condemnations of unilateral conduct, the U.S. agencies could draw upon the extensive law and economics literature on efficiencies and unilateral conduct in speeches, publications, and presentations to ICN Working Groups. To provide further support for their advocacy, the FTC and DOJ should also consider issuing a new joint statement of unilateral conduct enforcement principles, inspired by the general lines of the 2008 DOJ Report on Single Firm Conduct Under Section 2 of the Sherman Act (regrettably withdrawn by the Obama Administration DOJ in 2009). Relatedly, the FTC and DOJ should advocate the right of intellectual property (IP) holders legitimately to maximize returns on their holdings. The U.S. agencies also should be prepared to argue against novel theories of antitrust liability untethered from traditional concepts of antitrust harm, based on the unilateral exploitation of IP rights (see here, here, here, and here).

Second, the U.S. agencies should promote a special ICN project on decision theory and competition law enforcement (see my Heritage Foundation commentary here), under the aegis of the ICN’s Agency Effectiveness Working Group. A decision-theoretic framework aims to minimize the costs of antitrust administration and enforcement error, in order to promote cost-beneficial enforcement outcomes. ICN guidance on decision theory (which would stress the primacy of empirical analysis and the need for easily administrable rules) hopefully would encourage competition agencies to focus on clearly welfare-inimical practices, and avoid pursuing fanciful new theories of antitrust violations unmoored from robust theories of competitive harm. The FTC and DOJ should also work to inculcate decision theory into the work of the core ICN Cartel and Merger Working Groups (see here).

Third, the U.S. agencies should also encourage the ICN’s Agency Effectiveness Working Group to pursue a comprehensive “due process” initiative, focused on guaranteeing fundamental fairness to parties at all stages of a competition law proceeding.  An emphasis on basic universal notions of fairness would transcend the differences inherent in civil law and common law administrative processes. It would suggest a path forward whereby agencies could agree on the nature of basic rights owed litigants, while still preserving differences among administrative enforcement models. Administrative procedure recommendations developed by the American Bar Association’s Antitrust Section in 2015 (see here) offer a good template for consideration, and 2012 OECD deliberations on fairness and transparency (see here) yield valuable background analysis. Consistent with these materials, the U.S. agencies could stress that due process reforms to protect basic rights would not only improve the quality of competition authority decision-making, but would also enhance economic welfare and encourage firms from around the world to do business in reforming jurisdictions. (As discussed above, due process raises major sensitivities, and thus the push for due process improvements should be viewed as a long-term project that will have to be pursued vigorously and very patiently.)

Fourth, working through the ICN’s Advocacy Working Group, the FTC and DOJ should push to substantially raise the profile of competition advocacy at the ICN. A growing body of economic research reveals the enormous economic gains that could be unlocked within individual countries by the removal of anticompetitive laws and rules, particularly those that create artificial barriers to entry and distort trade (see, for example, here and here). The U.S. agencies should emphasize the negative consequences of many of these anticompetitive barriers — harm to poorer consumers, reduced innovation, and foregone national income — drawing upon research by World Bank and OECD scholars (see here). (Fortunately, the ICN already works with the World Bank to promote an annual contest that showcases economic “success stories” due to agency advocacy.) The FTC and DOJ should also use the ICN as a forum to recommend that national competition authorities devote relatively more resources and attention to competition advocacy aimed at domestic regulatory reform, particularly compared to investigations of vertical restraints and novel unilateral conduct. They should also work within the ICN’s guidance and oversight body, the “Steering Group,” to make far-reaching competition advocacy initiatives a top ICN priority.

  5. Conclusion

The ICN is a worthwhile international organization that stands at a crossroads. Having no permanent bureaucracy (its website is maintained by the Canadian Competition Bureau), and relying in large part on online communications among agency staff and NGAs to carry out its work, the ICN represents a very good investment of scarce resources by the U.S. Government. Absent thoughtful guidance, however, there is a danger that it could drift and become less effective at promoting welfare-enhancing competition law improvements around the world. To avert such an outcome, U.S. antitrust enforcement agencies (joined by like-minded ICN members from other jurisdictions) should proactively seek to have the ICN take up new projects that hold out the promise of substantive and process-based improvements in competition policy worldwide, including far-reaching regulatory reform. A positive ICN response to such initiatives would enhance the quality of competition policy. Moreover, it could contribute in no small fashion to increased economic welfare and innovation in those jurisdictions that adopt reforms in response to the ICN’s call. American businesses operating internationally also would benefit from improvements in the global competition climate generated by ICN-incentivized reforms.


It’s fitting that FCC Chairman Ajit Pai recently compared his predecessor’s jettisoning, without hard evidence, of the FCC’s light-touch framework for Internet access regulation to the Oklahoma City Thunder’s James Harden trade. That infamous 2012 deal broke up a young nucleus of three of the best players in the NBA because keeping all three might someday create salary cap concerns. What few saw coming was a new TV deal in 2015 that sent the salary cap soaring.

If it’s hard to predict how the market will evolve in the closed world of professional basketball, predictions about the path of Internet innovation are an order of magnitude harder — especially for those making crucial decisions with a lot of money at stake.

The FCC’s answer for what it considered to be the dangerous unpredictability of Internet innovation was to write itself a blank check of authority to regulate ISPs in the 2015 Open Internet Order (OIO), embodied in what is referred to as the “Internet conduct standard.” This standard expanded the scope of Internet access regulation well beyond the core principle of preserving openness (i.e., ensuring that any legal content can be accessed by all users) by granting the FCC the unbounded, discretionary authority to define and address “new and novel threats to the Internet.”

When asked about what the standard meant (not long after writing it), former Chairman Tom Wheeler replied,

We don’t really know. We don’t know where things will go next. We have created a playing field where there are known rules, and the FCC will sit there as a referee and will throw the flag.

Somehow, former Chairman Wheeler would have us believe that an amorphous standard that means whatever the agency (or its Enforcement Bureau) says it means created a playing field with “known rules.” But claiming such broad authority is hardly the light-touch approach marketed to the public. Instead, this ill-conceived standard allows the FCC to wade as deeply as it chooses into how an ISP organizes its business and how it manages its network traffic.

Such an approach is destined to undermine, rather than further, the objectives of Internet openness, as embodied in Chairman Powell’s 2005 Internet Policy Statement:

To foster creation, adoption and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition.

Instead, the Internet conduct standard is emblematic of how an off-the-rails quest to heavily regulate one specific component of the complex Internet ecosystem results in arbitrary regulatory imbalances — e.g., between ISPs and over-the-top (OTT) or edge providers that offer similar services such as video streaming or voice calling.

As Boston College law professor Dan Lyons puts it:

While many might assume that, in theory, what’s good for Netflix is good for consumers, the reality is more complex. To protect innovation at the edge of the Internet ecosystem, the Commission’s sweeping rules reduce the opportunity for consumer-friendly innovation elsewhere, namely by facilities-based broadband providers.

This is no recipe for innovation, nor does it coherently distinguish between practices that might impede competition and innovation on the Internet and those that are merely politically disfavored, for any reason or no reason at all.

Free data madness

The Internet conduct standard’s unholy combination of unfettered discretion and the impulse to micromanage can (and will) be deployed without credible justification to the detriment of consumers and innovation. Nowhere has this been more evident than in the confusion surrounding the regulation of “free data.”

Free data, like T-Mobile’s Binge On program, is data consumed by a user that has been subsidized by a mobile operator or a content provider. The vertical arrangements between operators and content providers that create free data offerings provide many benefits to consumers: they enable subscribers to consume more data (or, for low-income users, to consume data in the first place); facilitate product differentiation by mobile operators that offer a variety of free data plans (including allowing smaller operators the chance to get a leg up on competitors by assembling a market-share-winning plan); increase the overall consumption of content; and reduce users’ cost of obtaining information. Free data is also fundamentally about experimentation. As the International Center for Law & Economics (ICLE) recently explained:

Offering some services at subsidized or zero prices frees up resources (and, where applicable, data under a user’s data cap) enabling users to experiment with new, less-familiar alternatives. Where a user might not find it worthwhile to spend his marginal dollar on an unfamiliar or less-preferred service, differentiated pricing loosens the user’s budget constraint, and may make him more, not less, likely to use alternative services.
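To make that budget-constraint point concrete, here is a minimal numeric sketch; every figure in it is hypothetical, chosen purely for illustration rather than drawn from any actual plan:

```python
# A minimal sketch of how zero-rating loosens a user's data budget
# constraint. All figures are hypothetical, for illustration only.

CAP_GB = 3.0      # hypothetical monthly data cap
music_gb = 1.2    # data a user would spend on a zero-rated music service
video_gb = 1.5    # data spent on familiar, metered services

# Without zero-rating, little headroom remains for trying new services:
headroom_without = CAP_GB - (music_gb + video_gb)

# With the music service zero-rated, its usage no longer counts against
# the cap, freeing room for experimentation with unfamiliar alternatives:
headroom_with = CAP_GB - video_gb

print(f"Headroom without zero-rating: {headroom_without:.1f} GB")  # 0.3 GB
print(f"Headroom with zero-rating:    {headroom_with:.1f} GB")     # 1.5 GB
```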

In December 2015 then-Chairman Tom Wheeler used his newfound discretion to launch a 13-month “inquiry” into free data practices before preliminarily finding some to be in violation of the standard. Without identifying any actual harm, Wheeler concluded that free data plans “may raise” economic and public policy issues that “may harm consumers and competition.”

After assuming the reins at the FCC, Chairman Pai swiftly put an end to that nonsense, saying that the Commission had better things to do (like removing barriers to broadband deployment) than policing free data plans that expand Internet access and are immensely popular, especially among low-income Americans.

The global morass of free data regulation

But as long as the Internet conduct standard remains on the books, it implicitly grants the US’s imprimatur to harmful policies and regulatory capriciousness in other countries that look to the US for persuasive authority. While Chairman Pai’s decisive intervention resolved the free data debate in the US (at least for now), other countries are still grappling with whether to prohibit the practice, allow it, or allow it with various restrictions.

In Europe, the 2016 EC guidelines left the decision of whether to allow the practice in the hands of national regulators. Consequently, some regulators — in Hungary, Sweden, and the Netherlands (although there the ban was recently overturned in court) — have banned free data practices, while others — in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine — have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs, a state of affairs that is compounded by a lack of data on the consequences of various approaches to their regulation.

In Canada this year, the CRTC issued a decision adopting restrictive criteria under which to evaluate free data plans. The criteria include assessing the degree to which the treatment of data is agnostic, whether the free data offer is exclusive to certain customers or certain content providers, the impact on Internet openness and innovation, and whether there is financial compensation involved. The standard is open-ended, and free data plans as they are offered in the US would “likely raise concerns.”

Other regulators are contributing to the confusion through ambiguously framed rules, such as those of the Chilean regulator, Subtel. In a 2014 decision, it found that a free data offer of specific social network apps was in breach of Chile’s Internet rules. In contrast to what is commonly reported, however, Subtel did not ban free data. Instead, it required mobile operators to change how they promote such services, requiring them to state that access to Facebook, Twitter, and WhatsApp was offered “without discounting the user’s balance” instead of “at no cost.” It also required them to disclose the amount of time the offer would be available, but imposed no mandatory limit.

In addition to this confusing regulatory make-work governing how operators market free data plans, the Chilean measures also require that mobile operators offer free data to subscribers who pay for a data plan, in order to ensure free data isn’t the only option users have to access the Internet.

The result is that in Chile today free data plans are widely offered by Movistar, Claro, and Entel and include access to apps such as Facebook, WhatsApp, Twitter, Instagram, Pokemon Go, Waze, Snapchat, Apple Music, Spotify, Netflix, and YouTube — even though Subtel has nominally declared such plans to be in violation of Chile’s net neutrality rules.

Other regulators are searching for palatable alternatives that let them flex their regulatory muscle to govern Internet access while simultaneously making free data work. The Indian regulator, TRAI, famously banned free data in February 2016. But the story doesn’t end there. After seeing the potential value of free data in unserved and underserved low-income areas, TRAI proposed implementing government-sanctioned free data. The proposed scheme would provide rural subscribers with 100 MB of free data per month, funded through the country’s universal service fund. To ensure that there would be no vertical agreements between content providers and mobile operators, TRAI recommended introducing third parties, referred to as “aggregators,” that would facilitate mobile-operator-agnostic arrangements.

The result is a nonsensical, if vaguely well-intentioned, threading of the needle between the perceived need to (over-)regulate access providers and the determination to expand access. In other words, notwithstanding the Indian government’s awareness that free data will help to close the digital divide and enhance Internet access, it nonetheless banned private markets from employing private capital to achieve that very result, preferring instead non-market processes that are unlikely to be nearly as nimble or as effective — and that still ultimately offer “non-neutral” options for consumers.

Thinking globally, acting locally (by ditching the Internet conduct standard)

Where it is permitted, free data is undergoing explosive adoption among mobile operators. Currently in the US, for example, all major mobile operators offer some form of free data or unlimited plan to subscribers. And, as a result, free data is proving itself as a business model for users’ early-stage experimentation with and adoption of augmented reality, virtual reality, and other cutting-edge technologies that represent the Internet’s next wave — but that also use vast amounts of data. Were the US to cut free data off at the knees under the OIO absent hard evidence of harm, it would substantially undermine this innovation.

The application of the nebulous Internet conduct standard to free data is a microcosm of the current incoherence: the standard is rife with uncertainty and aimed at merely theoretical problems, needlessly saddling companies with enforcement risk, all in the name of preserving and promoting innovation and openness. As even some of the staunchest proponents of net neutrality have recognized, only companies that can afford years of litigation can be expected to thrive in such an environment.

In the face of confusion and uncertainty globally, the US is now poised to provide leadership grounded in sound policy that promotes innovation. As ICLE noted last month, Chairman Pai took a crucial step toward re-imposing economic rigor and the rule of law at the FCC by questioning the unprecedented and ill-supported expansion of FCC authority that undergirds the OIO in general and the Internet conduct standard in particular. Today the agency will take the next step by voting on Chairman Pai’s proposed rulemaking. Wherever the new proceeding leads, it’s a welcome opportunity to analyze the issues with a degree of rigor that has thus far been appallingly absent.

And we should not forget that there’s a direct solution to these ambiguities that would avoid the undulations of subsequent FCC policy fights: Congress could (and should) pass legislation implementing a regulatory framework grounded in sound economics and empirical evidence that allows for consumers to benefit from the vast number of procompetitive vertical agreements (such as free data plans), while still facilitating a means for policing conduct that may actually harm consumers.

The Golden State Warriors are the heavy odds-on favorite to win another NBA Championship this summer, led by former OKC player Kevin Durant. And James Harden is a contender for league MVP. We can’t always turn back the clock on a terrible decision, hastily made before enough evidence has been gathered, but Chairman Pai’s efforts present a rare opportunity to do so.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.
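As a quick sanity check on those growth figures, here is a back-of-the-envelope calculation using the rounded values quoted above (the paper’s own inputs are presumably unrounded, which likely explains the small gap from the cited 773%):

```python
# Back-of-the-envelope check of the growth figures cited above, using
# the rounded dollar and share values from the text.

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

rd_1995, rd_2015 = 507e6, 4.4e9        # R&D spending, seeds and traits
share_1995, share_2015 = 10.0, 65.0    # combined share of the six largest firms

print(f"R&D growth:   {pct_increase(rd_1995, rd_2015):.0f}%")        # ~768%, vs. ~773% cited
print(f"Share growth: {pct_increase(share_1995, share_2015):.0f}%")  # 550%
```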

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological advances — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

The indefatigable (and highly talented) scriveners at the Scalia Law School’s Global Antitrust Institute (GAI) once again have offered a trenchant law and economics assessment that, if followed, would greatly improve a foreign jurisdiction’s competition law guidance. This latest assessment, compelling and highly persuasive, is embodied in a May 4 GAI Commentary on the Japan Fair Trade Commission’s (JFTC’s) consultation on its Draft Guidelines Concerning Distribution Systems and Business Practices Under the Antimonopoly Act (Draft Guidelines). In particular, the Commentary highlights four major concerns with the Draft Guidelines’ antitrust analysis of conduct involving multi-sided platforms, resale price maintenance (RPM), refusals to deal, tying, and other vertical restraints. It also offers guidance on the appropriate analysis of network effects in multi-sided platforms. After summarizing these five key points, I offer some concluding observations on the potential benefit for competition policy worldwide offered by the GAI’s commentaries on foreign jurisdictions’ antitrust guidance.

  1. Resale price maintenance. Though the Draft Guidelines appear to apply a “rule of reason” or effects-based approach to most vertical restraints, Part I.3 and Part I, Chapter 1 carve out RPM practices on the ground that they “usually have significant anticompetitive effects and, as a general rule, they tend to impede fair competition.” Given the economic theory and empirical evidence showing that vertical restraints, including RPM, rarely harm competition and often benefit consumers, the Commentary urges the JFTC to reconsider its approach and instead apply a rule of reason or effects-based analysis to all vertical restraints, including RPM, under which restraints are condemned only if any anticompetitive harm they cause outweighs any procompetitive benefits they create.
  2. Effects of vertical restraints. The Draft Guidelines identify two types of effects of vertical non-price restraints, “foreclosure effects” and “price maintenance effects.” The Commentary urges the JFTC to require proof of actual anticompetitive effects for both competition and unfair trade practice violations, just as it requires proof of procompetitive effects. It also recommends that the agency take cognizance only of substantial foreclosure effects, that is, “foreclosure of a sufficient share of distribution so that a manufacturer’s rivals are forced to operate at a significant cost disadvantage for a significant period of time.” The Commentary explains that a “consensus has emerged that a necessary condition for anticompetitive harm arising from allegedly exclusionary agreements is that the contracts foreclose rivals from a share of distribution sufficient to achieve minimum efficient scale.” The Commentary notes that “the critical market share foreclosure rate should depend upon the minimum efficient scale of production. Unless there are very large economies of scale in manufacturing, the minimum foreclosure of distribution necessary for an anticompetitive effect in most cases would be substantially greater than 40 percent. Therefore, 40 percent should be thought of as a useful screening device or ‘safe harbor,’ not an indication that anticompetitive effects are likely to exist above this level.”

The Commentary also strongly urges the JFTC to include an analysis of the counterfactual world, i.e., to identify “the difference between the percentage share of distribution foreclosed by the allegedly exclusionary agreements or conduct and the share of distribution in the absence of such an agreement.” It explains that such an approach to assessing foreclosure isolates any true competitive effect of the allegedly exclusionary agreement from other factors (a stylized computation is sketched below).

The Commentary also recommends that the JFTC explicitly recognize that evidence of new or expanded entry during the period of the alleged abuse can be a strong indication that the restraint at issue did not foreclose competition or have an anticompetitive effect. It stresses that, with respect to price increases, it is important to recognize and consider other factors (including changes in the product and changes in demand) that may explain higher prices.
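To make those foreclosure screens concrete, here is a stylized sketch. The 40 percent figure is the Commentary’s screening device; every case-specific share below is hypothetical, invented purely for illustration:

```python
# Stylized sketch of the Commentary's two foreclosure screens: net
# foreclosure relative to a counterfactual, and the 40% "safe harbor"
# screening device. All case-specific shares are hypothetical.

SAFE_HARBOR = 0.40  # below this share, anticompetitive effects are unlikely

def net_foreclosure(share_with: float, share_without: float) -> float:
    """Foreclosure attributable to the challenged agreement itself:
    the share of distribution foreclosed with the agreement in place,
    minus the share that would be foreclosed in its absence."""
    return share_with - share_without

# Hypothetical case: 35% of distribution is foreclosed with the
# agreements in place, but 20% would have been foreclosed anyway.
net = net_foreclosure(0.35, 0.20)

print(f"Net foreclosure: {net:.0%}")               # 15%
print(f"Within safe harbor: {net < SAFE_HARBOR}")  # True -> screen out
```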

  3. Unilateral refusals to deal and forced sharing. Part II, Chapter 3 of the Draft Guidelines would impose unfair trade practice liability for unilateral refusals to deal that “tend to make it difficult for the refused competitor to carry on normal business activities.” The Commentary strongly urges the JFTC to reconsider this vague approach and instead recognize the numerous significant concerns with forced sharing.

For example, while a firm’s competitors may want to use a particular good or technology in their own products, there are few situations, if any, in which access to a particular good is necessary to compete in a market. Indeed, one of the main reasons not to impose liability for unilateral, unconditional refusals to deal is “pragmatic in nature and concerns the limited abilities of competition authorities and courts to decide whether a facility is truly non-replicable or merely a competitive advantage.” For one thing, there are “no reliable economic or evidential techniques for testing whether a facility can be duplicated,” and it is often “difficult to distinguish situations in which customers simply have a strong preference for one facility from situations in which objective considerations render their choice unavoidable.”

Furthermore, the Commentary notes that forced competition based on several firms using the same inputs may actually preserve monopolies by removing the requesting party’s incentive to develop its own inputs. Consumer welfare is not enhanced only by price competition; it may be significantly improved by the development of new products for which there is an unsatisfied demand. If all competitors share the same facilities, this will occur much less quickly, if at all. In addition, if competitors can anticipate that they will be allowed to share the same facilities and technologies, their incentives to develop new products are diminished. Also, sharing a monopoly among several competitors does not in itself increase competition unless it leads to improvements in price and output; otherwise, nothing is achieved in terms of enhancing consumer welfare. Competition would be improved only if the terms upon which access is offered allow the requesting party to compete effectively with the dominant firm on the relevant downstream market. This raises the issue of whether the dominant firm is entitled to charge a monopoly rate or whether, in addition to granting access, it has a duty to offer terms that allow efficient rivals to make a profit.

  4. Fair and free competition. The Draft JFTC Guidelines refer throughout to the goal of promoting “fair and free competition.” Part I.3 in particular provides that “[i]f a vertical restraint tends to impede fair competition, such restraint is prohibited as an unfair trade practice.” The Commentary urges the JFTC to adopt an effects-based approach similar to that adopted by the U.S. Federal Trade Commission in its 2015 Policy Statement on Unfair Methods of Competition. Tying unfairness to antitrust principles ensures the alignment of unfairness with the economic principles underlying competition laws. Enforcement of unfair methods of competition statutes should focus on harm to competition, while taking into account possible efficiencies and business justifications. In short, while unfairness can be a useful tool in reaching conduct that harms competition but is not within the scope of the antitrust laws, it is imperative that unfairness be linked to the fundamental goals of the antitrust laws.
  5. Network effects in multi-sided platforms. With respect to multi-sided platforms in particular, the Commentary urges the JFTC to avoid any presumption that network effects create either market power or barriers to entry. In lieu of such a presumption, the Commentary recommends a fact-specific, case-by-case analysis with empirical backing on the presence and effect of any network effects. Network effects occur when the value of a good or service increases as the number of people who use it grows. Network effects are generally beneficial. While there is some dispute over whether and under what conditions they might also raise exclusionary concerns, the Commentary notes that “transactions involving complementary products (indirect network effects) fully internalize the benefits of consuming complementary goods and do not present an exclusionary concern.” The Commentary explains that, “[a]s in all analysis of network effects, the standard assumption that quantity alone determines the strength of the effect is likely mistaken.” Rather, to the extent that advertisers, for example, care about end users, they care about many of their characteristics. An increase in the number of users who are looking only for information and never to purchase goods may be of little value to advertisers. “Assessing network or scale effects is extremely difficult in search engine advertising [for example], and scale may not even correlate with increased value over some ranges of size.” (A toy numeric illustration of this composition point follows this item.)
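Here is that toy illustration, with entirely made-up numbers: two platforms of identical scale can differ sharply in value to advertisers once user characteristics, not just user counts, are taken into account.

```python
# Toy model of the point that user composition, not raw scale, can drive
# the value of a network to advertisers. All numbers are made up.

def advertiser_value(users: int, purchase_intent_share: float,
                     value_per_buyer: float = 1.0) -> float:
    """Value to advertisers driven by users likely to buy,
    not by total audience size."""
    return users * purchase_intent_share * value_per_buyer

# Two platforms with identical total audiences...
value_a = advertiser_value(1_000_000, purchase_intent_share=0.30)
value_b = advertiser_value(1_000_000, purchase_intent_share=0.05)

print(value_a)  # 300000.0 -- same scale,
print(value_b)  # 50000.0  -- one-sixth the advertiser value
```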
  6. Concluding thoughts. Implicit in the overall approach of this latest GAI Commentary, and in many other GAI assessments of foreign jurisdictions’ proposed antitrust guidance, is the need for regulatory humility, sound empiricism, and a focus on consumer welfare. Antitrust enforcement policies that blandly accept esoteric theories of anticompetitive behavior and ignore actual economic effects are welfare reducing, not welfare enhancing. The very good analytical work carried out by GAI helps competition authorities keep this reality in mind, and merits close attention.