We’re delighted to welcome Eric Fruits as our newest blogger at Truth on the Market.

Eric Fruits, Ph.D. is the Oregon Association of Realtors Faculty Fellow at Portland State University and the recently minted Chief Economist at the International Center for Law & Economics.


Among other things, Dr. Fruits is an antitrust expert, with particular expertise in price fixing and cartels (see, e.g., his article, Market Power and Cartel Formation: Theory and an Empirical Test, in the Journal of Law and Economics). He has assisted in the review of several mergers, including Sysco-US Foods, Exxon-Mobil, BP-Arco, and Nestle-Ralston. He has worked on numerous antitrust lawsuits, including Weyerhaeuser v. Ross-Simmons, a predatory bidding case that was ultimately decided by the US Supreme Court (and discussed at some length by Thom here on TOTM: see here and here).

As an expert in statistics, he has provided expert opinions and testimony regarding market manipulation, real estate transactions, profit projections, agricultural commodities, and war crimes allegations. His expert testimony has been submitted to state courts, federal courts, and an international court.

Eric has also written peer-reviewed articles on insider trading, initial public offerings (IPOs), and the municipal bond market, among many other topics. His economic analysis has been widely cited and has been published in The Economist and the Wall Street Journal. His testimony regarding the economics of public employee pension reforms was heard by a special session of the Oregon Supreme Court.

You can also find him on Twitter at @ericfruits.

Welcome, Eric!


Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a remarkable statement. In 2016, another official EU service published statistics showing that Alphabet had increased its R&D spending by 22%, ranking it as the world’s 4th-largest R&D investor. Sure, it can always do better. And sure, this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn a lesson from the Microsoft remedy quagmire: it refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of this is found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases where pricing remedies are costly, impractical, and ultimately inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). In fact, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense), when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts further than the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements so there is little competitive relationship between both products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to consider the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of every and any firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to think that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This case could thus be a test case in terms of setting boundaries on how freely the Commission can U-turn a case (the Commissioner said it would “take the case forward in a different way”).

Today I published an article in The Daily Signal bemoaning the European Commission’s June 27 decision to fine Google $2.7 billion for engaging in procompetitive, consumer welfare-enhancing conduct.  The article is reproduced below (internal hyperlinks omitted), in italics:

On June 27, the European Commission—Europe’s antitrust enforcer—fined Google over $2.7 billion for a supposed violation of European antitrust law that bestowed benefits, not harm, on consumers.

And that’s just for starters. The commission is vigorously pursuing other antitrust investigations of Google that could lead to the imposition of billions of dollars in additional fines by European bureaucrats.

The legal outlook for Google is cloudy at best. Although the commission’s decisions can be appealed to European courts, European Commission bureaucrats have a generally good track record in winning before those tribunals.

But the problem is even bigger than that.

Recently, questionable antitrust probes have grown like Topsy around the world, many of them aimed at America’s most creative high-tech firms. Beneficial innovations have become legal nightmares—good for defense lawyers, but bad for free market competition and the health of the American economy.

What great crime did Google commit to merit the huge European Commission fine?

The commission claims that Google favored its own comparison shopping service over others in displaying Google search results.

Never mind that consumers apparently like the shopping-related service links they find on Google (after all, they keep using its search engine in droves), or can patronize any other search engine or specialized comparison shopping service that can be found with a few clicks of the mouse.

This is akin to saying that Kroger or Walmart harm competition when they give favorable shelf space displays to their house brands. That’s ridiculous.

Somehow, such “favoritism” does not prevent consumers from flocking to those successful chains, or patronizing their competitors if they so choose. It is the essence of vigorous free market rivalry.  

The commission’s theory of anticompetitive behavior doesn’t hold water, as I explained in an earlier article. The Federal Trade Commission investigated Google’s search engine practices several years ago and found no evidence that alleged Google search engine display bias harmed consumers.

To the contrary, as former FTC Commissioner (and leading antitrust expert) Josh Wright has pointed out, and as the FTC found:

Google likely benefited consumers by prominently displaying its vertical content on its search results page. The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior—so-called ‘click through’ data—which showed how consumers reacted to Google’s promotion of its vertical properties.

In short, Google’s search policies benefit consumers. Antitrust is properly concerned with challenging business practices that harm consumer welfare and the overall competitive process, not with propping up particular competitors.

Absent a showing of actual harm to consumers, government antitrust cops—whether in Europe, the U.S., or elsewhere—should butt out.

Unfortunately, the European Commission shows no sign of heeding this commonsense advice. The Europeans have also charged Google with antitrust violations—with multibillion-dollar fines in the offing—based on the company’s promotion of its Android mobile operating system and its AdSense advertising service.

(That’s not all—other European Commission Google inquiries are also pending.)

As in the shopping services case, these investigations appear to be woefully short on evidence of harm to competition and consumer welfare.

The bigger question raised by the Google matters is the ability of any highly successful individual competitor to efficiently promote and favor its own offerings—something that has long been understood by American enforcers to be part and parcel of free-market competition.

As law professor Michael Carrier points out, any changes the EU forces on Google’s business model “could eventually apply to any way that Amazon, Facebook or anyone else offers to search for products or services.”

This is troublesome. Successful American information-age companies have already run afoul of the commission’s regulatory cops.

Microsoft and Intel absorbed multibillion-dollar European Commission antitrust fines in recent years, based on other theories of competitive harm. Amazon, Facebook, and Apple, among others, have faced European probes of their competitive practices and “privacy policies”—the terms under which they use or share sensitive information from consumers.

Often, these probes have been supported by less successful rivals who would rather rely on government intervention than competition on the merits.

Of course, being large and innovative is not a legal shield. Market-leading companies merit being investigated for actions that are truly harmful. The law applies equally to everyone.

But antitrust probes of efficient practices that confer great benefits on consumers (think how much the Google search engine makes it easier and cheaper to buy desired products and services and obtain useful information), based merely on the theory that some rivals may lose business, do not advance the free market. They retard it.

Who loses when zealous bureaucrats target efficient business practices by large, highly successful firms, as in the case of the European Commission’s Google probes and related investigations? The general public.

“Platform firms” like Google and Amazon that bring together consumers and other businesses will invest less in improving their search engines and other consumer-friendly features, for fear of being accused of undermining less successful competitors.

As a result, the supply of beneficial innovations will slow, and consumers will be less well off.

What’s more, competition will weaken, as the incentive to innovate to compete effectively with market leaders will be reduced. Regulation and government favor will substitute for welfare-enhancing improvement in goods, services, and platform quality. Economic vitality will inevitably be reduced, to the public’s detriment.

Europe is not the only place where American market leaders face unwarranted antitrust challenges.

For example, Qualcomm and InterDigital, U.S. firms that are leaders in smartphone communications technologies that power mobile interconnections, have faced large antitrust fines for, in essence, “charging too much” for licenses to their patented technologies.

South Korea has also claimed the authority to impose a “global remedy” applying its artificially low royalty rates to all of Qualcomm’s licensing agreements around the world.

(All this is part and parcel of foreign government attacks on American intellectual property—patents, copyrights, trademarks, and trade secrets—that cost U.S. innovators hundreds of billions of dollars a year.)


A lack of basic procedural fairness in certain foreign antitrust proceedings has also bedeviled American companies, preventing them from being able to defend their conduct. Foreign antitrust has sometimes been perverted into a form of “industrial policy” that discriminates against American companies in favor of domestic businesses.

What can be done to confront these problems?

In 2016, the U.S. Chamber of Commerce convened a group of trade and antitrust experts to examine the problem. In March 2017, the chamber released a report by the experts describing the nature of the problem and making specific recommendations for U.S. government action to deal with it.

Specifically, the experts urged that a White House-led interagency task force be set up to develop a strategy for dealing with unwarranted antitrust attacks on American businesses—including both misapplication of legal rules and violations of due process.

The report also called for the U.S. government to work through existing international institutions and trade negotiations to promote a convergence toward sounder antitrust practices worldwide.

The Trump administration should take heed of the experts’ report and act decisively to combat harmful foreign antitrust distortions. Antitrust policy worldwide should focus on helping the competitive process work more efficiently, not on distorting it by shackling successful innovators.

One more point, not mentioned in the article, merits being stressed.  Although the United States Government cannot control a foreign sovereign’s application of its competition law, it can engage in rhetoric and public advocacy aimed at convincing that sovereign to apply its law in a manner that promotes consumer welfare, competition on the merits, and economic efficiency.  Regrettably, the Obama Administration, particularly in the latter part of its second term, did a miserable job of promoting a fact-based, empirical approach to antitrust enforcement, centered on hard evidence rather than mere speculative theories of harm.  In particular, certain political appointees paid lip service to, or silently acquiesced in, inappropriate antitrust attacks on the unilateral exercise of intellectual property rights.  In addition, those senior officials made statements that could have been interpreted as supportive of populist “big is bad” conceptions of antitrust that had been discredited decades ago – through sound scholarship, by U.S. enforcement policies, and in judicial decisions.  The Trump Administration will have an opportunity to correct those errors and to restore U.S. policy leadership in support of sound, pro-free market antitrust principles.  Let us hope that it does so, and soon.

Last October 26, Heritage scholar James Gattuso and I published an essay in The Daily Signal, explaining that the proposed vertical merger (a merger between firms at different stages of the distribution chain) of AT&T and Time Warner (currently undergoing Justice Department antitrust review) may have the potential to bestow substantial benefits on consumers – and that congressional calls to block it, uninformed by fact-based economic analysis, could prove detrimental to consumer welfare.  We explained:

[E]ven though the proposed union of AT&T and Time Warner is not guaranteed to benefit shareholders or consumers, that is no reason for the government to block it. Absent a strong showing of likely harm to the competitive process (which does not appear to be the case here), the government has no business interfering in corporate acquisitions.  Market forces should be allowed to sort out the welfare-enhancing transactional sheep from the unprofitable goats.  Shareholders are in a position to “vote with their feet” and reward or punish a merged company, based on information generated in the marketplace. 

[M]arket transactors are better placed and better incentivized than bureaucrats to uncover and apply the information needed to yield an efficient allocation of resources.

In short, government meddling in mergers in the absence of likely market failure (and of reason to believe that the government’s actions will yield results superior to those of an imperfect market) is a recipe for a diminution in—not an improvement in—consumer welfare.

Furthermore, by arbitrarily intervening in proposed mergers that are not anti-competitive, government disincentivizes firms from acting boldly to seek out new opportunities to create wealth and enhance the welfare of consumers.

What’s worse, the knowledge that government may intervene in mergers without regard to their likely competitive effects will prompt wasteful expenditures by special interests opposing particular transactions, causing a further diminution in economic welfare.

Unfortunately, the congressional critics of this deal are still out there, louder than ever, and, once again, need to be reminded about the dangers of unwarranted antitrust interventions – and the problem with “big is bad” rhetoric.  Scalia Law School Professor (and former Federal Trade Commissioner) Joshua Wright ably deconstructs the problems with the latest Capitol Hill criticisms of this proposed merger, set forth in a June 21 letter to the Justice Department from eleven U.S. Senators (including Elizabeth Warren, Al Franken, and Bernie Sanders).  As Professor Wright explains in a June 26 article published by The Hill:

Over the past several decades, there has been resounding and bipartisan agreement — amongst mainstream antitrust economists, practitioners, enforcement agencies, and even politicians — that while mergers between vertically aligned companies, like AT&T and Time Warner, can in rare circumstances harm competition, they usually make consumers better off. The opposition letter is a call to disrupt that consensus with a “new” view that vertical mergers are presumptively a bad deal for consumers and violate the antitrust laws.

The call for an antitrust revolution with respect to vertical mergers should not go unanswered. Revolution actually overstates things. The “new” antitrust is really a thinly veiled attempt to return to the antitrust approach of the 1960s where everything “big” was bad and virtually all deals, vertical ones included, violated the antitrust laws. That approach gained traction in part because it is easy to develop supporting rhetoric that is inflammatory and easily digestible. . . .

[However,] [a]s a matter of fact, the overwhelming weight of economic analysis and empirical evidence serves as a much-needed dose of cold water for the fiery rhetoric in the opposition letter and the commonly held intuition that all mergers between big firms make consumers worse off. . . .

[C]onsider the conclusion of a widely cited summary of dozens of studies authored by Francine LaFontaine and Margaret Slade, two very well respected industrial organization economists (one who served as director of the U.S. Federal Trade Commission’s bureau of economics during the Obama administration). It found that “consumers are often worse off when governments require vertical separation in markets where firms would have chosen otherwise.” Or consider the conclusion of four former enforcement agency economists reviewing the same body of evidence that “there is a paucity of support for the proposition that vertical restraints [or] vertical integration are likely to harm consumers.”

This evidence by no means suggests vertical mergers are incapable of harming consumers or violating the antitrust laws. The data do suggest an evidence-based antitrust enforcement approach aimed at protecting consumers will not presume that they are harmful without careful, rigorous, and objective analysis. Antitrust analysis is — or at least should be — a fact-specific exercise. Weighing concrete economic evidence is critical when assessing mergers, particularly when assessing vertical mergers where procompetitive virtues are almost always present. . . .

The economic and legal framework for analyzing vertical mergers is well understood by the U.S. Department of Justice’s antitrust division and its staff of expert lawyers and economists. The antitrust division has not hesitated to determine an appropriate remedy in the rare instance where a vertical merger has been found likely to harm competition. The [Senators’] opposition letter is correct that a careful and rigorous analysis of the proposed acquisition is called for — as is the case with all mergers. That review process should, however, be guided by careful and objective analysis and not the fiery political rhetoric [of the Senators’ letter].

Under the leadership of soon-to-be U.S. Assistant Attorney General Makan Delrahim, an experienced antitrust lawyer and antitrust enforcement agency veteran, the Justice Department antitrust division staff will be empowered to conduct precisely that type of analysis and reach a decision that best protects competition and consumers.

Professor Wright’s excellent essay merits being read in full.

  1. Background: The Murr v. Wisconsin Case

On June 23, in a 5-3 decision by Justice Anthony Kennedy (Justices Ruth Bader Ginsburg, Stephen Breyer, Sonia Sotomayor, and Elena Kagan joined; Justice Neil Gorsuch did not participate), the U.S. Supreme Court upheld the Wisconsin Court of Appeals’ ruling that two waterfront lots should be treated as a single unit in a “regulatory takings” case.  The Murrs are siblings who inherited two adjacent waterfront properties from their parents, and they wanted to sell one of the lots and develop the other.  Unfortunately for the Murrs, the lots had been merged under local zoning regulations, and the local county board of adjustment denied the Murrs’ request for a zoning variance to allow their plan to proceed.

The Murrs challenged this in state court, arguing that the state had effectively taken their second property by depriving them of practically all use without paying just compensation, as required by the Takings Clause of the Fifth Amendment.  Affirming a lower state court, the Wisconsin Appeals Court held that the takings analysis properly focused on the two lots together and that, using that framework, the merger regulations did not effectuate a taking.

The U.S. Supreme Court granted the Murrs’ petition for a writ of certiorari.  The Supreme Court found that in determining what the relevant unit of property is, courts must ask whether the owner would have a reasonable expectation that the property would be treated as a single unit or as separate units.  The Court held that in regulatory takings assessments courts must give substantial weight to how state and local law treat the property, evaluate the property’s physical characteristics, and assess the property’s value under the challenged regulation.  The majority concluded that with regard to the Murrs’ property, there was a valid merger under state law, the terrain and shape of the lots made it clear that the merged lot’s use might be limited, and the second lot brought prospective value to the first.  Thus, the lots were properly treated as one parcel, and the Murrs did not suffer a compensable taking, since they were not deprived of all economically beneficial use of the property.

Chief Justice John Roberts dissented (joined by Justices Clarence Thomas and Samuel Alito), noting that the Takings Clause protects private property rights “as state law created and defines them” and the majority’s “malleable definition of ‘private property’…undermines that protection.”  Thus, “[s]tate law defines the boundaries of distinct parcels of land, and those boundaries should determine the ‘private property’ at issue in regulatory takings cases.  Whether a regulation effects a taking of that property is a separate question, one in which common ownership of adjacent property may be taken into account.”

The always thoughtful Justice Thomas penned a separate dissent, suggesting that the Court should reconsider its regulatory takings jurisprudence to see “whether it can be grounded in the original public meaning” of the relevant constitutional provisions.

  2. The Supreme Court Should Reject the Confusing Dichotomy Between Physical and Regulatory Takings and Apply a Simpler Uniform Standard, One that Better Protects the Property Interests Safeguarded by the Fifth Amendment’s Takings Clause

Unfortunately, far from clarifying regulatory takings analysis, the Murr decision further muddies the doctrinal waters in this area.  Justice Kennedy’s majority decision creates a new inherently ambiguous balancing test that gives substantial leeway to localities to adjust regulatory demarcations and property line divisions without paying compensation to harmed property owners.

Although the three-Justice dissent sets forth a more full-throated paean to property rights, it does little to clarify how to determine when a regulatory taking occurs.  Instead, it approvingly cites prior, less-than-helpful Supreme Court pronouncements on the topic:

Governments can infringe private property interests for public use not only through [direct] appropriations, but through regulations as well. . . .  Our regulatory takings decisions . . . have recognized that, “while property may be regulated to a certain extent, if regulation goes too far it will be recognized as a taking.”  This rule strikes a balance between property owners’ rights and the government’s authority to advance the common good.  Owners can rest assured that they will be compensated for particularly onerous regulatory actions, while governments maintain the freedom to adjust the benefits and burdens of property ownership without incurring crippling costs from each alteration. . . .  For the vast array of regulations that [do not deny all economically beneficial or productive use of land, and thus do not automatically constitute a taking,] . . . a flexible approach is more fitting.  The factors to consider are wide ranging, and include the economic impact of the regulation, the owner’s investment-backed expectations, and the character of the government action.  The ultimate question is whether the government’s imposition on a property has forced the owner “to bear public burdens which, in all fairness and justice, should be borne by the public as a whole.”

Such a weighing of “wide-ranging factors” to determine whether or not a taking has occurred is inherently subjective and prone to manipulation by local authorities.  It enables them to marshal a list of Court-approved phrases to explain why a regulation does not go “too far” and take property – even though it may substantially destroy property value.

What is missing from the opinions in Murr is the recognition that any substantial net reduction in the value of a piece of property (subdivided or not) takes a certain property interest.  It is black letter law that there is not a single undivided property right inhering in an item of property, but, rather, multiple property interests – a “bundle of sticks” – that can be taken in whole or in part.  Under current Supreme Court jurisprudence, if the government directly seizes (or physically occupies) a particular stick, compensation is owed for the reduction in overall property value stemming from that stick’s loss.  This is the case of a physical “per se” taking.  But if the government instead enacts a rule preventing that stick from being sold or embellished by the bundle’s owner (think of the Murrs’ plan to sell one plot and develop the other), the owner likewise suffers a similar reduction in overall property value due to the restrictions on that stick.  Under existing Supreme Court case law, however, the loss in value in the second case, unlike the first, may well not be compensable, because the owner has not been deprived “of all beneficial use” of the overall property.  Supreme Court case law indicates that a taking may exist in the second case, depending upon the regulation’s economic impact, its interference with investment-backed expectations, and the character of the government action.  As a practical matter, this infelicitous, indeterminate balancing test very seldom results in a taking being found.  As a result, government is incentivized to invade property rights by using regulations, rather than physical appropriations, thereby undermining the Takings Clause’s requirement that “private property [not] be taken for public use, without just compensation.”

There is a far better way to deal with the problem of government regulatory intrusions on private property rights, one that recognizes that regulatory deprivation of any stick in the bundle should be compensable.  Professor Richard Epstein, distinguished property law scholar extraordinaire, points the way in his very recent article posted at the NYU Journal of Law and Liberty blog 18 days before Murr was handed down.  While Professor Epstein’s brilliant essay merits a close read, his key points are as follows:

I have used the occasion of yet another takings case before the Supreme Court, Murr v. Wisconsin, to comment on the structure of the takings law as it is, and as it ought to be.  On the former count, it is quite clear that the entire structure of the modern law of physical and regulatory takings tends to fixate on the ratio of the value of property rights taken to the value of the full bundle of rights before the regulation was put into place.  But there is no explanation as to why this ratio has any significance in light of the standard rule in physical-takings cases that the fair market value of the rights taken affords the correct measure of compensation so long as the taking is for a public use when no police-power justification is available.  Within this peculiar framework, it is a mistake to make the right of compensation for the loss of development rights under the Wisconsin ordinance turn on the technicalities of the chain of title to a particular plot.  This seems a uniquely inappropriate reason to deny compensation for the loss of development rights.

Any analysis of Murr is inherently messy, and it leaves open the endless challenge of reconciling this case with a wide range of other cases that cannot decide whether two contiguous parcels held by different titles can be a collective denominator in takings cases.  [But] . . . the muddle and confusion of the current law is largely obviated by the simple proposition that, prima facie, the more the government takes, the more it pays.  That rule applies to the outright taking of any given parcel of land or to the taking of a divided interest in property. In all of these cases, the shifts in what is taken do not create odd and indefensible discontinuities, but only raise valuation questions as to the size of the loss, taking into account any return benefits that a property owner may receive when the taking is part of some comprehensive scheme. But those issues are routinely encountered in all physical-takings cases. In all instances, police-power justifications, tied closely to the law of nuisance, may be invoked, and in cases of comprehensive regulation, courts must be alert to determine whether the scheme that takes rights away also affords compensation in-kind from the parallel restrictions on others in the scheme. Under this view, the full range of divided interests, be they air rights, mineral rights, liens, covenants, or easements, are fully compensable. The untenable discontinuities under current doctrine disappear.

Let us hope that in the future, the Supreme Court will take to heart Justice Thomas’s recommendation that the Court return to first principles, and, in so doing, seriously consider the economically and jurisprudentially sophisticated analysis adumbrated in Professor Epstein’s inspired essay.

  1. Background

On June 19, in Matal v. Tam, the U.S. Supreme Court (Justice Gorsuch did not participate in the case) affirmed the Federal Circuit’s ruling that the Lanham Act’s “disparagement clause” is unconstitutional under the First Amendment’s free speech clause.  The Patent and Trademark Office had denied federal trademark registration to the Slants, an Asian-American rock band, relying on the Lanham Act’s prohibition on trademarks “which may disparage . . . persons, living or dead, institutions, beliefs, or national symbols, or bring them into contempt, or disrepute.”  The Court held that trademarks are not government speech, pointing out that the government “does not dream up these marks.”  With the exception of marks scrutinized under the disparagement clause, trademarks are not reviewed for compliance with government policies.  Writing for the Court, Justice Samuel Alito (joined by Chief Justice John Roberts, Justice Clarence Thomas, and Justice Stephen Breyer) found unpersuasive the government’s argument that trademarks are analogous to subsidized speech.  The Alito opinion also determined that it was unnecessary to decide whether trademarks are commercial speech (subject to lesser scrutiny), because the disparagement clause cannot survive the Supreme Court’s test for such speech enunciated in Central Hudson Gas & Electric Corp. v. Public Service Commission (1980).  Justice Anthony Kennedy, joined by Justices Ruth Bader Ginsburg, Sonia Sotomayor, and Elena Kagan, concurred in the judgment.  The Kennedy opinion agreed that the disparagement clause constitutes viewpoint discrimination because it reflects the government’s disapproval of certain speech, and that heightened scrutiny should apply whether or not trademarks are commercial speech.

The Tam decision continues the trend of Supreme Court cases extending First Amendment protection for offensive speech.  Perhaps less likely to be noted, however, is that this decision also promotes free market principles by enhancing the effectiveness of legal protection for a key intellectual property right.  To understand this point, a brief primer on the law and economics of federal trademark protection is in order.

  2. The Law and Economics of Federal Trademark Protection in a Nutshell

A trademark (called a service mark in the case of a service) is an intellectual property right that identifies the source of a particular producer’s goods or services.  Trademarks reduce transaction costs by making it easier for consumers to identify and patronize particular goods and services whose attributes they associate with a mark.  This enhances market efficiency by lowering information costs and by encouraging competing firms to develop unique attributes that they can signal to consumers.

By robustly protecting federally registered trademarks, the federal Lanham Act (see here for Lanham Act trademark infringement remedies) creates strong incentives for each trademark holder to invest in (and promote through advertising and other means) the quality of the trademarked goods or services it produces.  Strong trademark remedies are key because they serve the market-based interest in assuring trademark holders that their individual property rights will be protected.  As one scholar puts it, “[i]t is generally accepted that [federal trademark] infringement actions protect both the goodwill of mark owners and competition by preventing confusion.”

Shielded by firm legal protection, the trademark holder will tend not to allow the quality of its trademark-protected offerings to slip, knowing that consumers will quickly and easily associate the reduced quality with its mark and stop patronizing the trademarked product or service.  Absent strong trademark protection, however, producers of competing products and services will be tempted to “free ride” by using a competing business’s registered trademark without authorization.  This sharply reduces the original trademark owner’s incentive to invest in and continue to promote quality, because it knows that the free riders will seek to attract customers by using the trademark to sell less costly, lower quality fare.  Quality overall suffers, to the detriment of consumers.  Allowing free riding on distinctive trademarks also (and relatedly) sows confusion as to the identity of sellers and as to the attributes covered by a particular trademark, leading to a weakening of the trademark system’s role as a source identifier and as a spur to attribute-based competition.

In short, federal trademark law protection, embodied in the Lanham Act, enhances free market competitive processes by protecting a trademark’s role in identifying suppliers (reducing transaction costs); incentivizing investment in the enhancement and preservation of product quality; and spurring attribute-based competition.

  3. The Demise of Lanham Act Disparagement Enhances Trademark Rights and Promotes Free Market Principles

The disparagement clause denied federal legal protection to a broad class of trademarks, based merely on the highly subjective determination by federal bureaucrats that the marks in question “disparaged” particular individuals or institutions.  This denial undermined private parties’ incentives to invest in “disparaging” marks, and to compete vigorously by signaling to consumers the existence of novel products and services that they might find appealing.

By “constitutionally expunging” the disparagement clause, the Supreme Court in Tam has opened the gateway to more robust competition by spurring the vigorous investment in and promotion of a larger number of marks.  Consumers in the marketplace, not bureaucrats, will decide whether the products or services identified by particular marks are “problematic” and therefore not worthy of patronage.  In other words, by enhancing legal protection for a wider variety of trademarks, the Tam decision has paved the way for the expansion of mutually-beneficial marketplace transactions, to the benefit of consumers and producers alike.

To conclude, in promoting First Amendment free speech interests, the Tam Court also gave a shot in the arm to welfare-enhancing competition in markets for goods and services.  It turns out that competition in the marketplace of ideas goes hand-in-hand with competition in the commercial marketplace.

Too much ink has been spilled in an attempt to gin up antitrust controversies regarding efforts by holders of “standard essential patents” (SEPs, patents covering technologies that are adopted as part of technical standards relied upon by manufacturers) to obtain reasonable returns to their property. Antitrust theories typically revolve around claims that SEP owners engage in monopolistic “hold-up” when they threaten injunctions or seek “excessive” royalties (or other “improperly onerous” terms) from potential licensees in patent licensing negotiations, in violation of pledges (sometimes imposed by standard-setting organizations) to license on “fair, reasonable, and non-discriminatory” (FRAND) terms. As Professors Joshua Wright and Douglas Ginsburg, among others, have explained, contract law, tort law, and patent law are far better placed to handle “FRAND-related” SEP disputes than antitrust law. Adding antitrust to the litigation mix generates unnecessary costs and inefficiently devalues legitimate private property rights.

Concerns by antitrust mavens that other areas of law are insufficient to cope adequately with SEP-FRAND disputes are misplaced. A fascinating draft law review article by Koren Wong-Ervin, Director of the Scalia Law School’s Global Antitrust Institute, and Anne Layne-Farrar, Vice President of Charles River Associates, does an admirable job of summarizing key decisions by U.S. and foreign courts involved in determining FRAND rates in SEP litigation, and of highlighting the key economic concepts underlying these holdings. As explained in the article’s abstract:

In the last several years, courts around the world, including in China, the European Union, India, and the United States, have ruled on appropriate methodologies for calculating either a reasonable royalty rate or reasonable royalty damages on standard-essential patents (SEPs) upon which a patent holder has made an assurance to license on fair, reasonable and nondiscriminatory (FRAND) terms. Included in these decisions are determinations about patent holdup, licensee holdout, the seeking of injunctive relief, royalty stacking, the incremental value rule, reliance on comparable licenses, the appropriate revenue base for royalty calculations, and the use of worldwide portfolio licensing. This article provides an economic and comparative analysis of the case law to date, including the landmark 2013 FRAND-royalty determination issued by the Shenzhen Intermediate People’s Court (and affirmed by the Guangdong Province High People’s Court) in Huawei v. InterDigital; numerous U.S. district court decisions; recent seminal decisions from the United States Court of Appeals for the Federal Circuit in Ericsson v. D-Link and CISCO v. CSIRO; the six recent decisions involving Ericsson issued by the Delhi High Court; the European Court of Justice decision in Huawei v. ZTE; and numerous post- Huawei v. ZTE decisions by European Union member states. While this article focuses on court decisions, discussions of the various agency decisions from around the world are also included throughout.   

To whet the reader’s appetite, key economic policy and factual “takeaways” from the article, which are reflected implicitly in a variety of U.S. and foreign judicial holdings, are as follows:

  • Holdup of any form requires lock-in, i.e., standard-implementing companies with asset-specific investments locked in to the technologies defining the standard or SEP holders locked in to licensing in the context of a standard because of standard-specific research and development (R&D) leading to standard-specific patented technologies.
  • Lock-in is a necessary condition for holdup, but it is not sufficient. For holdup in any guise to actually occur, there also must be an exploitative action taken by the relevant party once lock-in has happened. As a result, the mere fact that a license agreement was signed after a patent was included in a standard is not enough to establish that the patent holder is practicing holdup—there must also be evidence that the SEP holder took advantage of the licensee’s lock-in, for example by charging supra-FRAND royalties that it could not otherwise have charged but for the lock-in.
  • Despite coming after a particular standard is published, the vast majority of SEP licenses are concluded in arm’s length, bilateral negotiations with no allegations of holdup or opportunistic behavior. This follows because market mechanisms impose a number of constraints that militate against acting on the opportunity for holdup.
  • In order to support holdup claims, an expert must establish that the terms and conditions in an SEP licensing agreement generate payments that exceed the value conveyed by the patented technology to the licensee that signed the agreement.
  • The threat of seeking injunctive relief, on its own, cannot lead to holdup unless that threat is both credible and actionable. Indeed, the in terrorem effect of filing for an injunction depends on the likelihood of its being granted. Empirical evidence shows a significant decline in the number of injunctions sought as well as in the actual rate of injunctions granted in the United States following the Supreme Court’s 2006 decision in eBay v. MercExchange LLC, which ended the prior nearly automatic granting of injunctions to patentees and instead required courts to apply a traditional four-part equitable test for granting injunctive relief.
  • The Federal Circuit has recognized that an SEP holder’s ability to seek injunctive relief is an important safeguard to help prevent potential licensee holdout, whereby an SEP infringer unilaterally refuses a FRAND royalty or unreasonably delays negotiations to the same effect.
  • Related to the previous point, seeking an injunction against a licensee who is delaying or not negotiating in good faith need not actually result in an injunction. The fact that a court finds a licensee is holding out and/or not engaging in good faith licensing discussions can be enough to spur a license agreement as opposed to a permanent injunction.
  • FRAND rates should reflect the value of the SEPs at issue, so it makes no economic sense to estimate an aggregate rate for a standard by assuming that all SEP holders would charge the same rate as the one being challenged in the current lawsuit.
  • Moreover, as the U.S. Court of Appeals for the Federal Circuit has held, allegations of “royalty stacking” – the allegedly “excessive” aggregate burden of high licensing fees stemming from multiple patents that cover a single product – should be backed by case-specific evidence.
  • Most importantly, when a judicial FRAND assessment is focused on the value that the SEP portfolio at issue has contributed to the standard and products embodying the standard, the resulting rates and terms will necessarily avoid both patent holdup and royalty stacking.

In sum, the Wong-Ervin and Layne-Farrar article highlights economic insights that are reflected in the sounder judicial opinions dealing with the determination of FRAND royalties.  The article points the way toward methodologies that provide SEP holders sufficient returns on their intellectual property to reward innovation and maintain incentives to invest in technologies that enhance the value of standards.  Read it and learn.

Today, the Senate Committee on Health, Education, Labor, and Pensions (HELP) enters the drug pricing debate with a hearing on “The Cost of Prescription Drugs: How the Drug Delivery System Affects What Patients Pay.”  By questioning the role of the drug delivery system in pricing, the hearing goes beyond the narrower focus of recent hearings that have explored how drug companies set prices.  Instead, today’s hearing will explore how pharmacy benefit managers, insurers, providers, and others influence the amounts that patients pay.

In 2016, net U.S. drug spending increased by 4.8% to $323 billion (after adjusting for rebates and off-invoice discounts).  This rate of growth was roughly half the rates of 2014 and 2015, when net drug spending grew by 10% and 8.9%, respectively.  Yet despite the slowdown in drug spending growth, the public outcry over the cost of prescription drugs continues.

In today’s hearing, there will be testimony both on the various causes of drug spending increases and on various proposals that could reduce the cost of drugs.  Several of the proposals will focus on ways to increase competition in the pharmaceutical industry, and in turn, reduce drug prices.  I have previously explained several ways that the government could reduce prices through enhanced competition, including reducing the backlog of generic drugs awaiting FDA approval and expediting the approval and acceptance of biosimilars.  Other proposals today will likely call for regulatory reforms to enable innovative contractual arrangements that allow for outcome- or indication-based pricing and other novel reimbursement designs.

However, some proposals will undoubtedly return to the familiar call for more government negotiation of drug prices, especially drugs covered under Medicare Part D.  As I’ve discussed in a previous post, in order for government negotiation to significantly lower drug prices, the government must be able to put pressure on drug makers to secure price concessions. This could be achieved if the government could set prices administratively, penalize manufacturers that don’t offer price reductions, or establish a formulary.  Setting prices or penalizing drug makers that don’t reduce prices would produce the same disastrous effects as price controls: drug shortages in certain markets, increased prices for non-Medicare patients, and reduced incentives for innovation. A government formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would mean that many patients could no longer access some of their optimal drugs.

As lawmakers seriously consider changes that would produce these negative consequences, industry would do well to constrain prices voluntarily.  Indeed, in the last year, many drug makers have pledged to limit price increases to keep drug spending under control.  Allergan was first, with its “social contract” introduced last September that promised to keep price increases below 10 percent. Since then, Novo Nordisk, AbbVie, and Takeda have also voluntarily committed to single-digit price increases.

So far, the evidence shows the drug makers are sticking to their promises. Allergan has raised the price of its U.S. branded products by an average of 6.7% in 2017, and no drug’s list price has increased by more than single digits.  In contrast, Pfizer, which has made no such pricing commitment, has raised the prices of many of its drugs by 20%.

If more drug makers brought about meaningful change by committing to voluntary pricing restraints, the industry could prevent the market-distorting consequences of government intervention while helping patients afford the drugs they need.  Moreover, avoiding intrusive government mandates and price controls would preserve the drug innovation that has brought life-saving and life-enhancing drugs to millions of Americans.

R Street’s Sasha Moss recently posted a piece on TechDirt describing the alleged shortcomings of the Register of Copyrights Selection and Accountability Act of 2017 (RCSAA) — proposed legislative adjustments to the Copyright Office, recently passed in the House and introduced in the Senate last month (with identical language).

Many of the article’s points are well taken. Nevertheless, they don’t support the article’s call for the Senate to “jettison [the bill] entirely,” nor the assertion that “[a]s currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.”

R Street’s main complaint with the legislation is that it doesn’t include other proposals in a House Judiciary Committee whitepaper on Copyright Office modernization. But condemning the RCSAA simply for failing to incorporate all conceivable Copyright Office improvements fails to adequately take account of the political realities confronting Congress — in other words, it lets the perfect be the enemy of the good. It also undermines R Street’s own stated preference for Copyright Office modernization effected through “targeted and immediately implementable solutions.”

Everyone — even R Street — acknowledges that we need to modernize the Copyright office. But none of the arguments in favor of a theoretical, “better” bill is undermined or impeded by passing this bill first. While there is certainly more that Congress can do on this front, the RCSAA is a sensible, targeted piece of legislation that begins to build the new foundation for a twenty-first century Copyright Office.

Process over politics

The proposed bill is simple: It would make the Register of Copyrights a nominated and confirmed position. For reasons almost forgotten over the last century and a half, the head of the Copyright Office is currently selected at the sole discretion of the Librarian of Congress. The Copyright Office was placed in the Library merely as a way to grow the Library’s collection with copies of copyrighted works.

More than 100 years later, most everyone acknowledges that the Copyright Office has lagged behind the times. And many think the problem lies with the Office’s placement within the Library, which is plagued with information technology and other problems, and has a distinctly different mission than the Copyright Office. The only real question is what to do about it.

Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization. And yet, somewhat inexplicably, R Street claims that the bill

amounts largely to a partisan battle over who will have the power to select the next Register: [Current Librarian of Congress] Hayden, who was appointed by Barack Obama, or President Donald Trump.

But this is a pretty farfetched characterization.

First, the House passed the bill 378-48, with 145 Democrats joining 233 Republicans in support. That’s more than three-quarters of the Democratic caucus.

Moreover, legislation to make the Register a nominated and confirmed position has been under discussion for more than four years — long before either Dr. Hayden was nominated or anyone knew that Donald Trump (or any Republican at all, for that matter) would be president.

R Street also claims that the legislation

will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, [and that] the nomination process could delay modernization efforts [because of Trump’s] confirmation backlog.

But precisely the opposite seems far more likely — as Sasha herself has previously recognized:

Clarifying the office’s lines of authority does have the benefit of making it more politically accountable…. The [House] bill takes a positive step forward in promoting accountability.

As far as I’m aware, no one claims that Dr. Hayden was “politicized” or that Librarians are vulnerable to capture because they are nominated and confirmed. And a Senate confirmation process will be more transparent than unilateral appointment by the Librarian, and will give the electorate a (nominal) voice in the Register’s selection. Surely unilateral selection of the Register by the Librarian is more susceptible to undue influence.

With respect to the modernization process, we should also not forget that the Copyright Office currently has an Acting Register in Karyn Temple Claggett, who is perfectly capable of moving the modernization process forward. And any limits on her ability to do so would arise from the very tenuousness of her position that the RCSAA is intended to address.

Modernizing the Copyright Office one piece at a time

It’s certainly true, as the article notes, that the legislation doesn’t include a number of other sensible proposals for Copyright Office modernization. In particular, it points to ideas like forming a stakeholder advisory board, creating new chief economist and technologist positions, upgrading the Office’s information technology systems, and creating a small claims court.

To be sure, these could be beneficial reforms, as ICLE (and many others) have noted. But I would take some advice from R Street’s own “pragmatic approach” to promoting efficient government “with the full realization that progress on the ground tends to be made one inch at a time.”

R Street acknowledges that the legislation’s authors have indicated that this is but a first step and that they plan to tackle the other issues in due course. At a time when passage of any legislation on any topic is a challenge, it seems appropriate to defer to those in Congress who affirmatively want more modernization on the question of how big a bill to start with.

In any event, it seems perfectly sensible to address the Register selection process before tackling the other issues, which may require more detailed discussions of policy and cost. And with the Copyright Office currently lacking a permanent Register and discussions underway about finding a new one, addressing any changes Congress deems necessary to the selection process seems like the most pressing issue, if such changes are to be in place before the next pick is made.

Further, because the Register would presumably be deeply involved in the selection and operation of any new advisory board, chief economist and technologist, IT system, or small claims process, Congress can also be forgiven for wanting to address the Register issue first. Moreover, a Register who can be summarily dismissed by the Librarian likely doesn’t have the needed autonomy to fully and effectively implement the other proposals from the whitepaper. Why build a house on a shaky foundation when you can fix the foundation first?

Process over substance

All of which leaves the question why R Street opposes a bill that was passed by a bipartisan supermajority in the House; that effects precisely the kind of targeted, incremental reform that R Street promotes; and that implements a specific reform that R Street favors.

The legislation has widespread support beyond Congress, although the TechDirt piece gives this support short shrift. Instead, it notes that “some” in the content industry support the legislation, but lists only the Motion Picture Association of America. There is a subtle undercurrent of the typical substantive copyright debate, in which “enlightened” thinking on copyright is set against the presumptively malicious overreach of the movie studios. But the piece neglects to mention the support of more than 70 large and small content creators, technology companies, labor unions, and free market and civil rights groups, among others.

Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates. But it’s difficult to escape. Copyright minimalists are skeptical of an effectual Copyright Office if it is more likely to promote policies that reinforce robust copyright, even if they support sensible process reforms and more-accountable government in the abstract. And, to be fair, copyright proponents are thrilled when their substantive positions might be bolstered by promotion of sensible process reforms.

But the truth is that no one really knows how an independent and accountable Copyright Office will act with respect to contentious, substantive issues. Perhaps most likely, increased accountability via nomination and confirmation will introduce more variance in its positions. In other words, on substance, the best guess is that greater Copyright Office accountability and modernization will be a wash — leaving only process itself as a sensible basis on which to assess reform. And on that basis, there is really no reason to oppose this widely supported, incremental step toward a modern US Copyright Office.

I’ll be participating in two excellent antitrust/consumer protection events next week in DC, both of which may be of interest to our readers:

5th Annual Public Policy Conference on the Law & Economics of Privacy and Data Security

hosted by the GMU Law & Economics Center’s Program on Economics & Privacy, in partnership with the Future of Privacy Forum, and the Journal of Law, Economics & Policy.

Conference Description:

Data flows are central to an increasingly large share of the economy. A wide array of products and business models—from the sharing economy and artificial intelligence to autonomous vehicles and embedded medical devices—rely on personal data. Consequently, privacy regulation leaves a large economic footprint. As with any regulatory enterprise, the key to sound data policy is striking a balance between competing interests and norms that leaves consumers better off; finding an approach that addresses privacy concerns, but also supports the benefits of technology is an increasingly complex challenge. Not only is technology continuously advancing, but individual attitudes, expectations, and participation vary greatly. New ideas and approaches to privacy must be identified and developed at the same pace and with the same focus as the technologies they address.

This year’s symposium will include panels on Unfairness under Section 5: Unpacking “Substantial Injury”, Conceptualizing the Benefits and Costs from Data Flows, and The Law and Economics of Data Security.

I will be presenting a draft paper, co-authored with Kristian Stout, on the FTC’s reasonableness standard in data security cases following the Commission decision in LabMD, entitled, When “Reasonable” Isn’t: The FTC’s Standard-less Data Security Standard.

Conference Details:

  • Thursday, June 8, 2017
  • 8:00 am to 3:40 pm
  • at George Mason University, Founders Hall (next door to the Law School)
    • 3351 Fairfax Drive, Arlington, VA 22201

Register here

View the full agenda here

 

The State of Antitrust Enforcement

hosted by the Federalist Society.

Panel Description:

Antitrust policy during much of the Obama Administration was a continuation of the Bush Administration’s minimal involvement in the market. However, at the end of President Obama’s term, there was a significant pivot to investigations and blocks of high profile mergers such as Halliburton-Baker Hughes, Comcast-Time Warner Cable, Staples-Office Depot, Sysco-US Foods, and Aetna-Humana and Anthem-Cigna. How will or should the new Administration analyze proposed mergers, including certain high profile deals like Walgreens-Rite Aid, AT&T-Time Warner, Inc., and DraftKings-FanDuel?

Join us for a lively luncheon panel discussion that will cover these topics and the anticipated future of antitrust enforcement.

Speakers:

  • Albert A. Foer, Founder and Senior Fellow, American Antitrust Institute
  • Professor Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Honorable Joshua D. Wright, Professor of Law, George Mason University School of Law
  • Moderator: Honorable Ronald A. Cass, Dean Emeritus, Boston University School of Law and President, Cass & Associates, PC

Panel Details:

  • Friday, June 09, 2017
  • 12:00 pm to 2:00 pm
  • at the National Press Club, MWL Conference Rooms
    • 529 14th Street, NW, Washington, DC 20045

Register here

Hope to see everyone at both events!