During 2016 it became fashionable in certain circles to decry “lax” merger enforcement and to call for a more aggressive merger enforcement policy (see, for example, the American Antitrust Institute’s September 2016 paper on competition policy, critiqued by me in this blog post).  Interventionists promoting “tougher” merger enforcement have cited Professor John Kwoka’s 2015 book, Mergers, Merger Control, and Remedies in support of the proposition that U.S. antitrust enforcers have been “excessively tolerant” in analyzing proposed mergers.

In that regard, a recent paper by two outstanding FTC economists (Michael Vita and David Osinski) is well worth noting.  It makes a strong (and, in my view, persuasive) case that Kwoka’s research is fatally flawed.  The following excerpt, drawn from the introduction and conclusion of the paper (Mergers, Merger Control, and Remedies:  A Critical Review), merits close attention:

John Kwoka’s recently published Mergers, Merger Control, and Remedies (2015) has received considerable attention from both antitrust practitioners and academics. The book features a meta-analysis of retrospective studies of consummated mergers, joint ventures, and other horizontal arrangements. Based on summary statistics derived from these studies, Kwoka concludes that domestic antitrust agencies are excessively tolerant in their merger enforcement; that merger remedies are ineffective at mitigating market power; and that merger enforcement has become increasingly lax over time. We review both his evidence and his empirical methods, and conclude that serious deficiencies in both undermine the basis for these conclusions. . . .

We sympathize with the goal of using retrospective analyses to assess the performance of the antitrust agencies and to identify possible improvements. Unfortunately, Kwoka has drawn inferences and reached conclusions about contemporary merger enforcement policy that are unjustified by his data and his methods. His critique of negotiated remedies in merger cases relies on a small number of transactions; a close reading reveals that a number of them are silent on the effectiveness of the associated remedies. His data sample lacks diversity, relying heavily on a small number of studies conducted on a small and unrepresentative set of industries. His statistical methodology departs from well-established techniques for conducting meta-analyses, making it impossible for readers to assess the strength of his evidence using standard statistical tools. His conclusions about the growing permissiveness of enforcement policies lack substantiation. Overall, we are unpersuaded that his evidence can support such broad and general policy conclusions.
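
The complaint about departures from "well-established techniques for conducting meta-analyses" can be made concrete. Purely as an illustration (the numbers below are invented, not taken from Kwoka's book or from the Vita-Osinski paper), a standard fixed-effects, inverse-variance meta-analysis of merger price-effect estimates weights each study by its precision and reports a confidence interval, so readers can gauge the strength of the pooled evidence with standard statistical tools:

```python
# Illustrative sketch only: hypothetical merger price-effect estimates
# (percent) and their standard errors; none of these numbers come from Kwoka.
studies = [
    (3.0, 1.2),
    (-0.5, 0.8),
    (7.1, 2.5),
    (1.4, 0.9),
]

# Standard fixed-effects meta-analysis: weight each study by the inverse
# of its sampling variance, then pool.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled price effect: {pooled:+.2f}%  (95% CI {lo:+.2f}% to {hi:+.2f}%)")

# By contrast, an unweighted average of reported estimates ignores each
# study's precision and carries no confidence interval, so readers cannot
# assess the strength of the evidence.
print(f"unweighted average:  {sum(e for e, _ in studies) / len(studies):+.2f}%")
```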

Hopefully, the new leadership at the Federal Trade Commission and at the Justice Department’s Antitrust Division will carefully scrutinize this and other recent research on mergers in devising their merger enforcement policy.  Additional research on the effects of mergers, including an evaluation of their static and dynamic efficiencies, is highly warranted.  Enforcers should not lose sight of the fact that disincentivizing efficient mergers could undermine a vibrant market for corporate control in general, as well as precluding the net creation of economic surplus in specific cases.

More on Error Costs

Josh Wright —  20 January 2009

Speaking of error cost analysis, this paper from a trio of lawyers in the General Counsel's Policy Studies group at the FTC has a section entitled "Error Costs: The False Positive/Negative Debate." A frustration for me in discussing the error cost issue with respect to antitrust policy is that many people do not seem to understand that it is error costs, and not just errors, that we are concerned with. A common refrain is: show me a false positive monopolization case! We bring so few, we don't have to worry about false positives anymore. QED. Of course that is wrong. The social costs of false positives are not about the single business practice that is condemned by an antitrust judgment. The real costs are the chilling of pro-competitive behavior by other firms in response to the expectation of the same type of judgment against them. You cannot just count cases. Those aren't where the costs are! Not to mention that much of the expected liability from pro-competitive behavior has to do with the threat of settlements. Oh, and you want a false positive? Here's one.
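
To put rough numbers on the "costs, not counts" point, consider a toy calculation (every figure below is made up for illustration; nothing is drawn from the FTC paper). Even if only one practice is ever condemned, the bulk of the social cost can come from other firms abandoning similar pro-competitive conduct:

```python
# Toy numbers, purely illustrative: the cost of a false positive is not
# just the surplus lost in the one condemned case.
direct_loss = 50.0        # surplus lost from the single condemned practice ($M)

n_other_firms = 200       # firms using similar pro-competitive practices
share_chilled = 0.25      # share that abandon them, fearing suits or settlements
loss_per_firm = 10.0      # surplus each abandoning firm forgoes ($M)
chilling_loss = n_other_firms * share_chilled * loss_per_firm   # $500M

total_cost = direct_loss + chilling_loss
print(f"visible by counting cases: ${direct_loss:.0f}M "
      f"({direct_loss / total_cost:.0%} of the ${total_cost:.0f}M total)")
# Counting condemned cases captures under a tenth of the error cost in
# this example; the chilling effect accounts for the rest.
```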

Much of the confusion surrounding this basic point of the error-cost approach to antitrust rules can be seen in the hearings on the Section 2 Report, with proponents of more interventionist antitrust policy constantly invoking the mantra that we just don't see that many false positives in the cases as evidence of the lack of error costs. Some go so far as to argue that the social costs of a false negative in the context of monopolization are likely to outweigh the costs of a false positive. While monopolization can have the same economic impact as cartel behavior, of course, economic theory tells us that there are some offsetting forces to correct market failure in the case of false negatives but none for false positives. This was one of Easterbrook's key points in The Limits of Antitrust. The other key point, which is not as well appreciated, is that errors are more likely when we do not have a good basis for identifying anticompetitive conduct. This is nowhere more true than in the case of monopolization. In this supposed era of no monopolization enforcement, empirical examples documenting exclusionary distribution contracts with convincing statistical evidence ought to abound. The literature isn't there. So I'm not sure exactly how one would identify the false negative if one saw it. On the other hand, the bulk of the literature on vertical restraints and single-firm conduct suggests that the conduct is pro-consumer most of the time. Another point in favor of fearing false positives over false negatives.

Anyway, back to the paper. It's generally very good in framing the debate and pointing out what the Section 2 hearing panelists had to say on the issue. It doesn't quite, at least to my tastes, sufficiently separate out the issue of errors versus error costs, nor the theoretical underpinnings of the error-cost approach. Still, it's good, and better than most on this issue. And they do cite Easterbrook's argument with respect to higher social costs for false positives relative to negatives. However, it wraps up the discussion with a bit of a tone of "some say false positives, some say false negatives, let's ignore both of them because there is no evidence." For example, on this issue it includes the following paragraphs:

This debate reflects both the potential promise of decision theory as an analytical framework and its current limits as a calibrated tool. While decision theory provides “a way to organize our thinking about legal standards,” the lack of reliable data limits its ability to identify optimal rules.186 As one panelist observed, “[F]alse positives [and] false negatives” should be considered “on the basis of empirical data and not on theoretical assumptions.”187 Yet the hearings suggested no basis for reliably quantifying the likelihood and magnitude of false positives and negatives under potential liability rules.

When evidence is limited, decision theory primarily provides general directions and broad insights, leading courts and enforcers to identify circumstances in which concerns regarding either false positives or false negatives are likely to be especially significant, and where greater tolerance or heightened vigilance may be appropriate.188 The Supreme Court’s application of decision theory in antitrust cases has reflected these limitations, identifying two areas—predatory pricing and predatory buying—in which concerns regarding false positives warrant the use of a specially-designed test.189

The conclusion here on the error cost debate seems to be that the error-cost framework (and the decision theory that necessarily goes with it) is less useful when we do not have empirical data on the likelihood and magnitude of false positives and negatives. Perhaps no data came out of the hearings on these issues, but there is a substantial economic literature on single-firm practices ranging from RPM to exclusive dealing to tying. See, e.g., this recent survey of that literature by colleagues of these FTC authors over in the Bureau of Economics. There are other surveys as well, e.g., Lafontaine & Slade.

Here’s a quote from the first linked survey piece by Luke Froeb, James Cooper, Dan O’Brien and Michael Vita summing up the state of play:

Empirical analyses of vertical integration and control have failed to find compelling evidence that these practices have harmed competition, and numerous studies find otherwise. Some studies find evidence consistent with both pro- and anticompetitive effects; but virtually no studies can claim to have identified instances where vertical practices were likely to have harmed competition.

The error cost approach allows one to focus on an evidence-based antitrust policy. And frankly, at least in the monopolization area, the empirical basis for a more aggressive policy just isn't there. Assertions about the rarity of judicial errors in favor of plaintiffs in antitrust cases do not change that, given the current state of our empirical knowledge.

I am very pleased to announce the "Merger Analysis in High Technology Markets" conference on behalf of my colleague Tom Hazlett, myself, and the Information Economy Project of the National Center for Technology and Law. The conference will be held at George Mason University School of Law on February 1, 2008 from 8:15 am-2:30 pm. Below is the conference agenda and information about attending. We hope to see you there!

INFORMATION ECONOMY PROJECT
THOMAS W. HAZLETT, DIRECTOR
DREW CLARK, ASSOCIATE DIRECTOR
JOSHUA D. WRIGHT, CONFERENCE ORGANIZER

MERGER ANALYSIS IN HIGH TECHNOLOGY MARKETS

HAZEL HALL * GMU SCHOOL OF LAW * ROOM 121

8:15 WELCOME * THOMAS HAZLETT (GMU)

8:20 MORNING KEYNOTE

8:45 PANEL 1 * MODERATOR: KEN HEYER (DOJ)

HOWARD SHELANSKI (UC BERKELEY)
TECHNOLOGICAL INNOVATION AND MERGER POLICY’S THIRD ERA

MICHAEL BAYE (FTC)
MARKET DEFINITION IN ONLINE MARKETS

RICHARD GILBERT (UC BERKELEY)
SKY WARS: THE ATTEMPTED MERGER OF DISH/DIRECTV

10:00 BREAK

10:15 PANEL 2 * MODERATOR: MICHAEL VITA (FTC)

HAL SINGER (CRITERION) & ROBERT HAHN (AEI)
AN ANTITRUST ANALYSIS OF GOOGLE’S PROPOSED ACQUISITION OF DOUBLECLICK

MARY COLEMAN (LECG)

NICE THEORY BUT WHERE’S THE EVIDENCE?: THE USE OF ECONOMIC EVIDENCE TO EVALUATE VERTICAL AND CONGLOMERATE MERGERS IN THE US AND EU

LUKE FROEB (VANDERBILT)

MERGERS AMONG FIRMS THAT LICENSE COMMON INTELLECTUAL PROPERTY

11:30 BREAK

11:45 PANEL 3 * MODERATOR: JONATHAN BAKER (AMERICAN)

BRUCE ABRAMSON (CRAI)

ARE “ONLINE MARKETS” REAL AND RELEVANT?

THOMAS HAZLETT (GMU)

ANTITRUST IN ORBIT: SOME DYNAMICS OF HORIZONTAL MERGER ANALYSIS IN THE CASE OF XM-SIRIUS

J. GREG SIDAK (GEORGETOWN)

EVALUATING MARKET POWER WITH TWO-SIDED DEMAND AND PREEMPTIVE OFFERS TO DISSIPATE MONOPOLY RENT: LESSONS FOR HIGH-TECHNOLOGY INDUSTRIES FROM THE PROPOSED MERGER OF XM AND SIRIUS SATELLITE RADIO

1:00 LUNCH

LUNCH KEYNOTE

2:30 ADJOURN

VENUE: The George Mason University School of Law, Hazel Hall, 3301 Fairfax Drive, Arlington, VA 22201 (near the Virginia Square-GMU Metro — Orange Line). Admission is free, but seating is limited. To reserve your spot, please email Drew Clark: iep.gmu@gmail.com. Parking (at market rates) is available in the GMU Foundation Bldg., 3434 Washington Boulevard. An Arlington campus map is found here: http://www.gmu.edu/departments/infoservices/ArlingtonMap07.pdf.

Louis De Alessi is Professor Emeritus of Economics at the University of Miami.

Fred and I met when he enrolled in my graduate course in Microeconomic Theory at George Washington University. The class was small, I used a Socratic approach, and Fred — as you would expect — was an active participant, asking good questions and making insightful comments. We began to get to know each other, and toward the end of the course he came to see me. He said that he had been intrigued by the range of interesting problems that economic theory could be used to explore, and was thinking of studying for the Ph.D. We discussed his interests and constraints, and it seemed to me that the Department of Economics at the University of Virginia would be a very good fit. I encouraged him to look into it, he did, and decided to go there. I was delighted. He was an outstanding student of great academic promise, and UVA was an excellent springboard.

In 1975 Henry Manne, a good friend, approached me about joining the Law & Economics Center at the University of Miami as co-director of the John M. Olin Fellowship Program. One of his inducements was that Fred was going to be one of the Fellows in the first class, and that helped me decide to accept the offer. When the LEC moved into its own building, Fred had the office directly across the hallway from mine, and we saw a lot of each other. Fred did extremely well in the LEC courses and seminars as well as in the UM School of Law, where he made law review and graduated at the top of his class.

Fred and I became good friends. He inevitably had a good story to tell – usually the long version – and time spent with him was always entertaining and informative. We had other interests in common besides economics, including history and — during his stint at LEC — sailing. He and three colleagues owned an O’Day Daysailer that they docked at our house and used to explore Biscayne Bay and its keys, and occasionally our boats crossed paths.

Fred was thoughtful and kind. He and Tim Muris used to spend an occasional spring weekend making a tour of baseball spring training camps in Florida. One weekend they took along my son Michael, a 9-year-old baseball fan. Michael had a marvelous time, and my wife Helen and I were deeply appreciative.

After Fred graduated from law school, I followed his career with great interest and I was delighted that he fulfilled his promise. We corresponded, and we met at conferences, meetings, and similar events. We were members of many of the same professional organizations and we had a lot of friends in common, so it was easy and rewarding to keep in touch.  We exchanged reprints and I contributed a chapter to the book on antitrust that he co-edited with William Shughart. For a time I taught a few days a year in a master’s program in Napoli, Italy, and in 1992 I asked Fred to teach a segment on law. We overlapped, and Fred – with his usual flair – opened his first lecture in Italian. The next day he joined my wife and me in Roma, and we spent a day or two wending our way to Torino, where he stayed with us for a couple of days. He was bright, cheerful, witty, and a great companion.

When Fred returned to the University of Miami School of Law, first on a visiting basis and then as a chair professor, I was utterly pleased. We began to meet periodically for mid-morning coffee (Fred always had a banana as well) at a Starbucks with an outdoor patio and spend an hour or two chatting. As usual, he was bubbling with research and writing ideas and travel plans. Life here was good to him for a while. Then a close friend died suddenly of cancer and not too long afterwards his own health began to deteriorate.

The last time we met he did not look well and was quite subdued. I thought it was just a temporary setback, and he certainly seemed to think so. Even after his family moved him to the Washington, DC  area for better care, I expected him to recover, as he had before. His demise was a shock.

Fred had a long and brilliant career. It spanned legal practice with a major law firm, an influential position at the Federal Trade Commission, and a series of distinguished academic appointments. Those in the profession will remember him for his many research contributions as a leading scholar in law and economics. His friends will also remember him for his good humor, warmth, and erudition.  We had been friends for some forty-five years, and I will miss him.

He loved his family, and it is comforting that he was with them at the end.

On July 24, as part of their newly-announced “Better Deal” campaign, congressional Democrats released an antitrust proposal (“Better Deal Antitrust Proposal” or BDAP) entitled “Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power.”  Unfortunately, this antitrust tract is really an “Old Deal” screed that rehashes long-discredited ideas about “bigness is badness” and “corporate abuses,” untethered from serious economic analysis.  (In spirit it echoes the proposal for a renewed emphasis on “fairness” in antitrust made by then Acting Assistant Attorney General Renata Hesse in 2016 – a recommendation that ran counter to sound economics, as I explained in a September 2016 Truth on the Market commentary.)  Implementation of the BDAP’s recommendations would be a “worse deal” for American consumers and for American economic vitality and growth.

The BDAP’s Portrayal of the State of Antitrust Enforcement is Factually Inaccurate, and it Ignores the Real Problems of Crony Capitalism and Regulatory Overreach

The Better Deal Antitrust Proposal begins with the assertion that antitrust has failed in recent decades:

Over the past thirty years, growing corporate influence and consolidation has led to reductions in competition, choice for consumers, and bargaining power for workers.  The extensive concentration of power in the hands of a few corporations hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors.  It means higher prices and less choice for the things the American people buy every day. . .  [This is because] [o]ver the last thirty years, courts and permissive regulators have allowed large companies to get larger, resulting in higher prices and limited consumer choice in daily expenses such as travel, cable, and food and beverages.  And because concentrated market power leads to concentrated political power, these companies deploy armies of lobbyists to increase their stranglehold on Washington.  A Better Deal on competition means that we will revisit our antitrust laws to ensure that the economic freedom of all Americans—consumers, workers, and small businesses—come before big corporations that are getting even bigger.

This statement’s assertions are curious (not to mention problematic) in multiple respects.

First, since Democratic administrations have held the White House for sixteen of the past thirty years, the BDAP appears to acknowledge that Democratic presidents have overseen a failed antitrust policy.

Second, the broad claim that consumers have faced higher prices and limited choice with regard to their daily expenses is baseless.  Indeed, internet commerce and new business models have sharply reduced travel and entertainment costs for the bulk of American consumers, and new “high technology” products such as smartphones and electronic games have been characterized by dramatic improvements in innovation, enhanced variety, and relatively lower costs.  Cable suppliers face vibrant competition from satellite providers, fiber-optic cable suppliers (the major telcos such as Verizon), and new online methods for distributing content.  Consumer price inflation has been extremely low in recent decades, compared to the high-inflation, less innovative environment of the 1960s and 1970s – decades when federal antitrust law was applied much more vigorously.  Thus, the claim that weaker antitrust has denied consumers “economic freedom” is at war with the truth.

Third, the claim that recent decades have seen the creation of “concentrated market power,” safe from antitrust challenge, ignores the fact that, over the last three decades, apolitical government antitrust officials under both Democratic and Republican administrations have applied well-accepted economic tools (wielded by the scores of Ph.D. economists in the Justice Department and Federal Trade Commission) in enforcing the antitrust laws.  Antitrust analysis has used economics to focus on inefficient business conduct that would maintain or increase market power, and large numbers of cartels have been prosecuted and questionable mergers (including a variety of major health care and communications industry mergers) have been successfully challenged.  The alleged growth of “concentrated market power,” untouched by incompetent antitrust enforcers, is a myth.  Furthermore, claims that mere corporate size and “aggregate concentration” are grounds for antitrust concern (“big is bad”) were decisively rejected by empirical economic research published in the 1970s, and are no more convincing today.  (As I pointed out in a January 2017 blog posting at this site, recent research by highly respected economists debunks a few claims that federal antitrust enforcers have been “excessively tolerant” of late in analyzing proposed mergers.)

More interesting is the BDAP’s claim that “armies of [corporate] lobbyists” manage to “increase their stranglehold on Washington.”  This is not an antitrust concern, however, but, rather, a complaint against crony capitalism and overregulation, which became an ever more serious problem under the Obama Administration.  As I explained in my October 2016 critique of the American Antitrust Institute’s September 2016 National Competition Policy Report (a Report very similar in tone to the BDAP), the rapid growth of excessive regulation during the Obama years has diminished competition by creating new regulatory schemes that benefit entrenched and powerful firms (such as Dodd-Frank Act banking rules that impose excessive burdens on smaller banks).  My critique emphasized that, “as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large and wealthy well-connected rent-seekers at the expense of smaller and more dynamic competitors.”  And, more generally, excessive regulatory burdens undermine the competitive process by distorting business decisions in a manner that detracts from competition on the merits.

It follows that, if the BDAP really wanted to challenge “unfair” corporate advantages, it would seek to roll back excessive regulation (see my November 2016 article on Trump Administration competition policy).  Indeed, the Trump Administration’s regulatory reform program (which features agency-specific regulatory reform task forces) seeks to do just that.  Perhaps then the BDAP could be rewritten to focus on endorsing President Trump’s regulatory reform initiative, rather than emphasizing a meritless “big is bad” populist antitrust policy that was consigned to the enforcement dustbin decades ago.

The BDAP’s Specific Proposals Would Harm the Economy and Reduce Consumer Welfare

Unfortunately, the BDAP does more than wax nostalgic about old-time “big is bad” antitrust policy.  It affirmatively recommends policy changes that would harm the economy.

First, the BDAP would require “a broader, longer-term view and strong presumptions that market concentration can result in anticompetitive conduct.”  Specifically, it would create “new standards to limit large mergers that unfairly consolidate corporate power,” including “mergers [that] reduce wages, cut jobs, lower product quality, limit access to services, stifle innovation, or hinder the ability of small businesses and entrepreneurs to compete.”  New standards would also “explicitly consider the ways in which control of consumer data can be used to stifle competition or jeopardize consumer privacy.”

Unlike current merger policy, which evaluates likely competitive effects, centered on price and quality, estimated in economically relevant markets, these new standards are open-ended.  They could justify challenges based on such a wide variety of factors that they would incentivize direct competitors not to merge, even in cases where the proposed merged entity would prove more efficient and able to enhance quality or innovation.  Certain less efficient competitors – say, small businesses – could argue that they would be driven out of business, or that some jobs in the industry would disappear, in order to prompt government challenges.  But such challenges would tend to undermine innovation and business improvements, and the inevitable redistribution of assets to higher-valued uses that is a key benefit of corporate reorganizations and acquisitions.  (Merger activity might shift instead, for example, toward inefficient conglomerate acquisitions among companies in unrelated industries, of the sort incentivized by the overly strict 1960s rules that prohibited mergers among direct competitors.)  Such a change would represent a retreat from economic common sense, and be at odds with the consensus, economically sound merger enforcement guidance that U.S. enforcers have long recommended other countries adopt.  Furthermore, questions of consumer data and privacy are more appropriately dealt with as consumer protection questions, which the Federal Trade Commission has handled successfully for years.

Second, the BDAP would require “frequent, independent [after-the-fact] reviews of mergers” and require regulators “to take corrective measures if they find abusive monopolistic conditions where previously approved [consent decree] measures fail to make good on their intended outcomes.”

While high profile mergers subject to significant divestiture or other remedial requirements have in appropriate circumstances included monitoring requirements, the tone of this recommendation is to require that far more mergers be subjected to detailed and ongoing post-acquisition reviews.  The cost of such monitoring is substantial, however, and routine reliance on it (backed by the threat of additional enforcement actions based merely on changing economic conditions) could create excessive caution in the post-merger management of newly-consolidated enterprises.  Indeed, potential merged parties might decide in close cases that this sort of oversight is not worth accepting, and therefore call off potentially efficient transactions that would have enhanced economic welfare.  (The reality of enforcement error cost, and the possibility of misdiagnosis of post-merger competitive conditions, is not acknowledged by the BDAP.)

Third, a newly created “competition advocate” independent of the existing federal antitrust enforcers would be empowered to publicly recommend investigations, with the enforcers required to justify publicly why they chose not to pursue a particular recommended investigation.  The advocate would ensure that antitrust enforcers are held “accountable,” assure that complaints about “market exploitation and anticompetitive conduct” are heard, and publish data on “concentration and abuses of economic power” with demographic breakdowns.

This third proposal is particularly egregious.  It is at odds with the long tradition of prosecutorial discretion that has been enjoyed by the federal antitrust enforcers (and law enforcers in general).  It would also empower a special interest intervenor to promote the complaints of interest groups that object to efficiency-seeking business conduct, thereby undermining the careful economic and legal analysis that is consistently employed by the expert antitrust agencies.  The references to “concentration” and “economic power” clarify that the “advocate” would have an untrammeled ability to highlight non-economic objections to transactions raised by inefficient competitors, jealous rivals, or self-styled populists who object to excessive “bigness.”  This would strike at the heart of our competitive process, which presumes that private parties will be allowed to fulfill their own goals, free from government micromanagement, absent indications of a clear and well-defined violation of law.  In sum, the “competition advocate” is better viewed as a “special interest” advocate empowered to ignore normal legal constraints and unjustifiably interfere in business transactions.  If empowered to operate freely, such an advocate (better viewed as an albatross) would undoubtedly chill a wide variety of business arrangements, to the detriment of consumers and economic innovation.

Finally, the BDAP refers to a variety of ills that are said to affect specific named industries, in particular airlines, cable/telecom, beer, food prices, and eyeglasses.  Airlines are subject to a variety of capacity limitations (limitations on landing slots and the size/number of airports) and regulatory constraints (prohibitions on foreign entry or investment) that may affect competitive conditions, but airlines mergers are closely reviewed by the Justice Department.  Cable and telecom companies face a variety of federal, state, and local regulations, and their mergers also are closely scrutinized.  The BDAP’s reference to the proposed AT&T/Time Warner merger ignores the potential efficiencies of this “vertical” arrangement involving complementary assets (see my coauthored commentary here), and resorts to unsupported claims about wrongful “discrimination” by “behemoths” – issues that in any event are examined in antitrust merger reviews.  Unsupported references to harm to competition and consumer choice are thrown out in the references to beer and agrochemical mergers, which also receive close economically-focused merger scrutiny under existing law.  Concerns raised about the price of eyeglasses ignore the role of potentially anticompetitive regulation – that is, bad government – in harming consumer welfare in this sector.  In short, the alleged competitive “problems” the BDAP raises with respect to particular industries are no more compelling than the rest of its analysis.  The Justice Department and Federal Trade Commission are hard at work applying sound economics to these sectors.  They should be left to do their jobs, and the BDAP’s industry-specific commentary (sadly, like the rest of its commentary) should be accorded no weight.

Conclusion

Congressional Democrats would be well-advised to ditch their efforts to resurrect the counterproductive antitrust policy from days of yore, and instead focus on real economic problems, such as excessive and inappropriate government regulation, as well as weak protection for U.S. intellectual property rights, here and abroad (see here, for example).  Such a change in emphasis would redound to the benefit of American consumers and producers.

Today I published an article in The Daily Signal bemoaning the European Commission’s June 27 decision to fine Google $2.7 billion for engaging in procompetitive, consumer welfare-enhancing conduct.  The article is reproduced below (internal hyperlinks omitted), in italics:

On June 27, the European Commission—Europe’s antitrust enforcer—fined Google over $2.7 billion for a supposed violation of European antitrust law that bestowed benefits, not harm, on consumers.

And that’s just for starters. The commission is vigorously pursuing other antitrust investigations of Google that could lead to the imposition of billions of dollars in additional fines by European bureaucrats.

The legal outlook for Google is cloudy at best. Although the commission’s decisions can be appealed to European courts, European Commission bureaucrats have a generally good track record in winning before those tribunals.

But the problem is even bigger than that.

Recently, questionable antitrust probes have grown like Topsy around the world, many of them aimed at America’s most creative high-tech firms. Beneficial innovations have become legal nightmares—good for defense lawyers, but bad for free market competition and the health of the American economy.

What great crime did Google commit to merit the huge European Commission fine?

The commission claims that Google favored its own comparison shopping service over others in displaying Google search results.

Never mind that consumers apparently like the shopping-related service links they find on Google (after all, they keep using its search engine in droves), or can patronize any other search engine or specialized comparison shopping service that can be found with a few clicks of the mouse.

This is akin to saying that Kroger or Walmart harm competition when they give favorable shelf space displays to their house brands. That’s ridiculous.

Somehow, such “favoritism” does not prevent consumers from flocking to those successful chains, or patronizing their competitors if they so choose. It is the essence of vigorous free market rivalry.  

The commission’s theory of anticompetitive behavior doesn’t hold water, as I explained in an earlier article. The Federal Trade Commission investigated Google’s search engine practices several years ago and found no evidence that alleged Google search engine display bias harmed consumers.

To the contrary, as former FTC Commissioner (and leading antitrust expert) Josh Wright has pointed out, and as the FTC found:

Google likely benefited consumers by prominently displaying its vertical content on its search results page. The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior—so-called ‘click through’ data—which showed how consumers reacted to Google’s promotion of its vertical properties.

In short, Google’s search policies benefit consumers. Antitrust is properly concerned with challenging business practices that harm consumer welfare and the overall competitive process, not with propping up particular competitors.

Absent a showing of actual harm to consumers, government antitrust cops—whether in Europe, the U.S., or elsewhere—should butt out.

Unfortunately, the European Commission shows no sign of heeding this commonsense advice. The Europeans have also charged Google with antitrust violations—with multibillion-dollar fines in the offing—based on the company’s promotion of its Android mobile operating service and its AdSense advertising service.

(That’s not all—other European Commission Google inquiries are also pending.)

As in the shopping services case, these investigations appear to be woefully short on evidence of harm to competition and consumer welfare.

The bigger question raised by the Google matters is the ability of any highly successful individual competitor to efficiently promote and favor its own offerings—something that has long been understood by American enforcers to be part and parcel of free-market competition.

As law professor Michael Carrier points out, any changes the EU forces on Google’s business model “could eventually apply to any way that Amazon, Facebook or anyone else offers to search for products or services.”

This is troublesome. Successful American information-age companies have already run afoul of the commission’s regulatory cops.

Microsoft and Intel absorbed multibillion-dollar European Commission antitrust fines in recent years, based on other theories of competitive harm. Amazon, Facebook, and Apple, among others, have faced European probes of their competitive practices and “privacy policies”—the terms under which they use or share sensitive information from consumers.

Often, these probes have been supported by less successful rivals who would rather rely on government intervention than competition on the merits.

Of course, being large and innovative is not a legal shield. Market-leading companies merit being investigated for actions that are truly harmful. The law applies equally to everyone.

But antitrust probes of efficient practices that confer great benefits on consumers (think how much the Google search engine makes it easier and cheaper to buy desired products and services and obtain useful information), based merely on the theory that some rivals may lose business, do not advance the free market. They retard it.

Who loses when zealous bureaucrats target efficient business practices by large, highly successful firms, as in the case of the European Commission’s Google probes and related investigations? The general public.

“Platform firms” like Google and Amazon that bring together consumers and other businesses will invest less in improving their search engines and other consumer-friendly features, for fear of being accused of undermining less successful competitors.

As a result, the supply of beneficial innovations will slow, and consumers will be less well off.

What’s more, competition will weaken, as the incentive to innovate to compete effectively with market leaders will be reduced. Regulation and government favor will substitute for welfare-enhancing improvement in goods, services, and platform quality. Economic vitality will inevitably be reduced, to the public’s detriment.

Europe is not the only place where American market leaders face unwarranted antitrust challenges.

For example, Qualcomm and InterDigital, U.S. firms that are leaders in smartphone communications technologies that power mobile interconnections, have faced large antitrust fines for, in essence, “charging too much” for licenses to their patented technologies.

South Korea has also purported to impose a “global remedy” that applies its artificially low royalty rates to all of Qualcomm’s licensing agreements around the world.

(All this is part and parcel of foreign government attacks on American intellectual property—patents, copyrights, trademarks, and trade secrets—that cost U.S. innovators hundreds of billions of dollars a year.)

A lack of basic procedural fairness in certain foreign antitrust proceedings has also bedeviled American companies, preventing them from being able to defend their conduct. Foreign antitrust has sometimes been perverted into a form of “industrial policy” that discriminates against American companies in favor of domestic businesses.

What can be done to confront these problems?

In 2016, the U.S. Chamber of Commerce convened a group of trade and antitrust experts to examine the problem. In March 2017, the chamber released a report by the experts describing the nature of the problem and making specific recommendations for U.S. government action to deal with it.

Specifically, the experts urged that a White House-led interagency task force be set up to develop a strategy for dealing with unwarranted antitrust attacks on American businesses—including both misapplication of legal rules and violations of due process.

The report also called for the U.S. government to work through existing international institutions and trade negotiations to promote a convergence toward sounder antitrust practices worldwide.

The Trump administration should take heed of the experts’ report and act decisively to combat harmful foreign antitrust distortions. Antitrust policy worldwide should focus on helping the competitive process work more efficiently, not on distorting it by shackling successful innovators.

One more point, not mentioned in the article, merits being stressed.  Although the United States Government cannot control a foreign sovereign’s application of its competition law, it can engage in rhetoric and public advocacy aimed at convincing that sovereign to apply its law in a manner that promotes consumer welfare, competition on the merits, and economic efficiency.  Regrettably, the Obama Administration, particularly in the latter part of its second term, did a miserable job in promoting an empirical approach to antitrust enforcement, centered on hard facts rather than mere speculative theories of harm.  In particular, certain political appointees lent lip service or silent acquiescence to inappropriate antitrust attacks on the unilateral exercise of intellectual property rights.  In addition, those senior officials made statements that could have been interpreted as supportive of populist “big is bad” conceptions of antitrust that had been discredited decades ago – through sound scholarship, by U.S. enforcement policies, and in judicial decisions.  The Trump Administration will have an opportunity to correct those errors, and to restore U.S. policy leadership in support of sound, pro-free market antitrust principles.  Let us hope that it does so, and soon.

On Thursday, March 30, Friday March 31, and Monday April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to entry for new platforms (Moss); (3) exploiting eventual network effects that may result from the shift toward data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the patent portfolios of the merging companies would offer benefits beyond those arising from cross-licensing, and would serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

(i) There is no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will resist to invalidity proceedings in court; (iii) little safety to competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!

The Senate should not reconfirm Jessica Rosenworcel to the Federal Communications Commission (FCC), in order to allow the Trump Administration to usher in needed reforms in the critical area of communications policy.

As documented by the Free State Foundation (FSF) and other supporters of free markets, the Obama Administration’s FCC has done a dismal job in overseeing communications regulation, both as a matter of law and economics (see, for example, the abuses documented in FSF publications).  The FCC’s proposal to impose common carrier-like regulations on the Internet is just one example of what constitutes not merely flawed policy, but a failure to adhere to the rule of law, as I explain in an October 2016 Heritage Foundation Legal Memorandum (citations omitted):

[T]he rule of law involves “a system of binding rules” that have been adopted and applied by a valid government authority and that embody “clarity, predictability, and equal applicability.”

 Practices employed by government agencies that undermine the rule of law ignore a fundamental duty that the government owes its citizens and thereby undermine America’s constitutional system. Federal courts, however, will not review a federal administrative action unless an actual litigated “case or controversy” is presented to them, and they generally are reluctant to invoke constitutional “first principles” to strike down federal agency initiatives. Judicial intervention is thus a poor check on an agency’s tendency to flout the rule of law—or merely give it lip service—by acting in an unpredictable and inequitable manner.

It follows, therefore, that close scrutiny of federal administrative agencies’ activities is particularly important in helping to achieve public accountability for an agency’s failure to honor the rule of law standard. Applying such scrutiny to the FCC reveals that it does a poor job of adhering to rule of law principles. Accordingly, specific legislative reforms to rectify that shortcoming warrant serious consideration by Congress. . . .

The FCC has fallen short in meeting rule of law standards, both in its procedural practices and in various substantive actions that it has taken. . . .

[FCC procedural failures include] delays, lack of transparency, and inefficiencies in agency proceedings (including “voting on secret texts and delaying the publication of orders”); excessive cost burdens on regulated parties; outdated rules; and problems in agency interactions with the public. . . .

Substantive agency actions also undermine the rule of law if they fall outside the scope of the agency’s constitutional, statutory, or regulatory authority.  By their nature, such actions indicate that an agency does not view itself as bound by the law and is unwilling to clarify how the government’s coercive powers will be applied.  Significant FCC initiatives in recent years have involved such derogations from rule of law principles and have proved to be far more serious than mere procedural imperfections. 

Specific FCC abuses of the rule of law, documented in my Heritage Legal Memorandum, include the imposition of arbitrary conditions on merging parties having nothing to do with the actual effects of a merger.  They also involve regulatory initiatives that exceed the FCC’s statutory authority, such as (1) an attempt to preempt state regulation of municipal broadband (struck down in court), (2) the “Open Internet Order,” which seeks to regulate the Internet under the guise of “net neutrality,” (3) the unauthorized extension of FCC rules covering joint sales agreements by broadcast stations (struck down in court), and (4) the unauthorized regulation of video “set top box” equipment.

The FCC has also brought a variety of public enforcement actions against private parties that could not reasonably have known that they were violating a legal norm as defined by the FCC, thereby violating principles of clarity, predictability, and equal treatment in law enforcement.

Key FCC actions that flout the rule of law have been enacted by partisan three-to-two FCC votes, with the three Democratic Commissioners (Chairman Tom Wheeler, Mignon Clyburn, and Jessica Rosenworcel) voting in favor of such measures and the two Republican Commissioners (Ajit Pai and Michael O’Rielly) voting in opposition.  Without Commissioner Rosenworcel’s votes, the FCC’s ability to undermine the rule of law in those instances would have been thwarted.

Commissioner Rosenworcel’s term expired in June 2015, but she remained on the Commission.  In 2015 President Obama nominated her for a new five-year term as FCC Commissioner, and, as explained by the Senate Commerce Committee, “[s]he may remain in her current role as commissioner until December 31, 2016 while awaiting Senate confirmation for a second term.”

Rosenworcel’s renomination has not yet been taken up by the Senate, giving President-Elect Trump the opportunity to select a new Commissioner (and Chairman) who can steer the FCC in a market-oriented direction that respects the rule of law.  On December 2nd, however, it was reported that “[Senate Minority Leader] Harry Reid and President Obama are circulating a petition to remove the hold on FCC Commissioner Jessica Rosenworcel so that she can be reconfirmed before Congress recesses next week.”

This is troublesome news.  Confirmation of Rosenworcel would deny the new President the ability to reshape communications policy, with serious negative effects on Internet freedom and innovation in the economically vital communications sector.  Senate Republicans should stand firm and deny confirmation to Ms. Rosenworcel, in order to ensure that the new President has the opportunity to reform the FCC.

Guest post by Steve Salop responding to Dan’s latest post on the appropriate liability rule for loyalty discounts. Other posts in the series: Steve, Dan, and Thom.

(1) Dan says that the price-cost test should apply to “customer foreclosure” allegations. One of my key points was that many loyalty discount claims involve “input foreclosure” or “raising rivals’ costs” effects, not plain-vanilla customer foreclosure. In addition, loyalty agreements with distributors often involve input foreclosure because “distribution services” are an input and a rebate might be characterized as a reward payment for the (near-) exclusivity. From his silence on the issue, I am inclined to presume that Dan would agree that the price-cost test should not be applied to such allegations. Dan, what do you intend?

(2) Dan says that he agrees that the price-cost test should not be required for “partial exclusivity contracts” that involve contractual commitments to limit purchases from rivals. He says that the price-cost test should apply only where the “claimed exclusionary mechanism is the price term.” This distinction is peculiar because the economic analysis is the same in both situations. In addition, even such voluntary exclusivity flowing from a price term can be anticompetitive, even if the price-cost test is passed. There are numerous reasons for this, as I explained in my original post. (I also discuss these issues in my contribution to Robert Pitofsky’s volume, How the Chicago School Overshot the Mark. See also articles by Eric Rasmussen et al., Michael Whinston, and others.)

(3) Consider the following numerical examples that concretely illustrate the economic forces at work when there is competition for distribution, even in the absence of contractual commitments.

(a) Suppose that a monopolist is earning profits of $200. If there is successful entry by an equally efficient entrant, each of the two firms will earn duopoly profits of $70. (The duopoly profits are less than monopoly profits because of the price competition.) Suppose that the entrant needs to obtain just non-exclusive distribution from a particular retailer in order to be viable. In this case, the entrant would be willing to bid up to $70 per period for the non-exclusive distribution. (In price terms, this would be a payment that led to the entrant’s costs equaling its price.) But the monopolist would be willing to bid up to $130 for an exclusive (i.e., the difference between its monopoly and duopoly profits), in order to prevent the entrant from surviving. Thus, the monopolist would win the bidding, say for a price of $71. The monopolist would easily pass the price-cost test. Why is the monopolist systematically able to outbid the entrant? This fundamental asymmetry does not arise because the entrant is less efficient. Instead, the answer is that the monopolist is bidding to maintain its monopoly power, whereas the entrant can only obtain duopoly profits. The monopolist is “purchasing market power” in addition to distribution, whereas the entrant is only purchasing distribution.
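
The arithmetic in example (a) is compact enough to restate in a few lines; this sketch simply recomputes the willingness-to-pay figures from the text:

```python
# Example (a): who outbids whom for the pivotal retailer? Figures as in
# the text: $200 monopoly profit, $70 per-firm duopoly profit.
MONOPOLY_PROFIT = 200.0
DUOPOLY_PROFIT = 70.0

entrant_wtp = DUOPOLY_PROFIT                       # all entry can ever earn it
incumbent_wtp = MONOPOLY_PROFIT - DUOPOLY_PROFIT   # rents exclusivity preserves

print(f"entrant bids up to ${entrant_wtp:.0f}; "
      f"incumbent bids up to ${incumbent_wtp:.0f}")
assert incumbent_wtp > entrant_wtp
# The incumbent wins (say at $71) even against an equally efficient
# entrant, because it bids to keep monopoly rents while the entrant can
# bid only its duopoly profit; a $71 payment passes any price-cost test.
```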

(b) Or, consider this interesting variant with sequential bidding for multiple distributors. Suppose there are two retailers and the entrant needs to get non-exclusive distribution at both in order to be viable. Suppose that the negotiations at the two stores are sequential. In this scenario, the entrant would have no incentive even to try to outbid the monopolist. This is easy to see. Suppose that the entrant wins the competition to get into the first store by paying the amount $B1. In bidding for distribution at the second retailer, the monopolist would be willing to bid up to $130, as above. At this second store, the entrant would not be willing to pay more than $70 (or $70 – $B1, if it ignores the fact that $B1 is already a sunk cost). So the monopolist will win the exclusive at the second retailer and the entry will fail. Looking back to the negotiations at the first store, the entrant would have had no incentive to throw away money by paying any positive amount $B1 to get distribution at the first store. This is because it rationally would anticipate that it is inevitable that it will fail to gain distribution at the second retailer. Thus, the monopolist will be able to gain the exclusive at both stores for next to nothing. It clearly will pass the price-cost test even as it maintains its monopoly, merely by instituting the competition for distribution.

(c) If the entrant only needs to gain non-exclusive distribution at either one of the two stores, then the situation can be reversed and the entry can succeed.   The monopolist clearly would not be willing to pay $71 each at both stores (equal to a total payment of $142) in order to deter the entry and protect its “incremental” monopoly profits (equal to only $130 in the example).  Therefore, when the entrant bids for distribution at the first store, the monopolist might as well let the entrant win, which means that the entrant can gain access to both stores for next to nothing.   The entry succeeds, but again, the price-cost test would not be relevant to the analysis.
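
Examples (b) and (c) turn on the same comparison, scaled by how many stores the monopolist must lock up. Here is a minimal backward-induction sketch (profit figures as in the text; the $71 per-store blocking price is the "just outbid the entrant" amount):

```python
# Examples (b) and (c): sequential bidding over two retailers. The
# entrant needs either both stores (b) or just one of them (c).
MONOPOLY, DUOPOLY = 200.0, 70.0
INCUMBENT_STAKE = MONOPOLY - DUOPOLY   # $130 of rents at risk from entry
BLOCKING_BID = DUOPOLY + 1.0           # $71: just above the entrant's max bid

def entry_succeeds(stores_needed: int, n_stores: int = 2) -> bool:
    # To deny the entrant, the incumbent must win enough exclusives that
    # fewer than `stores_needed` stores remain open to the entrant.
    exclusives_required = n_stores - stores_needed + 1
    cost_to_block = exclusives_required * BLOCKING_BID
    return cost_to_block > INCUMBENT_STAKE   # blocking unprofitable => entry

print(entry_succeeds(stores_needed=2))  # False: one $71 exclusive < $130 stake
print(entry_succeeds(stores_needed=1))  # True: $142 to block both > $130 stake
# In case (b) the full backward induction is even starker: anticipating
# defeat at the second store, the entrant never bids at the first, so the
# incumbent locks up both stores for next to nothing.
```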

(d) There also can be elements of a “self-fulfilling equilibrium” because of lack of coordination by the distributors.  Suppose that there are 10 retailers and the entrant only needs to get distribution at 5 of them.   Suppose that the entrant offers to pay a $14 rebate for non-exclusive distribution, and it also will offer $14 again in the next period, if its entry succeeds in the first period.   Suppose the monopolist offers a lower rebate for an exclusive that will continue into the second period.   Suppose that each of the 10 retailers anticipates that the other retailers will accept the monopolist’s lower offer out of fear that the entrant will be unable to get 4 other retailers to accept its offer.  In that situation, the entry will fail.  This is not because the entrant is less efficient.  Instead, it is because the entrant faces a classic coordination problem.  If the retailers behave independently, the retailers’ fear of the entrant’s failure can be a self-fulfilling prophecy.   Again, the monopolist will easily pass the price-cost test.
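
Example (d) can be stated as a simple best-response check for a single retailer, given its belief about what the other nine will do. The entrant's $14 rebate is from the text; the incumbent's "lower rebate" is an assumed figure ($5) used purely for illustration:

```python
# Example (d): coordination among 10 retailers when the entrant needs 5.
ENTRANT_REBATE = 14.0    # per period, valuable only if entry succeeds
INCUMBENT_REBATE = 5.0   # assumed lower exclusive rebate, paid regardless
NEEDED = 5               # acceptances the entrant needs (out of 10)
PERIODS = 2              # both offers run for two periods

def accept_entrant(expected_other_acceptances: int) -> bool:
    # Entry succeeds only if this retailer plus enough others sign on.
    succeeds = expected_other_acceptances + 1 >= NEEDED
    entrant_value = PERIODS * ENTRANT_REBATE if succeeds else 0.0
    return entrant_value > PERIODS * INCUMBENT_REBATE

print(accept_entrant(9))  # True: if others accept, accepting pays ($28 > $10)
print(accept_entrant(0))  # False: if no one else accepts, entry fails anyway
# Both "all accept" and "none accept" are self-fulfilling: the better
# offer can lose purely through pessimistic expectations, and the
# incumbent again sails through any price-cost test.
```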

(4) Dan makes the point that the price-cost test does not require adoption of an EEC (equally efficient competitor) antitrust standard, i.e., one under which only harm to EECs is relevant to antitrust. I certainly agree that the price-cost screen does not necessarily rely on the EEC standard. The price-cost test is better framed as a measure of “profit sacrifice,” and EEC is simply a misleading way to express the test. For example, I expect that Dan agrees that predatory pricing law uses the price-cost test as a measure of “profit sacrifice,” not as an assumption that only EECs matter.

(5) But I was surprised that Dan also says that the EEC theory “has merit.” In my view, the EEC standard has no merit in rigorous antitrust analysis. The example in my previous post illustrates why: raising the costs of a less efficient rival, and possibly deterring its entry, harms consumers and reduces output.

(6) Dan says that the “disloyalty penalty” price theory has problems, “including the empirical one that it doesn’t fit the pattern of almost any of the recent loyalty discount cases.” The validity of Dan’s empirical claim is not obvious to me. To evaluate whether there is a price penalty, you would need to know more than the path of prices over time; you also would need to know what the price would be in the “but-for world.” For example, suppose that in the absence of the loyalty discount, the incumbent would have reduced its price to $90. This observation has two important implications. First, it is a reason why it is not clear that loyalty discounts are “presumptively beneficial.” Second, it is another reason why a price-cost test is not a good “screen” in loyalty discount cases: implementing the screen involves evaluating what prices would have been absent the conduct. But once the competitive effects on consumers are known, what additional value does the screen provide?
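
A trivial numerical sketch makes the point. The $90 but-for price is the hypothetical above; the other two prices are assumed purely for illustration:

```python
# Whether a loyalty price is a "penalty" depends on the counterfactual,
# not on the observed price path.  The $90 but-for price is the post's
# hypothetical; the other two prices are illustrative assumptions.

price_before = 100        # assumed pre-loyalty-program price
price_with_loyalty = 95   # assumed observed "discounted" price
but_for_price = 90        # price absent the loyalty discount (post's hypothetical)

observed_change = price_with_loyalty - price_before  # -5: looks like a discount
vs_but_for = price_with_loyalty - but_for_price      # +5: in fact a penalty

print(f"observed price change: {observed_change:+}")
print(f"relative to the but-for world: {vs_but_for:+}")
```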

(7) As to the question of whether Josh’s speech on loyalty discounts (and this issue of penalty prices) is inconsistent with their joint article on bundled discounts, I will leave that one for Josh and Dan to sort out, at least for the moment.   I certainly will concede the point that Wright is not always right.

(8) Dan began to suggest that the penalty price theory has a “problem of basic economics” in that the penalty price is not short-run profit-maximizing. Dan subsequently seemed to withdraw this criticism, noting that one could characterize the loyalty restriction as not profit-maximizing in the same way. In any event, it is not a “problem” with the theory. The firm is willing to sacrifice profits because it gains the benefit of deterring entry. Indeed, it may not even end up sacrificing profits: the threat of the penalty price for non-exclusivity may be sufficient. If the distributors succumb to the threat and buy exclusively from the incumbent, the incumbent never needs actually to charge them the penalty price.

Earlier this month, Representatives Peter DeFazio and Jason Chaffetz picked up the gauntlet thrown down by President Obama in his February 14 comments at a Google-sponsored Internet Q&A on Google+, where he said that “our efforts at patent reform only went about halfway to where we need to go” and that he would like “to see if we can build some additional consensus on smarter patent laws.” So, on March 1, Reps. DeFazio and Chaffetz introduced the Saving High-tech Innovators from Egregious Legal Disputes (SHIELD) Act, which creates a “losing plaintiff patent-owner pays” litigation system for a single type of patent owner: patent licensing companies that purchase and license patents in the marketplace (and that sue infringers when infringers refuse their requests to license). To Google, to Representative DeFazio, and to others, these patent licensing companies are “patent trolls,” destroyers of all things good, and the SHIELD Act will save us all from these dastardly “trolls” (is a troll anything but dastardly?).

As I and other scholars have pointed out, the “patent troll” moniker is really just a rhetorical epithet that lacks even an agreed-upon definition. The term is used loosely enough that it sometimes covers and sometimes excludes universities, Thomas Edison, Elias Howe (the inventor of the lockstitch in 1843), Charles Goodyear (the inventor of vulcanized rubber in 1839), and even companies like IBM. How can we be expected to have a reasonable discussion about patent policy when our basic terms of public discourse shift in meaning from blog to blog, article to article, speaker to speaker? The same is true of the newer term, “Patent Assertion Entities,” which sounds more neutral but likewise lacks any objective definition or consistent usage.

Setting aside this basic problem of terminology for the moment, the SHIELD Act is anything but a “smarter patent law” (to quote President Obama). Some patent scholars, like Michael Risch, have begun to point out serious problems with the SHIELD Act, such as its selectively discriminatory treatment of certain types of patent-owners. Moreover, as Professor Risch ably identifies, this legislation was drafted so cleverly to cover only a narrow class of patent-owners that it ended up being too clever. Unlike the previous version introduced last year, the 2013 SHIELD Act does not even apply to the flavor-of-the-day outrage over patent licensing companies: the owner of the podcast patent. (Although you wouldn’t know this if you read supporters of the SHIELD Act like the EFF, who falsely claim that this law will stop patent-owners like the podcast patent-owning company.)

There are many things wrong with the SHIELD Act, but one thing I want to highlight here is that it is based on a falsehood: the oft-repeated claim that two Boston University researchers have proven in a study that “patent troll suits cost American technology companies over $29 billion in 2011 alone.” This is what Rep. DeFazio said when he introduced the SHIELD Act on March 1, and the claim was repeated yesterday by House Members during a hearing on “Abusive Patent Litigation.” The claim that patent licensing companies cost American tech companies $29 billion in a single year (2011) has become gospel since the study behind it, The Direct Costs from NPE Disputes, was released last summer on the Internet. (Another name for patent licensing companies is “Non-Practicing Entity,” or “NPE.”) A Google search of “patent troll 29 billion” produces 191,000 hits. A Google search of “NPE 29 billion” produces 605,000 hits. Such is the making of conventional wisdom.

The problem with conventional wisdom is that it is usually incorrect, and the study that produced the claim of “$29 billion imposed by patent trolls” is no different. The $29 billion cost study is deeply and fundamentally flawed, as explained by two noted professors, David Schwartz and Jay Kesan, who are also highly regarded for their empirical and economic work in patent law.  In their essay, Analyzing the Role of Non-Practicing Entities in the Patent System, also released late last summer, they detailed at great length serious methodological and substantive flaws in The Direct Costs from NPE Disputes. Unfortunately, the Schwartz and Kesan essay has gone virtually unnoticed in the patent policy debates, while the $29 billion cost claim has through repetition become truth.

In the hope that at least a few more people might discover the Schwartz and Kesan essay, I will briefly summarize some of their concerns about the study that produced the $29 billion cost figure. This is not merely an academic exercise. Rep. DeFazio explicitly relied on the $29 billion cost claim to justify the SHIELD Act, and he and others keep repeating it, so it matters whether it is true. If patent legislation is supposed to secure innovation, then it behooves us to know whether that legislation is based on actual facts. Yet, as Schwartz and Kesan explain in their essay, the $29 billion cost claim rests on a study that is fundamentally flawed in both substance and methodology.

In terms of its methodological flaws, the study supporting the $29 billion cost claim employs an incredibly broad definition of “patent troll,” one that covers almost every person, corporation, or university that sues someone for infringing a patent it is not, at that moment, using to manufacture a product. While the meaning of the “patent troll” epithet shifts depending on the commentator, reporter, blogger, or scholar using it, one would be extremely hard pressed to find anyone in patent scholarship or similar commentary today who embraces this expansive usage.

There are several reasons why the extremely broad definition of “NPE” or “patent troll” in the study is unusual even compared to uses of this term in other commentary or studies. First, and most absurdly, this definition by necessity includes every university in the world that sues someone for infringing one of its patents, as universities don’t manufacture goods. Second, it includes every individual inventor and start-up company that plans to manufacture a patented invention but is forced to sue an infringer-competitor whose infringing sales in the marketplace have thwarted those business plans. Third, it includes commercial firms throughout the wide-ranging innovation industries, from high tech to biotech to traditional manufacturing, that have at least one patent among a portfolio of thousands that is not being used at the moment to manufacture a product because it may be “well outside the area in which they make products,” and yet they sue infringers of this patent (the quoted language is from the study). So, according to this study, every manufacturer becomes an “NPE” or “patent troll” if it strays too far from what somebody subjectively defines as its rightful “area” of manufacturing. What company is not branded an “NPE” or “patent troll” under this definition, or will not become one in the future, given inevitable changes in business plans and commercial activities? This is particularly true for every person or company whose only current opportunity to reap the benefit of a patented invention is to license the technology or to litigate against infringers who refuse license offers.

So, when almost every possible patent-owning person, university, or corporation is defined as a “NPE” or “patent troll,” why are we surprised that a study that employs this virtually boundless definition concludes that they create $29 billion in litigation costs per year?  The only thing surprising is that the number isn’t even higher!

There are many other methodological flaws in the $29 billion cost study, such as its explicit assumption that patent litigation costs are “too high” without providing any comparative baseline for this conclusion.  What are the costs in other areas of litigation, such as standard commercial litigation, tort claims, or disputes over complex regulations?  We are not told.  What are the historical costs of patent litigation?  We are not told.  On what basis then can we conclude that $29 billion is “too high” or even “too low”?  We’re supposed to be impressed by a number that exists in a vacuum and that lacks any empirical context by which to evaluate it.

The $29 billion cost study also assumes that all litigation transaction costs are deadweight losses, which would mean that the entire U.S. court system is a deadweight loss according to the terms of this study.  Every lawsuit, whether a contract, tort, property, regulatory or constitutional dispute is, according to the assumption of the $29 billion cost study, a deadweight loss.  The entire U.S. court system is an inefficient cost imposed on everyone who uses it.  Really?  That’s an assumption that reduces itself to absurdity—it’s a self-imposed reductio ad absurdum!

In addition to the methodological problems, there are also serious concerns about the trustworthiness and quality of the actual data used to reach the $29 billion claim in the study.  All studies rely on data, and in this case, the $29 billion study used data from a secret survey done by RPX of its customers.  For those who don’t know, RPX’s business model is to defend companies against these so-called “patent trolls.”  So, a company whose business model is predicated on hyping the threat of “patent trolls” does a secret survey of its paying customers, and it is now known that RPX informed its customers in the survey that their answers would be used to lobby for changes in the patent laws.

As every reputable economist or statistician will tell you, such conditions encourage exaggeration and bias in a data sample by motivating participation among those who support changes to the patent law. This problem even has a formal name in empirical studies: self-selection bias. But one doesn’t need to be an economist or statistician to see the problems in relying on the RPX data to conclude that NPEs cost $29 billion per year. As the classic line from Hamlet goes, “Something is rotten in the state of Denmark.”
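
For the statistically minded, a toy simulation illustrates how motivated participation can inflate a survey estimate. Every number here is an illustrative assumption; none of it is RPX’s actual data:

```python
# Toy illustration of self-selection bias: if firms that want patent-law
# changes (modeled here as high-cost firms) are more likely to answer the
# survey, the survey average overstates the population average.

import random

random.seed(1)
population = [random.uniform(0, 10) for _ in range(100_000)]  # true litigation costs

def responds(cost: float) -> bool:
    # Assumed response behavior: high-cost firms, which benefit most from
    # reform, are far more likely to participate in the survey.
    return random.random() < (0.9 if cost > 7 else 0.1)

sample = [c for c in population if responds(c)]
print(f"population mean: {sum(population) / len(population):.2f}")  # about 5.0
print(f"survey mean:     {sum(sample) / len(sample):.2f}")          # about 7.5
```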

Even worse, as I noted above, the RPX survey was confidential.  RPX has continued to invoke “client confidences” in refusing to disclose its actual customer survey or the resulting data, which means that the data underlying the $29 billion claim is completely unknown and unverifiable for anyone who reads the study.  Don’t worry, the researchers have told us in a footnote in the study, they looked at the data and confirmed it is good.  Again, it doesn’t take economic or statistical training to know that something is not right here. Another classic cliché comes to mind at this point: “it’s not the crime, it’s the cover-up.”

In fact, keeping data secret in a published study violates well-established and longstanding norms in all scientific research that data should always be made available for testing and verification by third parties.  No peer-reviewed medical or scientific journal would publish a study based on a secret data set in which the researchers have told us that we should simply trust them that the data is accurate.  Its use of secret data probably explains why the $29 billion study has not yet appeared in a peer-reviewed journal, and, if economics has any claim to being an actual science, this study never will.  If a study does not meet basic scientific standards for verifying data, then why are Reps. DeFazio and Chaffetz relying on it to propose national legislation that directly impacts the patent system and future innovation?  If heads-in-the-clouds academics would know to reject such a study as based on unverifiable, likely biased claptrap, then why are our elected officials embracing it to create real-world legal rules?

And, to continue our running theme of classic clichés, there’s the rub. The more one looks at the actual legal requirements of the SHIELD Act, the more one is left, in the words of Professor Risch, “scratching one’s head” in bewilderment. The more one looks at the studies and arguments offered in its support, the more that head-scratching continues. And the more one thinks about the SHIELD Act, the more one realizes what it is: legislation crafted at the behest of the politically powerful (such as an Internet company that can get the President to do a special appearance on its own social media website) to have the government eliminate a smaller, publicly reviled, and less politically connected group.

In short, people may have legitimate complaints about how the U.S. court system generally operates. Commentators and Congresspersons could even consider revising the general legal rules governing patent litigation for all plaintiffs and defendants, to make the litigation system work better or more efficiently (by some established metric). Professor Risch has done exactly this in a recent Wired op-ed. But it’s time to call a spade a spade: the SHIELD Act is a classic example of rent-seeking, discriminatory legislation.