
[Cross-posted at the CPIP Blog.]

By Mark Schultz & Adam Mossoff

A handful of increasingly noisy critics of intellectual property (IP) have emerged within free market organizations. Both the emergence and vehemence of this group have surprised most observers, since free market advocates generally support property rights. It’s true that there has long been a strain of IP skepticism among some libertarian intellectuals. However, the surprised observer would be correct to think that the latest critique is something new. In our experience, most free market advocates see the benefit and importance of protecting the property rights of all who perform productive labor – whether the results are tangible or intangible.

How do the claims of this emerging critique stand up? We have had occasion to examine the arguments of free market IP skeptics before. (For example, see here, here, here.) So far, we have largely found their claims wanting.

We have yet another occasion to examine their arguments, and once again we are underwhelmed and disappointed. We recently posted an essay at AEI’s Tech Policy Daily prompted by an odd report released by the Mercatus Center, a free-market think tank. The Mercatus report attacks recent research that supposedly asserts, in its authors’ words, that “the existence of intellectual property in an industry creates the jobs in that industry.” They contend that this research “provide[s] no theoretical or empirical evidence to support” its claims of the importance of intellectual property to the U.S. economy.

Our AEI essay responds to these claims by explaining how these IP skeptics both mischaracterize the studies that they are attacking and fail to acknowledge the actual historical and economic evidence on the connections between IP, innovation, and economic prosperity. We recommend that anyone who may be confused by the assertions of any IP skeptics waving the banner of property rights and the free market read our essay at AEI, as well as our previous essays in which we have called out similarly odd statements from Mercatus about IP rights.

The Mercatus report, though, exemplifies many of the concerns we raise about these IP skeptics, and so it deserves to be considered at greater length.

For instance, as we touched on briefly in our AEI essay, the authors of the Mercatus report offer no empirical evidence of their own within their lengthy critique of several empirical studies; at best, they invoke thin theoretical support for their contentions.

This is odd if only because they are critiquing several empirical studies that develop careful, balanced and rigorous models for testing one of the biggest economic questions in innovation policy: What is the relationship between intellectual property and jobs and economic growth?

Apparently, the authors of the Mercatus report presume that the burden of proof is entirely on the proponents of IP, and that a bit of hand waving using abstract economic concepts and generalized theory is enough to defeat arguments supported by empirical data and plausible methodology.

This move raises a foundational question that frames all debates about IP rights today: On whom should the burden rest? On those who claim that IP has beneficial economic effects? Or on those who claim otherwise, such as the authors of the Mercatus report?

The burden of proof here is an important issue. Too often, recent debates about IP rights have started from an assumption that the entire burden of proof rests on those investigating or defending IP rights. Quite often, IP skeptics appear to believe that their criticism of IP rights needs little empirical or theoretical validation, beyond talismanic invocations of “monopoly” and anachronistic assertions that the Framers of the US Constitution were utilitarians.

As we detail in our AEI essay, though, the problem with arguments like those made in the Mercatus report is that they contradict history and empirics. For the evidence that supports this claim, including citations to the many studies that are ignored by the IP skeptics at Mercatus and elsewhere, check out the essay.

Despite these historical and economic facts, one may still believe that the US would enjoy even greater prosperity without IP. But IP skeptics who believe in this counterfactual world face a challenge. As a preliminary matter, they ought to acknowledge that they are the ones swimming against the tide of history and prevailing belief. More important, the burden of proof is on them – the IP skeptics – to explain why the U.S. has long prospered under an IP system they find so odious and destructive of property rights and economic progress, while countries that largely eschew IP have languished. This obligation is especially heavy for one who seeks to undermine empirical work such as the USPTO Report and other studies.

In sum, you can’t beat something with nothing. For IP skeptics to contest this evidence, they should offer more than polemical and theoretical broadsides. They ought to stop making faux originalist arguments that misstate basic legal facts about property and IP, and instead offer their own empirical evidence. The Mercatus report, however, is content to confine its empirics to critiques of others’ methodology – including critiques of claims its targets never made.

For example, in addition to the several strawman attacks identified in our AEI essay, the Mercatus report constructs another strawman in its discussion of studies of copyright piracy done by Stephen Siwek for the Institute for Policy Innovation (IPI). Mercatus inaccurately and unfairly implies that Siwek’s studies on the impact of piracy in film and music assumed that every copy pirated was a sale lost – this is known as “the substitution rate problem.” In fact, Siwek’s methodology tackled that exact problem.

IPI and Siwek never seem to get credit for this, but Siwek was careful to avoid the one-to-one substitution rate estimate that Mercatus and others foist on him and then critique as empirically unsound. If one actually reads his report, it is clear that Siwek assumes that bootleg physical copies resulted in a 65.7% substitution rate, while illegal downloads resulted in a 20% substitution rate. Siwek’s methodology anticipates and renders moot the critique that Mercatus makes anyway.
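
To make concrete what those substitution rates mean for the arithmetic, here is a minimal sketch. The two rates come from the discussion above; the copy counts and the function name are purely hypothetical illustrations of our own choosing, not figures from Siwek’s report.

```kotlin
// Minimal sketch of substitution-rate arithmetic. The two rates (65.7% for bootleg
// physical copies, 20% for illegal downloads) are the ones attributed to Siwek above;
// the copy counts below are purely hypothetical.
fun estimatedLostSales(
    physicalBootlegs: Double,
    illegalDownloads: Double,
    physicalRate: Double = 0.657,
    downloadRate: Double = 0.20
): Double = physicalBootlegs * physicalRate + illegalDownloads * downloadRate

fun main() {
    val lost = estimatedLostSales(physicalBootlegs = 1_000_000.0, illegalDownloads = 10_000_000.0)
    // A one-to-one assumption would treat all 11,000,000 pirated copies as lost sales;
    // the differentiated rates yield 2,657,000 instead.
    println(lost) // 2657000.0
}
```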

After mischaracterizing these studies and their claims, the Mercatus report goes further in attacking them as supporting advocacy on behalf of IP rights. Yes, the empirical results have been used by think tanks, trade associations and others to support advocacy on behalf of IP rights. But does that advocacy make the questions asked and resulting research invalid? IP skeptics would have trumpeted results showing that IP-intensive industries had a minimal economic impact, just as Mercatus policy analysts have done with alleged empirical claims about IP in other contexts. In fact, IP skeptics at free-market institutions repeatedly invoke studies in policy advocacy that allegedly show harm from patent litigation, despite these studies suffering from far worse problems than anything alleged in their critiques of the USPTO and other studies.

Finally, we noted in our AEI essay how odd it is to hear a well-known libertarian think tank like Mercatus advocate for more government-funded programs, such as direct grants or prizes, as viable alternatives to individual property rights secured to inventors and creators. Beyond the empirical studies we cited in our AEI essay, there is further economic work on the critical role that property rights in innovation play in a flourishing free market, as well as on the economic benefits of IP rights over alternative government programs like prizes.

Today, we are in the midst of a full-blown moral panic about the alleged evils of IP. It’s alarming that libertarians – the very people who should be defending all property rights – have jumped on this populist bandwagon. Imagine if free market advocates at the turn of the Twentieth Century had asserted that there was no evidence that property rights had contributed to the Industrial Revolution. Imagine them joining in common cause with the populist Progressives to suppress the enforcement of private rights and the enjoyment of economic liberty. It’s a bizarre image, but we are seeing its modern-day equivalent, as these libertarians join the chorus of voices arguing against property and private ordering in markets for innovation and creativity.

It’s also disconcerting that Mercatus appears to abandon its exceptionally high standards for scholarly work-product when it comes to IP rights. Its economic analyses and policy briefs on such subjects as telecommunications regulation, financial and healthcare markets, and the regulatory state have rightly made Mercatus a respected free-market institution. It’s unfortunate that it has lent this justly earned prestige and legitimacy to stale and derivative arguments against property and private ordering in the innovation and creative industries. It’s time to embrace the sound evidence and back off the rhetoric.

Microsoft wants you to believe that Google’s business practices stifle competition and harm consumers. Again.

The latest volley in its tiresome and ironic campaign to bludgeon Google with the same regulatory club once used against Microsoft itself is the company’s effort to foment an Android-related antitrust case in Europe.

In a recent polemic, Microsoft consultant (and business school professor) Ben Edelman denounces Google for requiring that, if device manufacturers want to pre-install key Google apps on Android devices, they “must install all the apps Google specifies, with the prominence Google requires, including setting these apps as defaults where Google instructs.” Edelman trots out gasp-worthy “secret” licensing agreements that he claims support his allegation (more on this later).

Similarly, a recent Wall Street Journal article, “Android’s ‘Open’ System Has Limits,” cites Edelman’s claim that limits on the licensing of Google’s proprietary apps mean that the Android operating system isn’t truly open source and comes with “strings attached.”

In fact, along with the Microsoft-funded trade organization FairSearch, Edelman has gone so far as to charge that this “tying” constitutes an antitrust violation. It is this claim that Microsoft and a network of proxies brought to the Commission when their efforts to manufacture a search-neutrality-based competition case against Google failed.

But before getting too caught up in the latest round of anti-Google hysteria, it’s worth noting that the Federal Trade Commission has already reviewed these claims. After a thorough, two-year inquiry, the FTC found the antitrust arguments against Google to be without merit. The South Korean Fair Trade Commission conducted its own two-year investigation into Google’s Android business practices and likewise dismissed the claims before it as meritless.

Taking on Edelman and FairSearch with an exhaustive scholarly analysis, German law professor Torsten Koerber recently assessed the nature of competition among mobile operating systems and concluded that:

(T)he (EU) Fairsearch complaint ultimately does not aim to protect competition or consumers, as it pretends to. It rather strives to shelter Microsoft from competition by abusing competition law to attack Google’s business model and subvert competition.

It’s time to take a step back and consider the real issues at play.

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.” He writes, “I see no way to reconcile the MADA restrictions with [Android openness].”

Conveniently, however, Edelman neglects to cite to Section 2.6 of the MADA:

The parties will create an open environment for the Devices by making all Android Products and Android Application Programming Interfaces available and open on the Devices and will take no action to limit or restrict the Android platform.

Professor Koerber’s analysis provides a straightforward explanation of the relationship between Android and its OEM licensees:

Google offers Android to OEMs on a royalty-free basis. The licensees are free to download, distribute and even modify the Android code as they like. OEMs can create mobile devices that run “pure” Android…or they can apply their own user interfaces (IO) and thereby hide most of the underlying Android system (e.g. Samsung’s “TouchWiz” or HTC’s “Sense”). OEMs make ample use of this option.

The truth is that the Android operating system remains, as ever, definitively open source — but Android’s openness isn’t really what the fuss is about. In this case, the confusion (or obfuscation) stems from the casual confounding of Google Apps with the Android Operating System. As we’ll see, they aren’t the same thing.

Consider Amazon, which pre-loads no Google applications at all on its Kindle Fire and Fire Phone. Amazon’s version of Android uses Microsoft’s Bing as the default search engine, Nokia provides mapping services, and the app store is Amazon’s own.

Still, Microsoft’s apologists continue to claim that Android licensees can’t choose to opt out of Google’s applications suite — even though, according to a new report from ABI Research, 20 percent of smartphones shipped between May and July 2014 were based on a “Google-less” version of the Android OS. And that number is consistently increasing: Analysts predict that by 2015, 30 percent of Android phones won’t access Google Services.

It’s true that equipment manufacturers who choose the Android operating system have the option to include the suite of integrated, proprietary Google apps and services licensed (royalty-free) under the name Google Mobile Services (GMS). GMS includes Google Search, Maps, Calendar, YouTube and other apps that together define the “Google Android experience” that users know and love.

But Google Android is far from the only Android experience.

Even if a manufacturer chooses to license Google’s apps suite, Google’s terms are not exclusive. Handset makers are free to install competing applications, including other search engines, map applications or app stores.

Although Google requires that Google Search be made easily accessible (hardly a bad thing for consumers, as it is Google Search that finances the development and maintenance of all of the other (free) apps from which Google otherwise earns little to no revenue), OEMs and users alike can (and do) easily install and access other search engines in numerous ways. As Professor Koerber notes:

The standard MADA does not entail any exclusivity for Google Search nor does it mandate a search default for the web browser.

Regardless, integrating key Google apps (like Google Search and YouTube) with other apps the company offers (like Gmail and Google+) is an antitrust problem only if it significantly forecloses competitors from these apps’ markets compared to a world without integrated Google apps, and without pro-competitive justification. Neither is true, despite the unsubstantiated claims to the contrary from Edelman, FairSearch and others.

Consumers and developers expect and demand consistency across devices so they know what they’re getting and don’t have to re-learn basic functions or program multiple versions of the same application. Indeed, Apple’s devices are popular in part because Apple’s closed iOS provides a predictable, seamless experience for users and developers.

But making Android competitive with its tightly controlled competitors requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

Unlike Android, Apple prohibits modifications of its operating system by downstream partners and users, and completely controls the pre-installation of apps on iOS devices. It deeply integrates applications into iOS, including Apple Maps, iTunes, Siri, Safari, its App Store and others. Microsoft has copied Apple’s model to a large degree, hard-coding its own applications (including Bing, Windows Store, Skype, Internet Explorer, Bing Maps and Office) into the Windows Phone operating system.

In the service of creating and maintaining a competitive platform, each of these closed OSes bakes into its operating system significant limitations on which third-party apps can be installed and what they can (and can’t) do. For example, neither platform permits installation of a third-party app store, and neither can be significantly customized. Apple’s iOS also prohibits users from changing default applications — although the soon-to-be-released iOS 8 appears to be somewhat more flexible than previous versions.

In addition to pre-installing a raft of their own apps and limiting installation of other apps, both Apple and Microsoft enable greater functionality for their own apps than they do for the third-party apps they allow.

For example, Apple doesn’t make available for other browsers (like Google’s Chrome) all the JavaScript functionality that it does for Safari, and it requires other browsers to use iOS WebKit instead of their own web engines. As a result, there are things that Chrome can’t do on iOS that Safari and only Safari can do, and Chrome itself is hamstrung in implementing its own software on iOS. This approach has led Mozilla to refuse to offer its popular Firefox browser for iOS devices (while it has no such reluctance about offering it on Android).

On Windows Phone, meanwhile, Bing is integrated into the OS and can’t be removed. Only in markets where Bing is not supported (and with Microsoft’s prior approval) can OEMs change the default search app from Bing. While it was once possible to change the default search engine that opens in Internet Explorer (although never from the hardware search button), the Windows 8.1 Hardware Development Notes, updated July 22, 2014, state:

By default, the only search provider included on the phone is Bing. The search provider used in the browser is always the same as the one launched by the hardware search button.

Both Apple iOS and Windows Phone tightly control the ability to use non-default apps to open intents sent from other apps, and, in Windows Phone especially, these linkages often can’t be changed.

As a result of these sorts of policies, maintaining the integrity — and thus the brand — of the platform is (relatively) easy for closed systems. While plenty of browsers are perfectly capable of answering an intent to open a web page, Windows Phone can better ensure a consistent and reliable experience by forcing Internet Explorer to handle the operation.

By comparison, Android, with or without Google Mobile Services, is dramatically more open, more flexible and customizable, and more amenable to third-party competition. Even the APIs that Google uses to integrate its apps are open to all developers, ensuring that there is nothing that Google apps are able to do that non-Google apps with the same functionality are prevented from doing.

In other words, not just Gmail, but any email app is permitted to handle requests from any other app to send emails; not just Google Calendar but any calendar app is permitted to handle requests from any other app to accept invitations.
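
For readers who want to see how that openness looks in practice, here is a minimal sketch of an Android implicit intent. It assumes the standard Android SDK; the helper name and example strings are our own illustration, not code from Google or Android documentation.

```kotlin
import android.content.Intent
import android.net.Uri

// Hypothetical helper illustrating an Android implicit intent: the caller describes
// the action ("compose an email to this recipient") without naming any particular
// app, and the system offers the request to every installed app registered for that
// action, so Gmail gets no privileged treatment over rival email clients.
fun composeEmailIntent(recipient: String, subject: String): Intent =
    Intent(Intent.ACTION_SENDTO).apply {
        data = Uri.parse("mailto:")                       // limits handlers to email apps
        putExtra(Intent.EXTRA_EMAIL, arrayOf(recipient))  // standard extras any client understands
        putExtra(Intent.EXTRA_SUBJECT, subject)
    }

// Inside an Activity, startActivity(composeEmailIntent("user@example.com", "Hello"))
// would let the user pick whichever installed email app they prefer.
```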

In no small part because of this openness and flexibility, current reports indicate that Android OS runs 85 percent of mobile devices worldwide. But it is OEM giant Samsung, not Google, that dominates the market, with a 65 percent share of all Android devices. Competition is rife, however, especially in emerging markets. In fact, according to one report, “Chinese and Indian vendors accounted for the majority of smartphone shipments for the first time with a 51% share” in 2Q 2014.

Unlike in the past, Edelman is at least nominally circumspect in his unsubstantiated legal conclusions about Android’s anticompetitive effect:

Applicable antitrust law can be complicated: Some ties yield useful efficiencies, and not all ties reduce welfare.

Given Edelman’s connections to Microsoft and the realities of the market he is discussing, it could hardly be otherwise. If every integration were an antitrust violation, every element of every operating system — including Apple’s iOS as well as every variant of Microsoft’s Windows — should arguably be the subject of a government investigation.

In truth, Google has done nothing more than ensure that its own suite of apps functions on top of Android to maintain what Google sees as seamless interconnectivity, a high-quality experience for users, and consistency for application developers — while still allowing handset manufacturers room to innovate in a way that is impossible on other platforms. This is the very definition of pro-competitive, and ultimately this is what allows the platform as a whole to compete against its far more vertically integrated alternatives.

Which brings us back to Microsoft. At the conclusion of the FTC investigation in January 2013, a GigaOm exposé on the case had this to say:

Critics who say Google is too powerful have nagged the government for years to regulate the company’s search listings. But today the critics came up dry….

The biggest loser is Microsoft, which funded a long-running cloak-and-dagger lobbying campaign to convince the public and government that its arch-enemy had to be regulated….

The FTC is also a loser because it ran a high profile two-year investigation but came up dry.

EU regulators, take note.

The Federal Trade Commission’s recent enforcement actions against Amazon and Apple raise important questions about the FTC’s consumer protection practices, especially its use of economics. How does the Commission weigh the costs and benefits of its enforcement decisions? How does the agency employ economic analysis in digital consumer protection cases generally?

Join the International Center for Law and Economics and TechFreedom on Thursday, July 31 at the Woolly Mammoth Theatre Company for a lunch and panel discussion on these important issues, featuring FTC Commissioner Joshua Wright, Director of the FTC’s Bureau of Economics Martin Gaynor, and several former FTC officials. RSVP here.

Commissioner Wright will present a keynote address discussing his dissent in Apple and his approach to applying economics in consumer protection cases generally.

Geoffrey Manne, Executive Director of ICLE, will briefly discuss his recent paper on the role of economics in the FTC’s consumer protection enforcement. Berin Szoka, TechFreedom President, will moderate a panel discussion featuring:

  • Martin Gaynor, Director, FTC Bureau of Economics
  • David Balto, Fmr. Deputy Assistant Director for Policy & Coordination, FTC Bureau of Competition
  • Howard Beales, Fmr. Director, FTC Bureau of Consumer Protection
  • James Cooper, Fmr. Acting Director & Fmr. Deputy Director, FTC Office of Policy Planning
  • Pauline Ippolito, Fmr. Acting Director & Fmr. Deputy Director, FTC Bureau of Economics

Background

The FTC recently issued a complaint and consent order against Apple, alleging its in-app purchasing design doesn’t meet the Commission’s standards of fairness. The action and resulting settlement drew a forceful dissent from Commissioner Wright, and sparked a discussion among the Commissioners about balancing economic harms and benefits in Section 5 unfairness jurisprudence. More recently, the FTC brought a similar action against Amazon, which is now pending in federal district court because Amazon refused to settle.

Event Info

The “FTC: Technology and Reform” project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology. The Project’s initial report, released in December 2013, identified critical questions facing the agency, Congress, and the courts about the FTC’s future, and proposed a framework for addressing them.

The event will be live streamed here beginning at 12:15pm. Join the conversation on Twitter with the #FTCReform hashtag.

When:

Thursday, July 31
11:45 am – 12:15 pm — Lunch and registration
12:15 pm – 2:00 pm — Keynote address, paper presentation & panel discussion

Where:

Woolly Mammoth Theatre Company – Rehearsal Hall
641 D St NW
Washington, DC 20004

Questions? – Email mail@techfreedom.org. RSVP here.

See ICLE’s and TechFreedom’s other work on FTC reform, including:

  • Geoffrey Manne’s Congressional testimony on the FTC@100
  • Op-ed by Berin Szoka and Geoffrey Manne, “The Second Century of the Federal Trade Commission”
  • Two posts by Geoffrey Manne on the FTC’s Amazon Complaint, here and here.

About The International Center for Law and Economics:

The International Center for Law and Economics is a non-profit, non-partisan research center aimed at fostering rigorous policy analysis and evidence-based regulation.

About TechFreedom:

TechFreedom is a non-profit, non-partisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.

The Federal Trade Commission’s (FTC) June 23 Workshop on Conditional Pricing Practices featured a broad airing of views on loyalty discounts and bundled pricing, popular vertical business practices that recently have caused much ink to be spilled by the antitrust commentariat.  In addition to predictable academic analyses featuring alternative theoretical anticompetitive effects stories, the Workshop commendably included presentations by Benjamin Klein that featured procompetitive efficiency explanations for loyalty programs and by Daniel Crane that stressed the importance of (1) treating discounts hospitably and (2) requiring proof of harmful foreclosure.  On balance, however, the Workshop provided additional fuel for enforcers who are enthused about applying new anticompetitive effects models to bring “problematic” discounting and bundling to heel.

Before U.S. antitrust enforcement agencies launch a new crusade against novel vertical discounting and bundling contracts, however, they may wish to ponder a few salient factors not emphasized in the Workshop.

First, the United States has the most efficient marketing and distribution system in the world, and it has been growing more efficient in recent decades (this is the one part of the American economy that has been a bright spot).  Consumers have benefited from more shopping convenience and higher quality/lower priced offerings due to the advent of  “big box” superstores, Internet sales engines (and e-commerce in general), and other improvements in both on-line and “bricks and mortar” sales methods.

Second, and relatedly, the Supreme Court’s recognition of vertical contractual efficiencies in GTE-Sylvania (1977) ushered in a period of greatly reduced potential liability for vertical restraints, undoubtedly encouraging economically beneficial marketing improvements.  A new government emphasis on investigating and litigating the merits of novel vertical practices (particularly practices that emphasize discounting, which presumptively benefits consumers) could inject costly new uncertainty into the marketing side of business planning, spawn risk aversion, and deter marketing innovations that reduce costs, thereby harming welfare.  These harms would mushroom to the extent courts mistakenly “bought into” new theories and incorrectly struck down efficient practices.

Third, in applying new theories of competitive harm, the antitrust enforcers should be mindful of Ronald Coase’s admonition that “if an economist finds something—a business practice of one sort or other—that he does not understand, he looks for a monopoly explanation.  And as in this field we are very ignorant, the number of ununderstandable practices tends to be rather large, and the reliance on a monopoly explanation, frequent.”  Competition is a discovery procedure.  Entrepreneurial businesses constantly seek improvements not just in productive efficiency, but in distribution and marketing efficiencies, in order to eclipse their rivals.  As such, entrepreneurs may experiment with new contractual forms (such as bundling and loyalty discounts) in an effort to expand their market shares and grow their firms.  Business persons may not know ex ante which particular forms will work.  They may try out alternatives, sticking with those that succeed and discarding those that fail, without necessarily being able to articulate precisely the reasons for success or failure.  Real results in the market, rather than arcane economic theorems, may be expected to drive their decision-making.   Distribution and marketing methods that are successful will be emulated by others and spread.  Seen in this light (and relatedly, in light of transaction cost economics explanations for “non-standard” contracts), widespread adoption of new vertical contractual devices most likely indicates that they are efficient (they improve distribution, and imitation is the sincerest form of flattery), not that they represent some new competitive threat.  Since an economic model almost always can be ginned up to explain why some new practice may reduce consumer welfare in theory, enforcers should instead focus on hard empirical evidence that output and quality have been reduced due to a restraint before acting.  Unfortunately, the mere threat of costly misbegotten investigations may chill businesses’ interest in experimenting with new and potentially beneficial vertical contractual arrangements, reducing innovation and slowing welfare enhancement (consistent with point two, above).

Fourth, decision theoretic considerations should make enforcers particularly wary of pursuing conditional pricing contracts cases.  Consistent with decision theory, optimal antitrust enforcement should adopt an error cost framework that seeks to minimize the sum of the costs attributable to false positives, false negatives, antitrust administrative costs, and disincentive costs imposed on third parties (the latter may also be viewed as a subset of false positives).  Given the significant potential efficiencies flowing from vertical restraints, and the lack of empirical showing that they are harmful, antitrust enforcers should exercise extreme caution in entertaining proposals to challenge new vertical arrangements, such as conditional pricing mechanisms.  In particular, they should carefully assess the cumulative weight of the high risk of false positives in this area, the significant administrative costs that attend investigations and prosecutions, and the disincentives toward efficient business arrangements (see points two and three above).  Taken together, these factors strongly suggest that the aggressive pursuit of conditional pricing practice investigations would flunk a reasonable cost-benefit calculus.
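
Stated a bit more formally (a stylized rendering of the error cost framework just described, with notation of our own choosing rather than anything presented at the Workshop), the enforcer’s problem is to choose the rule that minimizes expected total error and enforcement costs:

```latex
\min_{\text{enforcement rule}} \;
  \Pr(\text{false positive})\, C_{\mathrm{FP}}
  + \Pr(\text{false negative})\, C_{\mathrm{FN}}
  + C_{\mathrm{admin}}
  + C_{\mathrm{disincentive}}
```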

Fifth, a new U.S. antitrust enforcement crusade against conditional pricing could be used by foreign competition agencies to justify further attacks on efficient vertical practices.  This could add to the harm suffered by companies (including, of course, U.S.-based multinationals) which would be deterred from maintaining and creating new welfare-beneficial distribution methods.  Foreign consumers, of course, would suffer as well.

My caveats should not be read to suggest that the FTC should refrain from pursuing new economic learning on loyalty discounting and bundled pricing, or on other novel business practices.  Nor should it necessarily eschew all enforcement in the vertical restraints area – although that might not be such a bad idea, given error cost and resource constraint issues.  (Vertical restraints that are part of a cartel enforcement scheme should be treated as cartel conduct, and, as such, should be fair game, of course.)  In order optimally to allocate scarce resources, however, the FTC might benefit by devoting relatively greater attention to the most welfare-inimical competitive abuses – namely, anticompetitive arrangements instigated, shielded, or maintained by government authority.  (Hard core private cartel activity is best left to the Justice Department, which can deploy powerful criminal law tools against such schemes.)

U.S. antitrust law focuses primarily on private anticompetitive restraints, leaving the most serious impediments to a vibrant competitive process – government-initiated restraints – relatively free to flourish.  Thus the Federal Trade Commission (FTC) should be commended for its July 16 congressional testimony that spotlights a fast-growing and particularly pernicious species of (largely state) government restriction on competition – occupational licensing requirements.  Today cat groomers, flower arrangers, music therapists, tree trimmers, frozen dessert retailers, eyebrow threaders, massage therapists (human and equine), and “shampoo specialists” (to name just a few), in addition to the traditional categories of doctors, lawyers, and accountants, are subject to professional licensure.  Indeed, since the 1950s, the coverage of such rules has risen dramatically, as the percentage of Americans requiring government authorization to do their jobs has risen from less than five percent to roughly 30 percent.

Even though some degree of licensing responds to legitimate health and safety concerns (i.e., no fly-by-night heart surgeons), much occupational regulation creates unnecessary barriers to entry into a host of jobs.  Excessive licensing confers unwarranted benefits on fortunate incumbents, while effectively barring large numbers of capable individuals from the workforce.  (For example, many individuals skilled in natural hair braiding simply cannot afford the 2,100 hours required to obtain a license in Iowa, Nebraska, and South Dakota.)  It also imposes additional economic harms, as the FTC’s testimony explains:  “[Occupational licensure] regulations may lead to higher prices, lower quality services and products, and less convenience for consumers.  In the long term, they can cause lasting damage to competition and the competitive process by rendering markets less responsive to consumer demand and by dampening incentives for innovation in products, services, and business models.”  Licensing requirements are often enacted in tandem with other occupational regulations that unjustifiably limit the scope of beneficial services particular professionals can supply – for instance, a ban on tooth cleaning by dental hygienists not acting under a dentist’s supervision that boosts dentists’ income but denies treatment to poor children who have no access to dentists.

What legal and policy tools are available to chip away at these pernicious and costly laws and regulations, which largely are the fruit of successful special interest lobbying?  The FTC’s competition advocacy program, which responds to requests from legislators and regulators to assess the economic merits of proposed laws and regulations, has focused on unwarranted regulatory restrictions in such licensed professions as real estate brokers, electricians, accountants, lawyers, dentists, dental hygienists, nurses, eye doctors, opticians, and veterinarians.  Retrospective reviews of FTC advocacy efforts suggest it may have helped achieve some notable reforms (for example, 74% of requestors, regulators, and bill sponsors surveyed responded that FTC advocacy initiatives influenced outcomes).  Nevertheless, advocacy’s reach and effectiveness inherently are limited by FTC resource constraints, by the need to obtain “invitations” to submit comments, and by the incentive and ability of licensing scheme beneficiaries to oppose regulatory and legislative reforms.

Former FTC Chairman Kovacic and James Cooper (currently at George Mason University’s Law and Economics Center) have suggested that federal and state antitrust experts could be authorized to have ex ante input into regulatory policy making.  As the authors recognize, however, several factors sharply limit the effectiveness of such an initiative.  In particular, “the political feasibility of this approach at the legislative level is slight”, federal mandates requiring ex ante reviews would raise serious federalism concerns, and resource constraints would loom large.

Antitrust law challenges to anticompetitive licensing schemes likewise offer little solace.  They are limited by the antitrust “state action” doctrine, which shields conduct undertaken pursuant to “clearly articulated” state legislative language that displaces competition – a category that generally will cover anticompetitive licensing requirements.  Even a Supreme Court decision next term (in North Carolina Dental v. FTC) that state regulatory boards dominated by self-interested market participants must be actively supervised to enjoy state action immunity would have relatively little bite.  It would not limit states from issuing simple statutory commands that create unwarranted occupational barriers, nor would it prevent states from implementing “adequate” supervisory schemes that are designed to approve anticompetitive state board rules.

What then is to be done?

Constitutional challenges to unjustifiable licensing strictures may offer the best long-term solution to curbing this regulatory epidemic.  As Clark Neily points out in Terms of Engagement, there is a venerable constitutional tradition of protecting the liberty interest to earn a living, reflected in well-reasoned late 19th and early 20th century “Lochner-era” Supreme Court opinions.  Even if Lochner is not rehabilitated, however, there are a few recent jurisprudential “straws in the wind” that support efforts to rein in “irrational” occupational licensure barriers.  Perhaps acting under divine inspiration, the Fifth Circuit in St. Joseph Abbey (2013) ruled that Louisiana statutes that required all casket manufacturers to be licensed funeral directors – laws that prevented monks from earning a living by making simple wooden caskets – served no other purpose than to protect the funeral industry, and, as such, violated the 14th Amendment’s Equal Protection and Due Process Clauses.  In particular, the Fifth Circuit held that protectionism, standing alone, is not a legitimate state interest sufficient to establish a “rational basis” for a state statute, and that absent other legitimate state interests, the law must fall.  Since the Sixth and Ninth Circuits also have held that intrastate protectionism standing alone is not a legitimate purpose for rational basis review, but the Tenth Circuit has held to the contrary, the time may soon be ripe for the Supreme Court to review this issue and, hopefully, delegitimize pure economic protectionism.  Such a development would place added pressure on defenders of protectionist occupational licensing schemes.  Other possible avenues for constitutional challenges to protectionist licensing regimes (perhaps, for example, under the Dormant Commerce Clause) also merit being explored, of course.  The Institute for Justice already is performing yeoman’s work in litigating numerous cases involving unjustified licensing and other encroachments on economic liberty; perhaps their example can prove an inspiration for pro bono efforts by others.

Eliminating anticompetitive occupational licensing rules – and, more generally, vindicating economic liberties that too long have been neglected – is obviously a long-term project, and far-reaching reform will not happen in the near term.  Nevertheless, while we the currently living may in the long run be dead (pace Keynes), our posterity will be alive, and we owe it to them to pursue the vindication of economic liberties under the Constitution.

With Berin Szoka.

TechFreedom and the International Center for Law & Economics will shortly file two joint comments with the FCC, explaining why the FCC has no sound legal basis for micromanaging the Internet—now called “net neutrality regulation”—and why such regulation would be counter-productive as a policy matter. The following summarizes some of the key points from both sets of comments.

No one’s against an open Internet. The notion that anyone can put up a virtual shingle—and that the good ideas will rise to the top—is a bedrock principle with broad support; it has made the Internet essential to modern life. Key to Internet openness is the freedom to innovate. An open Internet and the idea that companies can make special deals for faster access are not mutually exclusive. If the Internet really is “open,” shouldn’t all companies be free to experiment with new technologies, business models and partnerships? Shouldn’t the FCC allow companies to experiment in building the unknown—and unknowable—Internet of the future?

The best approach would be to maintain the “Hands off the Net” approach that has otherwise prevailed for 20 years. That means a general presumption that innovative business models and other forms of “prioritization” are legal. Innovation could thrive, and regulators could still keep a watchful eye, intervening only where there is clear evidence of actual harm, not just abstract fears. And they should start with existing legal tools—like antitrust and consumer protection laws—before imposing prior restraints on innovation.

But net neutrality regulation hurts more than it helps. Counterintuitively, a blanket rule that ISPs treat data equally could actually harm consumers. Consider the innovative business models ISPs are introducing. T-Mobile’s unRadio lets users listen to all the on-demand music and radio they want without taking a hit against their monthly data plan. Yet so-called consumer advocates insist that’s a bad thing because it favors some content providers over others. In fact, “prioritizing” one service when there is congestion frees up data for subscribers to consume even more content—from whatever source. You know regulation may be out of control when a company is demonized for offering its users a freebie.

Treating each bit of data neutrally ignores the reality of how the Internet is designed, and how consumers use it.  Net neutrality proponents insist that all Internet content must be available to consumers neutrally, whether those consumers (or content providers) want it or not. They also argue against usage-based pricing. Together, these restrictions force all users to bear the costs of access for other users’ requests, regardless of who actually consumes the content, as the FCC itself has recognized:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks.
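
A toy numerical sketch of the cross-subsidy the FCC describes (all numbers below are hypothetical, chosen only to make the mechanism visible):

```kotlin
// Toy illustration of the cross-subsidy described in the FCC passage above: under a
// mandatory flat rate, light users cover part of the cost heavy users generate.
fun main() {
    val usageGb = listOf(5.0, 5.0, 5.0, 85.0)       // three light users, one heavy user
    val costPerGb = 1.0                              // assumed network cost per GB
    val totalCost = usageGb.sum() * costPerGb        // 100.0

    val flatBill = totalCost / usageGb.size          // everyone pays 25.0 regardless of usage
    val usageBills = usageGb.map { it * costPerGb }  // [5.0, 5.0, 5.0, 85.0]

    println("Flat rate: every subscriber pays $flatBill")
    println("Usage-based: subscribers pay $usageBills")
}
```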

The rules that net neutrality advocates want would hurt startups as well as consumers. Imagine a new entrant, clamoring for market share. Without the budget for a major advertising blitz, the archetypical “next Netflix” might never get the exposure it needs to thrive. But for a relatively small fee, the startup could sign up to participate in a sponsored data program, with its content featured and its customers’ data usage exempted from their data plans. This common business strategy could mean the difference between success and failure for a startup. Yet it would be prohibited by net neutrality rules banning paid prioritization.

The FCC lacks sound legal authority. The FCC is essentially proposing to do what can only properly be done by Congress: invent a new legal regime for broadband. Each of the options the FCC proposes to justify this—Section 706 of the Telecommunications Act and common carrier classification—is deeply problematic.

First, Section 706 isn’t sustainable. Until 2010, the FCC understood Section 706 as a directive to use its other grants of authority to promote broadband deployment. But in its zeal to regulate net neutrality, the FCC reversed itself in 2010, claiming Section 706 as an independent grant of authority. This would allow the FCC to regulate any form of “communications” in any way not directly barred by the Act — not just broadband but “edge” companies like Google and Facebook. This might mean going beyond neutrality to regulate copyright, cybersecurity and more. The FCC need only assert that regulation would somehow promote broadband.

If Section 706 is a grant of authority, it’s almost certainly a power to deregulate. But even if its power is as broad as the FCC claims, the FCC still hasn’t made the case that, on balance, its proposed regulations would actually do what it asserts: promote broadband. The FCC has stubbornly refused to conduct serious economic analysis on the net effects of its neutrality rules.

And Title II would be a disaster. The FCC has asked whether Title II of the Act, which governs “common carriers” like the old monopoly telephone system, is a workable option. It isn’t.

In the first place, regulations that impose design limitations meant for single-function networks simply aren’t appropriate for the constantly evolving Internet. Moreover, if the FCC re-interprets the Communications Act to classify broadband ISPs as common carriers, it risks catching other Internet services in the cross-fire, inadvertently making them common carriers, too. Surely net neutrality proponents can appreciate the harmful effects of treating Skype as a common carrier.

Forbearance can’t clean up the Title II mess. In theory the FCC could “forbear” from Title II’s most onerous rules, promising not to apply them when it determines there’s enough competition in a market to make the rules unnecessary. But the agency has set a high bar for justifying forbearance.

Most recently, in 2012, the Commission refused to grant Qwest forbearance even in the highly competitive telephony market, disregarding competition from wireless providers, and concluding that a cable-telco “duopoly” is inadequate to protect consumers. It’s unclear how the FCC could justify reaching the opposite conclusion about the broadband market—simultaneously finding it competitive enough to forbear, yet fragile enough to require net neutrality rules. Such contradictions would be difficult to explain, even if the FCC generally gets discretion on changing its approach.

But there is another path forward. If the FCC can really make the case for regulation, it should go to Congress, armed with the kind of independent economic and technical expert studies Commissioner Pai has urged, and ask for new authority. A new Communications Act is long overdue anyway. In the meantime, the FCC could convene the kind of multistakeholder process generally endorsed by the White House to produce a code enforceable by the Federal Trade Commission. A consensus is possible — just not inside the FCC, where the policy questions can’t be separated from the intractable legal questions.

Meanwhile, the FCC should focus on doing what Section 706 actually demands: clearing barriers to broadband deployment and competition. The 2010 National Broadband Plan laid out an ambitious pro-deployment agenda. It’s just too bad the FCC was so obsessed with net neutrality that it didn’t focus on the plan. Unleashing more investment and competition, not writing more regulation, is the best way to keep the Internet open, innovative and free.

[Cross-posted at TechFreedom.]

Last Monday, a group of nineteen scholars of antitrust law and economics, including yours truly, urged the U.S. Court of Appeals for the Eleventh Circuit to reverse the Federal Trade Commission’s recent McWane ruling.

McWane, the largest seller of domestically produced iron pipe fittings (DIPF), would sell its products only to distributors that “fully supported” its fittings by carrying them exclusively.  There were two exceptions: where McWane products were not readily available, and where the distributor purchased a McWane rival’s pipe along with its fittings.  A majority of the FTC ruled that McWane’s policy constituted illegal exclusive dealing.

Commissioner Josh Wright agreed that the policy amounted to exclusive dealing, but he concluded that complaint counsel had failed to prove that the exclusive dealing constituted unreasonably exclusionary conduct in violation of Sherman Act Section 2.  Commissioner Wright emphasized that complaint counsel had produced no direct evidence of anticompetitive harm (i.e., an actual increase in prices or decrease in output), even though McWane’s conduct had already run its course.  Indeed, the direct evidence suggested an absence of anticompetitive effect, as McWane’s chief rival, Star, grew in market share at exactly the same rate during and after the time of McWane’s exclusive dealing.

Instead of focusing on direct evidence of competitive effect, complaint counsel pointed to a theoretical anticompetitive harm: that McWane’s exclusive dealing may have usurped so many sales from Star that Star could not achieve minimum efficient scale.  The only evidence as to what constitutes minimum efficient scale in the industry, though, was Star’s self-serving statement that it would have had lower average costs had it operated at a scale sufficient to warrant ownership of its own foundry.  As Commissioner Wright observed, evidence in the record showed that other pipe fitting producers had successfully entered the market and grown market share substantially without owning their own foundry.  Thus, actual market experience seemed to undermine Star’s self-serving testimony.

Commissioner Wright also observed that complaint counsel produced no evidence showing what percentage of McWane’s sales of DIPF might have gone to other sellers absent McWane’s exclusive dealing policy.  Only those “contestable” sales – not all of McWane’s sales to distributors subject to the full support policy – should be deemed foreclosed by McWane’s exclusive dealing.  Complaint counsel also failed to quantify sales made to McWane’s rivals under the generous exceptions to its policy.  These deficiencies prevented complaint counsel from adequately establishing the degree of market foreclosure caused by McWane’s policy – the first (but not last!) step in establishing the alleged anticompetitive harm.

In our amicus brief, we antitrust scholars take Commissioner Wright’s side on these matters.  We also observe that the Commission failed to account for an important procompetitive benefit of McWane’s policy:  it prevented rival DIPF sellers from “cherry-picking” the most popular, highest margin fittings and selling only those at prices that could be lower than McWane’s because the cherry-pickers didn’t bear the costs of producing the full line of fittings.  Such cherry-picking is a form of free-riding because every producer’s fittings are more highly valued if a full line is available.  McWane’s policy prevented the sort of free-riding that would have made its production of a full line uneconomical.

In short, the FTC’s decision made it far too easy to successfully challenge exclusive dealing arrangements, which are usually procompetitive, and calls into question all sorts of procompetitive full-line forcing arrangements.  Hopefully, the Eleventh Circuit will correct the Commission’s mistake.

Other professors signing the brief include:

  • Tom Arthur, Emory Law
  • Roger Blair, Florida Business
  • Don Boudreaux, George Mason Economics (and Café Hayek)
  • Henry Butler, George Mason Law
  • Dan Crane, Michigan Law (and occasional TOTM contributor)
  • Richard Epstein, NYU and Chicago Law
  • Ken Elzinga, Virginia Economics
  • Damien Geradin, George Mason Law
  • Gus Hurwitz, Nebraska Law (and TOTM)
  • Keith Hylton, Boston University Law
  • Geoff Manne, International Center for Law and Economics (and TOTM)
  • Fred McChesney, Miami Law
  • Tom Morgan, George Washington Law
  • Barak Orbach, Arizona Law
  • Bill Page, Florida Law
  • Paul Rubin, Emory Economics (and TOTM)
  • Mike Sykuta, Missouri Economics (and TOTM)
  • Todd Zywicki, George Mason Law (and Volokh Conspiracy)

The brief’s “Summary of Argument” follows the jump.

Whereas the antitrust rules on a number of once-condemned business practices (e.g., vertical non-price restraints, resale price maintenance, price squeezes) have become more economically sensible in the last few decades, the law on tying remains an embarrassment.  The sad state of the doctrine is evident in a federal district court’s recent denial of Viacom’s motion to dismiss a tying action by Cablevision.

According to Cablevision’s complaint, Viacom threatened to impose a substantial financial “penalty” (probably by denying a discount) unless Cablevision licensed Viacom’s less popular television programming (the “Suite Networks”) along with its popular “Core Networks” of Nickelodeon, Comedy Central, BET, and MTV.  This arrangement, Cablevision insisted, amounted to a per se illegal tie-in of the Suite Networks to the Core Networks.

Similar tying actions based on cable bundling have failed, and I have previously explained why cable bundling like this is, in fact, efficient.  But putting aside whether  the tie-in at issue here was efficient, the district court’s order is troubling because it illustrates how very unconcerned with efficiency tying doctrine is.

First, the district court rejected–correctly, under ill-founded precedents–Viacom’s argument that Cablevision was required to plead an anticompetitive effect.  It concluded that Cablevision had to allege only four elements: separate tying and tied products, coercion by the seller to force purchase of the tied product along with the tying product, the seller’s possession of market power in the tying product market, and the involvement of a “not insubstantial” dollar volume of commerce in the tied product market.  Once these elements are alleged, the court said,

plaintiffs need not allege, let alone prove, facts addressed to the anticompetitive effects element.  If a plaintiff succeeds in establishing the existence of sufficient market power to create a per se violation, the plaintiff is also relieved of the burden of rebutting any justification the defendant may offer for the tie.

In other words, if a tying plaintiff establishes the four elements listed above, the efficiency of the challenged tie-in is completely irrelevant.  And if a plaintiff merely pleads those four elements, it is entitled to proceed to discovery, which can be crippling for antitrust defendants and often causes them to settle even non-meritorious cases. Given that a great many tie-ins involving the four elements listed above are, in fact, efficient, this is a terrible rule.  It is, however, the law as established in the Supreme Court’s Jefferson Parish decision.  The blame for this silliness therefore rests on that Court, not the district court here.

But the Cablevision order includes a second unfortunate feature for which the district court and the Supreme Court share responsibility.  Having concluded that Cablevision was not required to plead anticompetitive effect, the court went on to say that Cablevision “ha[d], in any event, pleaded facts sufficient to support plausibly an inference of anticompetitive effect.”  Those alleged facts were that Cablevision would have bought content from another seller but for the tie-in:

Cablevision alleges that if it were not forced to carry the Suite Networks, it “would carry other networks on the numerous channel slots that Viacom’s Suite Networks currently occupy.”  (Compl. par. 10.)  Cablevision also alleges that Cablevision would buy other “general programming networks” from Viacom’s competitors absent the tying arrangement.  (Id.)

In other words, the district court reasoned, Cablevision alleged anticompetitive harm merely by pleading that Viacom’s conduct reduced some sales opportunities for its rivals.

But harm to a competitor, standing alone, is not harm to competition.  To establish true anticompetitive harm, Cablevision would have to show that Viacom’s tie-in reduced its rivals’ sales so substantially that they lost scale efficiencies and their average per-unit costs rose.  To make that showing, Cablevision would have to show (or allege, at the motion to dismiss stage) that Viacom’s tying occasioned substantial foreclosure of sales opportunities in the tied product market.  “Some” reduction in sales to rivals–while perhaps anti-competitor–is simply not sufficient to show anticompetitive harm.
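
One stylized way to see the scale point (an illustration of the underlying economics, not anything drawn from the opinion): if a rival bears fixed cost F and constant marginal cost c, its average per-unit cost at output q is

```latex
AC(q) = \frac{F}{q} + c
```

so only foreclosure large enough to shrink q appreciably pushes the rival up this curve and raises its costs; a marginal loss of sales does not.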

Because the Supreme Court has emphasized time and again that mere harm to a competitor is not harm to competition, the gaffe here is primarily the district court’s fault.  But at least a little blame should fall on the Supreme Court.  That Court has never precisely specified the potential anticompetitive harm from tying: that a tie-in may enhance market power in the tied or tying product markets if, but only if, it results in substantial foreclosure of sales opportunities in the tied product market.

If the Court were to do so, and were to jettison the silly quasi-per se rule of Jefferson Parish, tying doctrine would be far more defensible.

[NOTE: For a more detailed explanation of why substantial tied market foreclosure is a prerequisite to anticompetitive harm from tie-ins, see my article, Appropriate Liability Rules for Tying and Bundled Discounting, 72 Ohio St. L. J. 909 (2011).]

In recent years, antitrust enforcers in Europe and the United States have made public pronouncements and pursued enforcement initiatives that undermine the ability of patentees to earn maximum profits through the unilateral exercise of rights within the scope of their patents, as discussed in separate recent articles by me and by Professor Nicolas Petit of the University of Liege. (Similar sorts of concerns have been raised by Federal Trade Commissioner Joshua Wright.) This represents a change in emphasis away from restraints on competition among purveyors of rival patented technologies and toward the alleged “exploitation” of a patentee’s particular patented technology. It is manifested, for example, in enforcers’ rising enthusiasm for limiting patent royalties (based on hypothetical ex ante comparisons to “next best” technologies, or the existence of standards on which patents “read”), for imposing compulsory licensing remedies, and for constraining the terms of private patent litigation settlements involving a single patented technology. (Not surprisingly, given its broader legal mandate to attack abuses of dominant positions, the European Commission has been more aggressive than United States antitrust agencies.) This development has troubling implications for long-term economic welfare and innovation, and merits far greater attention than it has received thus far.

What explains this phenomenon? Public enforcers are motivated by research that purports to demonstrate fundamental flaws in the workings of the patent system (including patent litigation) and the poor quality of many patents, as described, for example, in 2003 and 2011 U.S. Federal Trade Commission (FTC) Reports. Central to this scholarship is the notion that patents are “highly uncertain” and merely “probabilistic” (read “second class”) property rights that should be deemed to convey only a right to try to exclude. This type of thinking justifies a greater role for prosecutors to “look inside” the patent “black box” and use antitrust to “correct” perceived patent “abuses,” including supposed litigation excesses.

This perspective is problematic, to say the least. Government patent agencies, not antitrust enforcers, are best positioned to (and have taken steps to) rein in litigation excesses and improve patent quality, and the Supreme Court continues to issue rulings clarifying patent coverage. More fundamentally, as Professor Petit and I explain, this new patent-specific interventionist trend ignores a robust and growing law and economics literature that highlights the benefits of the patent system in enabling technology commercialization, signaling value to capital markets and innovators, and reducing information and transaction costs. It also fails to confront empirical studies that, by and large, suggest that stronger patent regimes are associated with faster economic growth and innovation. Furthermore, decision theory and error-cost considerations indicate that antitrust agencies are ill-equipped to second-guess unilateral exercises of property rights that fall within the scope of a patent. Finally, other antitrust jurisdictions, such as China, are all too likely to cite new United States and European constraints on unilateral patent right assertions as justifications for even more intrusive limitations on patent rights.

What, then, should the U.S. antitrust enforcement agencies do? Ideally, they should announce that they are redirecting their emphasis to prosecuting inefficient competitive restraints involving rival patented technologies, the central thrust of the 1995 FTC-U.S. Justice Department Patent-Antitrust Licensing Guidelines. In so doing, they should state publicly that an individual patentee should be entitled to the full legitimate returns flowing from the legal scope of its patent, free from antitrust threat. (The creation of patent-specific market power through deception or fraud is not a legitimate return on patent rights, of course, and should be subject to antitrust prosecution when found.) One would hope that eventually the European Commission (and, dare we suggest, other antitrust authorities as well) would be inspired to adopt a similar program. Additional empirical research documenting the economy-wide benefits of encouraging robust unilateral patent assertions could prove helpful in this regard.

I share Alden’s disappointment that the Supreme Court did not overrule Basic v. Levinson in Monday’s Halliburton decision.  I’m also surprised by the Court’s ruling.  As I explained in this lengthy post, I expected the Court to alter Basic to require Rule 10b-5 plaintiffs to prove that the complained-of misrepresentation occasioned a price effect.  Instead, the Court maintained Basic’s rule that price impact is presumed if the plaintiff proves that the misinformation was public and material and that “the stock traded in an efficient market.”

An upshot of Monday’s decision is that courts adjudicating Rule 10b-5 class actions will continue to face at the outset not the fairly simple question of whether the misstatement at issue moved the relevant stock’s price but instead the murkier question of whether that stock traded in an “efficient market.”  Focusing on market efficiency—rather than on price impact, ultimately the key question—raises practical difficulties and creates a bit of a paradox.

First, the practical difficulties.  How is a court to know whether the market in which a security is traded is “efficient” (or, given that market efficiency is not a binary matter, “efficient enough”)?  Chief Justice Roberts’ majority opinion suggested this is a simple inquiry, but it’s not.  Courts typically consider a number of factors to assess market efficiency.  According to one famous district court decision (Cammer), the relevant factors are: “(1) the stock’s average weekly trading volume; (2) the number of securities analysts that followed and reported on the stock; (3) the presence of market makers and arbitrageurs; (4) the company’s eligibility to file a Form S-3 Registration Statement; and (5) a cause-and-effect relationship, over time, between unexpected corporate events or financial releases and an immediate response in stock price.”  In re Xcelera.com Securities Litig., 430 F.3d 503 (2005).  Other courts have supplemented these Cammer factors with a few others: market capitalization, the bid/ask spread, float, and analyses of autocorrelation.  No one can say, though, how each factor should be assessed (e.g., How many securities analysts must follow the stock? How much autocorrelation is permissible?  How large may the bid-ask spread be?).  Nor is there guidance on how to balance factors when some weigh in favor of efficiency and others don’t.  It’s a crapshoot.
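
Courts typically assess that last Cammer factor with an event study. A bare-bones sketch of the exercise (a market-model regression in Python; the data, column names, and event dates are all assumptions of mine, not anything drawn from the case law) might look like this:

```python
# Minimal event-study sketch for the fifth Cammer factor: does the stock react
# promptly to unexpected company news?  Data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

def abnormal_returns(returns: pd.DataFrame, event_dates: list) -> pd.Series:
    """`returns` is assumed to be indexed by date with 'stock_ret' and 'market_ret'
    columns; `event_dates` are the dates of unexpected corporate news."""
    estimation = returns.drop(index=event_dates)          # fit the market model on non-event days
    X = sm.add_constant(estimation["market_ret"])
    model = sm.OLS(estimation["stock_ret"], X).fit()      # stock_ret = a + b * market_ret

    events = returns.loc[event_dates]
    predicted = model.params["const"] + model.params["market_ret"] * events["market_ret"]
    return events["stock_ret"] - predicted                # abnormal returns on news days
```

The expert (and ultimately the court) then asks whether those abnormal returns are statistically distinguishable from noise; if so, that is evidence of the cause-and-effect relationship between news and price that the fifth factor contemplates.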

In addition, focusing at the outset on whether the market at issue is efficient creates a market definition paradox in Rule 10b-5 actions.  When courts assess whether the market for a company’s stock is efficient, they assume that “the market” consists of trades in that company’s stock.  This is apparent from the Cammer (and supplementary) factors, all of which are company-specific.  It’s also implicit in portions of the Halliburton majority opinion, such as the observation that the plaintiff “submitted an event study of vari­ous episodes that might have been expected to affect the price of Halliburton’s stock, in order to demonstrate that the market for that stock takes account of material, public information about the company.”  (Emphasis added.)

But the semi-strong version of the Efficient Capital Markets Hypothesis (ECMH), the economic theory upon which Basic rests, rejects the notion that there is a “market” for a single company’s stock.  Both the semi-strong ECMH and Basic reason that public misinformation is quickly incorporated into the price of securities traded on public exchanges.  Private misinformation, by contrast, usually is not – even when such misinformation results in large trades that significantly alter the quantity demanded or quantity supplied of the relevant stock.  The reason is that the relevant market is not the stock of that particular company but rather the universe of stocks offering a similar package of risk and reward.  Because a private misinformation-induced increase in demand for a single company’s stock – even if large relative to the number of shares outstanding – is likely to be tiny compared to the number of available shares of close substitutes for that company’s stock, private misinformation about a company is unlikely to be reflected in the price of the company’s stock.  Public misinformation, by contrast, affects a stock’s price because it not only changes quantities demanded and supplied but also causes investors to adjust their willingness to pay or willingness to accept.  Accordingly, both the semi-strong ECMH and Basic assume that only public misinformation can be assured to affect stock prices.  That’s why, as the Halliburton majority observes, there is a presumption of price effect only if the plaintiff proves public misinformation, materiality, and an efficient market.  (For a nice explanation of this idea in the context of a real case, see Judge Easterbrook’s opinion in West v. Prudential Securities.)
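
A back-of-the-envelope calculation (with figures that are purely hypothetical and chosen only to illustrate relative magnitudes) shows why even a “large” privately informed purchase is negligible once the market is defined to include close substitutes:

```python
# Hypothetical figures only; nothing here comes from Basic, Halliburton, or West.
SHARES_OUTSTANDING = 100_000_000   # assumed shares outstanding in the single company
PRIVATE_PURCHASE = 5_000_000       # an assumed "large" privately informed purchase (5% of the company)
CLOSE_SUBSTITUTES = 200            # assumed number of stocks offering a similar risk/reward package

substitute_universe = CLOSE_SUBSTITUTES * SHARES_OUTSTANDING
print(f"Relative to the company's own shares:  {PRIVATE_PURCHASE / SHARES_OUTSTANDING:.1%}")  # 5.0%
print(f"Relative to the substitute universe:   {PRIVATE_PURCHASE / substitute_universe:.3%}") # 0.025%
```

On those assumptions, a purchase that looks enormous relative to the single company’s float is a rounding error relative to the pool of substitutes, which is why private misinformation, unlike public misinformation, cannot be presumed to move the price.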

The paradox, then, is that Basic and the semi-strong ECMH, in requiring public misinformation, assume that the relevant market is not company-specific.  But for purposes of determining whether the “market” is efficient, the market is assumed to consist of trades in a single company’s stock.

The Supreme Court could have avoided both the practical difficulties in assessing market efficiency and the theoretical paradox identified herein had it altered Basic to require plaintiffs to establish not an efficient market but an actual price impact. Alas.