Archives For scholarship

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call any law as ineffective as these “good”. And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-about issue at both the FCC and FTC. Today, there are well over 4 billion robocalls made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1992, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. And we were still in a world of analog telephones, and Caller ID was still a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare; most of these calls came to landline phones during dinner where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny to it, on the basis that it was regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for trying to do things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vainglorious and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including over the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, it is their principal duty to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.

 

As Thom previously posted, he and I have a new paper explaining The Case for Doing Nothing About Common Ownership of Small Stakes in Competing Firms. Our paper is a response to cries from the likes of Einer Elhauge and of Eric Posner, Fiona Scott Morton, and Glen Weyl, who have called for various types of antitrust action to rein in what they claim is an “economic blockbuster” and “the major new antitrust challenge of our time,” respectively. This is the first in a series of posts that will unpack some of the issues and arguments we raise in our paper.

At issue is the growth in the incidence of common ownership across firms within various industries. In particular, institutional investors with broad portfolios frequently report owning small stakes in a number of firms within a given industry. Although small, these stakes may still represent large block holdings relative to other investors. This intra-industry diversification, critics claim, changes the managerial objectives of corporate executives from aggressively competing to increase their own firm’s profits to tacitly colluding to increase industry-level profits instead. The reason for this change is that competition by one firm comes at a cost of profits from other firms in the industry. If investors own shares across firms, then any competitive gains in one firm’s stock are offset by competitive losses in the stocks of other firms in the investor’s portfolio. If one assumes corporate executives aim to maximize total value for their largest shareholders, then managers would have incentive to soften competition against firms with which they share common ownership. Or so the story goes (more on that in a later post).

Elhauge and Posner, et al., draw their motivation for new antitrust offenses from a handful of papers that purport to establish an empirical link between the degree of common ownership among competing firms and various measures of softened competitive behavior, including airline prices, banking fees, executive compensation, and even corporate disclosure patterns. The paper of most note, by José Azar, Martin Schmalz, and Isabel Tecu and forthcoming in the Journal of Finance, claims to identify a causal link between the degree of common ownership among airlines competing on a given route and the fares charged for flights on that route.

Measuring common ownership with MHHI

Azar, et al.’s airline paper uses a metric of industry concentration called a Modified Herfindahl–Hirschman Index, or MHHI, to measure the degree of industry concentration taking into account the cross-ownership of investors’ stakes in competing firms. The original Herfindahl–Hirschman Index (HHI) has long been used as a measure of industry concentration, debuting in the Department of Justice’s Horizontal Merger Guidelines in 1982. The HHI is calculated by squaring the market share of each firm in the industry and summing the resulting numbers.
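To make the arithmetic concrete, here is a minimal sketch of the HHI calculation just described, with a hypothetical four-firm market for illustration:

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.

    Shares are expressed in percentage points (0-100), as in the
    Horizontal Merger Guidelines, so HHI runs from near 0 (an
    atomistic market) up to 10,000 (a pure monopoly).
    """
    return sum(s ** 2 for s in market_shares)

# Hypothetical market with firm shares of 40%, 30%, 20%, and 10%:
print(hhi([40, 30, 20, 10]))  # 1600 + 900 + 400 + 100 = 3000
```

Under the current Guidelines, a market with an HHI above 2,500 is treated as highly concentrated, which is why a single summary number like this carries so much weight in merger review.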

The MHHI is rather more complicated. MHHI is composed of two parts: the HHI measuring product market concentration and the MHHI_Delta measuring the additional concentration due to common ownership. We offer a step-by-step description of the calculations and their economic rationale in an appendix to our paper. For this post, I’ll try to distill that down. The MHHI_Delta essentially has three components, each of which is measured relative to every possible competitive pairing in the market as follows:

  1. A measure of the degree of common ownership between Company A and Company -A (Not A). This is calculated by multiplying the percentage of Company A shares owned by each Investor I with the percentage of shares Investor I owns in Company -A, then summing those values across all investors in Company A. As this value increases, MHHI_Delta goes up.
  2. A measure of the degree of ownership concentration in Company A, calculated by squaring the percentage of shares owned by each Investor I and summing those numbers across investors. As this value increases, MHHI_Delta goes down.
  3. A measure of the degree of product market power exerted by Company A and Company -A, calculated by multiplying the market shares of the two firms. As this value increases, MHHI_Delta goes up.

This process is repeated and aggregated first for every pairing of Company A and each competing Company -A, then repeated again for every other company in the market relative to its competitors (e.g., Companies B and -B, Companies C and -C, etc.). Mathematically, MHHI_Delta takes the form:

MHHI_Delta = Σ_A Σ_{-A ≠ A} s_A · s_{-A} · (Σ_I β_{I,A} β_{I,-A}) / (Σ_I β_{I,A}²)

where the s’s represent the firm market shares of, and the βs represent the ownership shares of Investor I in, the respective companies A and -A.
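The three components above can be sketched in code. This is a toy illustration of the MHHI_Delta calculation as described in the text, not the authors’ actual implementation; the firm shares and investor ownership matrix in the example are hypothetical:

```python
def mhhi_delta(market_shares, beta):
    """Common-ownership increment to the HHI (MHHI_Delta).

    market_shares: firm market shares in percentage points.
    beta[i][j]: fraction of firm j's equity held by investor i.

    For each ordered pair of distinct firms (A, -A), the product of
    their market shares is weighted by the ratio of cross-ownership
    (sum over investors of beta_iA * beta_i-A) to the concentration
    of ownership in firm A (sum over investors of beta_iA squared).
    """
    n = len(market_shares)
    n_investors = len(beta)
    delta = 0.0
    for a in range(n):  # Company A
        own_concentration = sum(beta[i][a] ** 2 for i in range(n_investors))
        for b in range(n):  # Company -A
            if b == a:
                continue
            cross = sum(beta[i][a] * beta[i][b] for i in range(n_investors))
            delta += market_shares[a] * market_shares[b] * cross / own_concentration
    return delta

# Two firms at 50% each; a single investor holds 100% of both.
# Cross-ownership is total, so the increment equals the HHI gain
# from an outright merger of the two firms:
print(mhhi_delta([50, 50], [[1.0, 1.0]]))  # 5000.0

# Fully separate ownership: no common-ownership increment at all.
print(mhhi_delta([50, 50], [[1.0, 0.0], [0.0, 1.0]]))  # 0.0
```

The two limiting cases in the example bracket the measure’s intended behavior: full common ownership makes a duopoly look like a monopoly, while disjoint ownership adds nothing to the ordinary HHI.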

As the relative concentration of cross-owning investors to all investors in Company A increases (i.e., the ratio on the right increases), managers are assumed to be more likely to soften competition with that competitor. As those two firms control more of the market, managers’ ability to tacitly collude and increase joint profits is assumed to be higher. Consequently, the empirical research assumes that as MHHI_Delta increases, we should observe less competitive behavior.

And indeed that is the “blockbuster” evidence giving rise to Elhauge’s and Posner, et al.’s arguments. For example, Azar, et al., calculate HHI and MHHI_Delta for every US airline market (defined either as city-pairs or departure-destination pairs) for each quarter of the 14-year time period in their study. They then regress ticket prices for each route against the HHI and the MHHI_Delta for that route, controlling for a number of other potential factors. They find that airfare prices are 3% to 7% higher due to common ownership. Other papers using the same or similar measures of common ownership concentration have likewise identified positive correlations between MHHI_Delta and their respective measures of anti-competitive behavior.

Problems with the problem and with the measure

We argue that both the theoretical argument underlying the empirical research and the empirical research itself suffer from some serious flaws. On the theoretical side, we have two concerns. First, we argue that there is a tremendous leap of faith (if not logic) in the idea that corporate executives would forgo their own self-interest and the interests of the vast majority of shareholders and soften competition simply because a small number of small stakeholders are intra-industry diversified. Second, we argue that even if managers were so inclined, it clearly is not the case that softening competition would necessarily be desirable for institutional investors that are both intra- and inter-industry diversified, since supra-competitive pricing to increase profits in one industry would decrease profits in related industries that may also be in the investors’ portfolios.

On the empirical side, we have concerns both with the data used to calculate the MHHI_Deltas and with the nature of the MHHI_Delta itself. First, the data on institutional investors’ holdings are taken from Schedule 13 filings, which report aggregate holdings across all the institutional investor’s funds. Using these data masks the actual incentives of the institutional investors with respect to investments in any individual company or industry. Second, the construction of the MHHI_Delta suffers from serious endogeneity concerns, both in investors’ shareholdings and in market shares. Finally, the MHHI_Delta, while seemingly intuitive, is an empirical unknown. While HHI is theoretically bounded in a way that lends to interpretation of its calculated value, the same is not true for MHHI_Delta. This makes any inference or policy based on nominal values of MHHI_Delta completely arbitrary at best.

We’ll expand on each of these concerns in upcoming posts. We will then take on the problems with the policy proposals being offered in response to the common ownership ‘problem.’


I’ll be participating in two excellent antitrust/consumer protection events next week in DC, both of which may be of interest to our readers:

5th Annual Public Policy Conference on the Law & Economics of Privacy and Data Security

hosted by the GMU Law & Economics Center’s Program on Economics & Privacy, in partnership with the Future of Privacy Forum, and the Journal of Law, Economics & Policy.

Conference Description:

Data flows are central to an increasingly large share of the economy. A wide array of products and business models—from the sharing economy and artificial intelligence to autonomous vehicles and embedded medical devices—rely on personal data. Consequently, privacy regulation leaves a large economic footprint. As with any regulatory enterprise, the key to sound data policy is striking a balance between competing interests and norms that leaves consumers better off; finding an approach that addresses privacy concerns, but also supports the benefits of technology is an increasingly complex challenge. Not only is technology continuously advancing, but individual attitudes, expectations, and participation vary greatly. New ideas and approaches to privacy must be identified and developed at the same pace and with the same focus as the technologies they address.

This year’s symposium will include panels on Unfairness under Section 5: Unpacking “Substantial Injury”, Conceptualizing the Benefits and Costs from Data Flows, and The Law and Economics of Data Security.

I will be presenting a draft paper, co-authored with Kristian Stout, on the FTC’s reasonableness standard in data security cases following the Commission’s decision in LabMD, entitled, When “Reasonable” Isn’t: The FTC’s Standard-less Data Security Standard.

Conference Details:

  • Thursday, June 8, 2017
  • 8:00 am to 3:40 pm
  • at George Mason University, Founders Hall (next door to the Law School)
    • 3351 Fairfax Drive, Arlington, VA 22201

Register here

View the full agenda here

 

The State of Antitrust Enforcement

hosted by the Federalist Society.

Panel Description:

Antitrust policy during much of the Obama Administration was a continuation of the Bush Administration’s minimal involvement in the market. However, at the end of President Obama’s term, there was a significant pivot to investigations and blocks of high profile mergers such as Halliburton-Baker Hughes, Comcast-Time Warner Cable, Staples-Office Depot, Sysco-US Foods, and Aetna-Humana and Anthem-Cigna. How will or should the new Administration analyze proposed mergers, including certain high profile deals like Walgreens-Rite Aid, AT&T-Time Warner, Inc., and DraftKings-FanDuel?

Join us for a lively luncheon panel discussion that will cover these topics and the anticipated future of antitrust enforcement.

Speakers:

  • Albert A. Foer, Founder and Senior Fellow, American Antitrust Institute
  • Professor Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Honorable Joshua D. Wright, Professor of Law, George Mason University School of Law
  • Moderator: Honorable Ronald A. Cass, Dean Emeritus, Boston University School of Law and President, Cass & Associates, PC

Panel Details:

  • Friday, June 09, 2017
  • 12:00 pm to 2:00 pm
  • at the National Press Club, MWL Conference Rooms
    • 529 14th Street, NW, Washington, DC 20045

Register here

Hope to see everyone at both events!

TOTM is pleased to welcome guest blogger Nicolas Petit, Professor of Law & Economics at the University of Liege, Belgium.

Nicolas has also recently been named a (non-resident) Senior Scholar at ICLE (joining Joshua Wright, Joanna Shepherd, and Julian Morris).

Nicolas is also (as of March 2017) a Research Professor at the University of South Australia, co-director of the Liege Competition & Innovation Institute and director of the LL.M. program in EU Competition and Intellectual Property Law. He is also a part-time advisor to the Belgian competition authority.

Nicolas is a prolific scholar specializing in competition policy, IP law, and technology regulation. Nicolas Petit is the co-author (with Damien Geradin and Anne Layne-Farrar) of EU Competition Law and Economics (Oxford University Press, 2012) and the author of Droit européen de la concurrence (Domat Montchrestien, 2013), a monograph that was awarded the prize for the best law book of the year at the Constitutional Court in France.

One of his most recent papers, Significant Impediment to Industry Innovation: A Novel Theory of Harm in EU Merger Control?, was recently published as an ICLE Competition Research Program White Paper. His scholarship is available on SSRN and he tweets at @CompetitionProf.

Welcome, Nicolas!

Please Join Us For A Conference On Intellectual Property Law

INTELLECTUAL PROPERTY & GLOBAL PROSPERITY

Keynote Speaker: Dean Kamen

October 6-7, 2016

Antonin Scalia Law School
George Mason University
Arlington, Virginia

CLICK HERE TO REGISTER NOW

**9 Hours CLE**

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to take greater consideration of privacy concerns during merger review and encouraged to even bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination to one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on metrics would want to serve only those who can pay more by charging them a lower price, while charging those who cannot afford it a larger one. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

An important new paper was recently posted to SSRN by Commissioner Joshua Wright and Joanna Tsai. It addresses a very hot topic in the innovation industries: the role of patented innovation in standard setting organizations (SSOs), what are known as standard essential patents (SEPs), and whether the nature of the contractual commitment that adheres to an SEP — specifically, a licensing commitment known by another acronym, FRAND (Fair, Reasonable and Non-Discriminatory) — represents a breakdown in private ordering in the efficient commercialization of new technology. This is an important contribution to the growing literature on patented innovation and SSOs, if only due to the heightened interest in these issues by the FTC and the Antitrust Division at the DOJ.

http://ssrn.com/abstract=2467939.

“Standard Setting, Intellectual Property Rights, and the Role of Antitrust in Regulating Incomplete Contracts”

JOANNA TSAI, Government of the United States of America – Federal Trade Commission
JOSHUA D. WRIGHT, Federal Trade Commission, George Mason University School of Law

A large and growing number of regulators and academics, while recognizing the benefits of standardization, view skeptically the role standard setting organizations (SSOs) play in facilitating standardization and commercialization of intellectual property rights (IPRs). Competition agencies and commentators suggest specific changes to current SSO IPR policies to reduce incompleteness and favor an expanded role for antitrust law in deterring patent holdup. These criticisms and policy proposals are based upon the premise that the incompleteness of SSO contracts is inefficient and the result of market failure rather than an efficient outcome reflecting the costs and benefits of adding greater specificity to SSO contracts and emerging from a competitive contracting environment. We explore conceptually and empirically that presumption. We also document and analyze changes to eleven SSO IPR policies over time. We find that SSOs and their IPR policies appear to be responsive to changes in perceived patent holdup risks and other factors. We find the SSOs’ responses to these changes are varied across SSOs, and that contractual incompleteness and ambiguity for certain terms persist both across SSOs and over time, despite many revisions and improvements to IPR policies. We interpret this evidence as consistent with a competitive contracting process. We conclude by exploring the implications of these findings for identifying the appropriate role of antitrust law in governing ex post opportunism in the SSO setting.

As it begins its hundredth year, the FTC is increasingly becoming the Federal Technology Commission. The agency’s role in regulating data security, privacy, the Internet of Things, high-tech antitrust and patents, among other things, has once again brought to the forefront the question of the agency’s discretion and the sources of the limits on its power.

Please join us this Monday, December 16th, for a half-day conference launching the year-long “FTC: Technology & Reform Project,” which will assess both process and substance at the FTC and recommend concrete reforms to help ensure that the FTC continues to make consumers better off.

FTC Commissioner Josh Wright will give a keynote luncheon address titled, “The Need for Limits on Agency Discretion and the Case for Section 5 UMC Guidelines.” Project members will discuss the themes raised in our inaugural report and how they might inform some of the most pressing issues of FTC process and substance confronting the FTC, Congress and the courts. The afternoon will conclude with a Fireside Chat with former FTC Chairmen Tim Muris and Bill Kovacic, followed by a cocktail reception.

Full Agenda:

  • Lunch and Keynote Address (12:00-1:00)
    • FTC Commissioner Joshua Wright
  • Introduction to the Project and the “Questions & Frameworks” Report (1:00-1:15)
    • Gus Hurwitz, Geoffrey Manne and Berin Szoka
  • Panel 1: Limits on FTC Discretion: Institutional Structure & Economics (1:15-2:30)
    • Jeffrey Eisenach (AEI | Former Economist, BE)
    • Todd Zywicki (GMU Law | Former Director, OPP)
    • Tad Lipsky (Latham & Watkins)
    • Geoffrey Manne (ICLE) (moderator)
  • Panel 2: Section 5 and the Future of the FTC (2:45-4:00)
    • Paul Rubin (Emory University Law and Economics | Former Director of Advertising Economics, BE)
    • James Cooper (GMU Law | Former Acting Director, OPP)
    • Gus Hurwitz (University of Nebraska Law)
    • Berin Szoka (TechFreedom) (moderator)
  • A Fireside Chat with Former FTC Chairmen (4:15-5:30)
    • Tim Muris (Former FTC Chairman | George Mason University) & Bill Kovacic (Former FTC Chairman | George Washington University)
  • Reception (5:30-6:30)
Our conference is a “widely-attended event.” Registration is $75 but free for nonprofit, media and government attendees. Space is limited, so RSVP today!

Working Group Members:
Howard Beales
Terry Calvani
James Cooper
Jeffrey Eisenach
Gus Hurwitz
Thom Lambert
Tad Lipsky
Geoffrey Manne
Timothy Muris
Paul Rubin
Joanna Shepherd-Bailey
Joe Sims
Berin Szoka
Sasha Volokh
Todd Zywicki

William Buckley once described a conservative as “someone who stands athwart history, yelling Stop.” Ironically, this definition applies to Professor Tim Wu’s stance against the Supreme Court applying the Constitution’s protections to the information age.

Wu admits he is going against the grain by fighting what he describes as leading liberals from the civil rights era, conservatives and economic libertarians bent on deregulation, and corporations practicing “First Amendment opportunism.” Wu wants to reorient our thinking on the First Amendment, limiting its domain to what he believes are its rightful boundaries.

But in his relatively recent piece in The New Republic and journal article in U Penn Law Review, Wu bites off more than he can chew. First, Wu does not recognize that the First Amendment is used “opportunistically” only because the New Deal revolution and subsequent jurisprudence have foreclosed all other Constitutional avenues for challenging economic regulations. Second, his positive formulation for differentiating protected speech from non-speech will lead to results counter to his stated preferences. Third, contra both conservatives like Bork and liberals like Wu, the Constitution’s protections can and should be adapted to new technologies, consistent with the original meaning.

Wu’s Irrational Lochner-Baiting

Wu makes the case that the First Amendment has been interpreted to protect things that aren’t really within the First Amendment’s purview. He starts his New Republic essay with Sorrell v. IMS (cf. TechFreedom’s Amicus Brief), describing the data mining process as something undeserving of any judicial protection. He deems the application of the First Amendment to economic regulation a revival of Lochner, evincing a misunderstanding of the case that appeals to undefended academic prejudice and popular ignorance. This is important because the economic liberty which was long protected by the Constitution, either as matter of federalism or substantive rights, no longer has any protection from government power aside from the First Amendment jurisprudence Wu decries.

Lochner v. New York is a 1905 Supreme Court case that has received more scorn, left and right, than just about any case that isn’t dealing with slavery or segregation. This has led to the phenomenon my former Constitutional Law professor David Bernstein calls “Lochner-baiting,” where a commentator describes any Supreme Court decision with which he or she disagrees as Lochnerism. Wu does this throughout his New Republic piece, somehow seeing parallels between application of the First Amendment to the Internet and a Liberty of Contract case under substantive Due Process.

The idea that economic regulation should receive little judicial scrutiny is not new. In fact, it has been the operating law since at least the famous Carolene Products footnote four. However, the idea that only discrete and insular minorities should receive First Amendment protection is a novel application of law. Wu implicitly argues exactly this when he says “corporations are not the Jehovah’s Witnesses, unpopular outsiders needing a safeguard that legislators and law enforcement could not be moved to provide.” On the contrary, the application of First Amendment protections to Jehovah’s Witnesses and student protesters is part and parcel of the application of the First Amendment to advertising and data that drives the Internet. Just because Wu does not believe businesspersons need the Constitution’s protections does not mean they do not apply.

Finally, while Wu may be correct that the First Amendment should not apply to everything for which it is being asserted today, he does not seem to recognize why there is “First Amendment opportunism.” In theory, those trying to limit the power of government over economic regulation could use any number of provisions in the text of the Constitution: enumerated powers of Congress and the Tenth Amendment, the Ninth Amendment, the Contracts Clause, the Privileges or Immunities Clause of the Fourteenth Amendment, the Due Process Clause of the Fifth and Fourteenth Amendments, the Equal Protection Clause, etc. For much of the Constitution’s history, the combination of these clauses generally restricted the growth of government over economic affairs. Lochner was just one example of courts generally putting the burden on governments to show the restrictions placed upon economic liberty are outweighed by public interest considerations.

The Lochner court actually protected a small bakery run by immigrants from special interest legislation aimed at putting them out of business on behalf of bigger, established competitors. Shifting this burden away from government and towards the individual is not clearly the good thing Wu assumes. Applying the same Liberty of Contract doctrine, the Supreme Court struck down legislation enforcing housing segregation in Buchanan v. Warley and legislation outlawing the teaching of the German language in Meyer v. Nebraska. After the New Deal revolution, courts chose to apply only rational basis review to economic regulation, and would need to find a new way to protect fundamental rights that were once classified as economic in nature. The burden shifted to individuals to prove an economic regulation is not loosely related to any conceivable legitimate governmental purpose.

Now, the only Constitutional avenue left for a winnable challenge of economic regulation is the First Amendment. Under the rational basis test, the Tenth Circuit in Powers v. Harris actually found that protecting businesses from competition is a legitimate state interest. This is why the cat owner Wu references in his essay and describes in more detail in his law review article brought a First Amendment claim against a regime requiring licensing of his talking cat show: there is basically no other Constitutional protection against burdensome economic regulation.

The More You Edit, the More Your [sic] Protected?

In his law review piece, Machine Speech, Wu explains that the First Amendment has a functionality requirement. He points out that the First Amendment has never been interpreted to mean, and should not mean, that all communication is protected. Wu believes the dividing lines between protected and unprotected speech should be whether the communicator is a person attempting to communicate a specific message in a non-mechanical way to another, and whether the communication at issue is more speech than conduct. The first test excludes carriers and conduits that handle or process information but have an ultimately functional relationship with it, like Federal Express or a telephone company. The second excludes tools: works that are purely functional, like navigational charts, court filings, or contracts.

Of course, Wu admits the actual application of his test online can be difficult. In his law review article he deals with some easy cases, like the obvious application of the First Amendment to blog posts, tweets, and video games, and non-application to Google Maps. Harder cases, though, are the main target of his article: search engines, automated concierges, and other algorithm-based services. At the very end of his law review article, Wu finally states how to differentiate between protected speech and non-speech in such cases:

The rule of thumb is this: the more the concierge merely tells the user about himself, the more like a tool and less like protected speech the program is. The more the programmer puts in place his opinion, and tries to influence the user, the more likely there will be First Amendment coverage. These are the kinds of considerations that ultimately should drive every algorithmic output case that courts could encounter.

Unfortunately for Wu, this test would lead to results counterproductive to his goals.

Applying this rationale to Google, for instance, would lead to the perverse conclusion that the more the allegations that the company tinkers with its algorithm to disadvantage competitors are true, the more likely Google would receive First Amendment protection. And if Net Neutrality advocates are right that ISPs are restricting consumer access to content, then the analogy to the newspaper in Tornillo becomes a good one: ISPs have a right to exercise editorial discretion, and mandating speech would be unconstitutional. The application of Wu’s test to search engines and ISPs effectively puts them in a “use it or lose it” position with their First Amendment rights that courts have rejected. The idea that antitrust and FCC regulations can apply without First Amendment scrutiny only if search engines and ISPs are not doing anything requiring antitrust or FCC scrutiny is counterproductive to sound public policy, and presumably to the regulatory goals Wu holds.

First Amendment Dynamism

The application of the First Amendment to the Internet Age does not involve large leaps of logic from current jurisprudence. As Stuart Minor Benjamin shows in his article in the same issue of the U Penn Law Review, the bigger leap would be to follow Wu’s recommendations. We do not need a 21st Century First Amendment that some on the left have called for—the original one will do just fine.

This is because the Constitution’s protections can be dynamically applied, consistent with original meaning. Wu’s complaint is that he does not like how the First Amendment has evolved. Even his points that have merit, though, seem to indicate a stasis mentality. In her book, The Future and Its Enemies, Virginia Postrel described this mentality as a preference for a “controlled, uniform society that changes only with permission from some central authority.” But the First Amendment’s text is not a grant of power to the central authority to control or permit anything. It actually restricts government from intervening into the open-ended society where creativity and enterprise, operating under predictable rules, generate progress in unpredictable ways.

The application of current First Amendment jurisprudence to search engines, ISPs, and data mining will not necessarily create a world where machines have rights. Wu is right that the line must be drawn somewhere, but his technocratic attempt to empower government officials to control innovation is short-sighted. Ultimately, the First Amendment is as much about protecting the individuals who innovate and create online as those in the offline world. Such protection embraces the future instead of fearing it.

[Cross posted at the Center for the Protection of Intellectual Property blog.]

Today’s public policy debates frame copyright policy solely in terms of a “trade off” between the benefits of incentivizing new works and the social deadweight losses imposed by the access restrictions imposed by these (temporary) “monopolies.” I recently posted to SSRN a new research paper, called How Copyright Drives Innovation in Scholarly Publishing, explaining that this is a fundamental mistake that has distorted the policy debates about scholarly publishing.

This policy mistake is important because it has led commentators and decision-makers to dismiss as irrelevant to copyright policy the investments by scholarly publishers of $100s of millions in creating innovative distribution mechanisms in our new digital world. These substantial sunk costs are in addition to the $100s of millions expended annually by publishers in creating, publishing and maintaining reliable, high-quality, standardized articles distributed each year in a wide-ranging variety of academic disciplines and fields of research. The articles now number in the millions themselves; in 2009, for instance, over 2,000 publishers issued almost 1.5 million articles just in the scientific, technical and medical fields, exclusive of the humanities and social sciences.

The mistaken incentive-to-create conventional wisdom in copyright policy is further compounded by widespread misinformation today about the allegedly “zero cost” of digital publication. As a result, many people are simply unaware of the substantial investments in infrastructure, skilled labor and other resources required to create, publish and maintain scholarly articles on the Internet and in other digital platforms.

This is not merely a so-called “academic debate” about copyright policy and publishing.

The policy distortion caused by the narrow, reductionist incentive-to-create conventional wisdom, when combined with the misinformation about the economics of digital business models, has been spurring calls for “open access” mandates for scholarly research, such as at the National Institutes of Health, in recently proposed legislation (the FASTR Act), and in other proposed regulations. This policy distortion even influenced Justice Breyer’s opinion in the recent decision in Kirtsaeng v. John Wiley & Sons (U.S. Supreme Court, March 19, 2013), as he blithely dismissed commercial incentives as being irrelevant to fundamental copyright policy. These legal initiatives and the Kirtsaeng decision are motivated in various ways by the incentive-to-create conventional wisdom, by the misunderstanding of the economics of scholarly publishing, and by anti-copyright rhetoric on both the left and right, all of which has become more pervasive in recent years.

But, as I explain in my paper, courts and commentators have long recognized that incentivizing authors to produce new works is not the sole justification for copyright—copyright also incentivizes intermediaries like scholarly publishers to invest in and create innovative legal and market mechanisms for publishing and distributing articles that report on scholarly research. These two policies—the incentive to create and the incentive to commercialize—are interrelated, as both are necessary in justifying how copyright law secures the dynamic innovation that makes possible the “progress of science.” In short, if the law does not secure the fruits of labors of publishers who create legal and market mechanisms for disseminating works, then authors’ labors will go unrewarded as well.

As Justice Sandra Day O’Connor famously observed in the 1984 decision in Harper & Row v. Nation Enterprises: “In our haste to disseminate news, it should not be forgotten the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” Thus, in Harper & Row, the Supreme Court reached the uncontroversial conclusion that copyright secures the fruits of productive labors “where an author and publisher have invested extensive resources in creating an original work.” (emphases added)

This concern with commercial incentives in copyright law is not just theory; in fact, it is most salient in scholarly publishing because researchers are not motivated by the pecuniary benefits offered to authors in conventional publishing contexts. As a result of the policy distortion caused by the incentive-to-create conventional wisdom, some academics and scholars now view scholarly publishing by commercial firms who own the copyrights in the articles as “a form of censorship.” Yet, as courts have observed: “It is not surprising that [scholarly] authors favor liberal photocopying . . . . But the authors have not risked their capital to achieve dissemination. The publishers have.” As economics professor Mark McCabe observed (somewhat sardonically) in a research paper released last year for the National Academy of Sciences: he and his fellow academic “economists knew the value of their journals, but not their prices.”

The widespread ignorance among the public, academics and commentators about the economics of scholarly publishing in the Internet age is quite profound relative to the actual numbers. Based on interviews with six different scholarly publishers—Reed Elsevier, Wiley, SAGE, the New England Journal of Medicine, the American Chemical Society, and the American Institute of Physics—my research paper details for the first time in a publication, and at great length, the necessary transaction costs incurred by any successful publishing enterprise in the Internet age. To take but one small example from my research paper: Reed Elsevier began developing its online publishing platform in 1995, a scant two years after the advent of the World Wide Web, and its sunk costs in creating this first publishing platform and then digitally archiving its previously published content were over $75 million. Other scholarly publishers report similarly high costs in both absolute and relative terms.

Given the widespread misunderstandings of the economics of Internet-based business models, it bears noting that such high costs are not unique to scholarly publishers. Microsoft reportedly spent $10 billion developing Windows Vista before it sold a single copy, and ultimately did not sell many at all. Google regularly invests $100s of millions, such as $890 million in the first quarter of 2011, in upgrading its data centers. It is somewhat surprising that such things still have to be pointed out a scant decade after the bursting of the dot-com bubble, a bubble precipitated by exactly the same mistaken view that businesses have somehow been “liberated” from the economic realities of cost by the Internet.

Just as with the extensive infrastructure and staffing costs, the actual costs incurred by publishers in operating the peer review system for their scholarly journals are also widely misunderstood. Individual publishers now receive hundreds of thousands—the large scholarly publisher, Reed Elsevier, receives more than one million—manuscripts per year. Reed Elsevier’s annual budget for operating its peer review system is over $100 million, which reflects the full scope of staffing, infrastructure, and other transaction costs inherent in operating a quality-control system that rejects 65% of the submitted manuscripts. Reed Elsevier’s budget for its peer review system is consistent with industry-wide studies that have reported that the peer review system costs approximately $2.9 billion annually in operating costs (translating into dollars the £1.9 billion reported in the study). For those articles accepted for publication, there are additional, extensive production costs, and then there are extensive post-publication costs in updating hypertext links of citations, cyber security of the websites, and related digital issues.

In sum, many people mistakenly believe that scholarly publishers are no longer necessary because the Internet has made moot all such intermediaries of traditional brick-and-mortar economies—a viewpoint reinforced by the equally mistaken incentive-to-create conventional wisdom in the copyright policy debates today. But intermediaries like scholarly publishers face the exact same incentive problem that the incentive-to-create conventional wisdom universally recognizes for authors: no one will make the necessary investments to create or distribute a work if the fruits of their labors are not secured to them. This basic economic fact—that the dynamic development of innovative distribution mechanisms requires substantial investment in both people and resources—is what makes commercialization an essential feature of both copyright policy and law (and of all intellectual property doctrines).

It is for this reason that copyright law has long promoted and secured the value that academics and scholars have come to depend on in their journal articles—reliable, high-quality, standardized, networked, and accessible research that meets the differing expectations of readers in a variety of fields of scholarly research. This is the value created by the scholarly publishers. Scholarly publishers thus serve an essential function in copyright law by making the investments in and creating the innovative distribution mechanisms that fulfill the constitutional goal of copyright to advance the “progress of science.”

DISCLOSURE: The paper summarized in this blog posting was supported separately by a Leonardo Da Vinci Fellowship and by the Association of American Publishers (AAP). The author thanks Mark Schultz for very helpful comments on earlier drafts, and the AAP for providing invaluable introductions to the five scholarly publishers who shared their publishing data with him.

NOTE: Some small copy-edits were made to this blog posting.