Archives For legal scholarship

Welcome to the FTC UMC Roundup, our new weekly update of news and events relating to antitrust and, more specifically, to the Federal Trade Commission’s (FTC) newfound interest in “revitalizing” the field. Each week we will bring you a brief recap of the week that was and a preview of the week to come. All with a bit of commentary and news of interest to regular readers of Truth on the Market mixed in.

This week’s headline? Of course it’s that Alvaro Bedoya has been confirmed as the FTC’s fifth commissioner—notably breaking the commission’s 2-2 tie between Democrats and Republicans and giving FTC Chair Lina Khan the majority she has been lacking. Politico and Gibson Dunn both offer some thoughts on what to expect next—though none of the predictions are surprising: more aggressive merger review and litigation; UMC rulemakings on a range of topics, including labor, right-to-repair, and pharmaceuticals; and privacy-related consumer protection. The real question is how quickly and aggressively the FTC will implement this agenda. Will we see a flurry of rulemakings in the next week, or will they be rolled out over a period of months or years? Will the FTC risk major litigation questions with a “go big or go home” attitude, or will it take a more incrementalist approach to boiling the frog?

Much of the rest of this week’s action happened on the Hill. Khan, joined by Securities and Exchange Commission (SEC) Chair Gary Gensler, made the regular trip to Congress to ask for a bigger budget to support more hires. (FTC, Law360) Sen. Mike Lee  (R-Utah) asked for unanimous consent on his State Antitrust Enforcement Venue Act, but met resistance from Sen. Amy Klobuchar (D-Minn.), who wants that bill paired with her own American Innovation and Choice Online Act. This follows reports that Senate Majority Leader Chuck Schumer (D-N.Y.) is pushing Klobuchar to get support in line for both AICOA and the Open App Markets Act to be brought to the Senate floor. Of course, if they had the needed support, we probably wouldn’t be talking so much about whether they have the needed support.

Questions about the climate at the FTC continue following release of the Office of Personnel Management’s (OPM) Federal Employee Viewpoint Survey. Sen. Roger Wicker (R-Miss.) wants to know what has caused staff satisfaction at the agency to fall precipitously. And former senior FTC staffer Eileen Harrington issued a stern rebuke of the agency at this week’s open meeting, saying of the relationship between leadership and staff that: “The FTC is not a failed agency but it’s on the road to becoming one. This is a crisis.”

Perhaps the only thing experiencing greater inflation than the dollar is interest in the FTC doing something about inflation. Alden Abbott and Andrew Mercado remind us that these calls are misplaced. But that won’t stop politicians from demanding the FTC do something about high gas prices. Or beef production. Or utilities. Or baby formula.

A little further afield, the 5th U.S. Circuit Court of Appeals issued an opinion this week in a case involving SEC administrative-law judges that took broad issue with them on delegation, due process, and “take care” grounds. It should come as no surprise that this has led to much overwrought consternation that the opinion would dismantle the administrative state. But given that it is often the case that the SEC and FTC face similar constitutional issues (recall that Kokesh v. SEC was the precursor to AMG Capital), the 5th Circuit case could portend future problems for FTC adjudication. Add this to the queue with the Supreme Court’s pending review of whether federal district courts can consider constitutional challenges to an agency’s structure. The court was already scheduled to consider this question with respect to the FTC next term in Axon, and agreed this week to hear a similar SEC-focused case next term as well.

Some Navel-Gazing News! 

Congratulations to recent University of Michigan Law School graduate Kacyn Fujii, winner of our New Voices competition for contributions to our recent symposium on FTC UMC Rulemaking (hey, this post is actually part of that symposium, as well!). Kacyn’s contribution looked at the statutory basis for FTC UMC rulemaking authority and evaluated the use of such authority as a way to address problematic use of non-compete clauses.

And, one for the academics (and others who enjoy writing academic articles): you might be interested in this call for proposals for a research roundtable on Market Structuring Regulation that the International Center for Law & Economics will host in September. If you are interested in writing on topics that include conglomerate business models, market-structuring regulation, vertical integration, or other topics relating to the regulation and economics of contemporary markets, we hope to hear from you!

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call any law as ineffective as these “good”. And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-of issue at both the FCC and FTC. Today, there are well over 4 billion robocalls made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1992, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. And we were still in a world of analog telephones, and Caller ID was still a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare; most of these calls came to landline phones during dinner where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.
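
To put rough numbers on that “drop in the ocean” intuition, here is a back-of-the-envelope sketch. Every figure below is an illustrative assumption, not data from any actual enforcement action; the point is only about orders of magnitude.

```python
# Back-of-the-envelope illustration (all figures are assumptions, not actual
# enforcement data): even a headline-grabbing fine works out to a tiny implied
# cost per call, and reaches a vanishingly small share of the robocall flood.
fine_usd = 10_000_000               # an assumed "large" fine
calls_covered = 100_000_000         # assumed calls made by the fined firm
monthly_robocalls = 4_000_000_000   # rough industry-wide figure cited above
enforcement_window_months = 6       # assumed period the fine covers

cost_per_call = fine_usd / calls_covered
share_of_calls_touched = calls_covered / (monthly_robocalls * enforcement_window_months)

print(f"Implied penalty per call: ${cost_per_call:.2f}")                 # $0.10
print(f"Share of all robocalls reached: {share_of_calls_touched:.2%}")   # ~0.42%
```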

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny to it, on the basis that it was regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for trying to do things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vainglorious and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, it is their principal duty to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).
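
For readers who want a concrete sense of what “fully authenticated calls” means, here is a minimal sketch in Python of the underlying idea, which the industry effort commonly known as STIR/SHAKEN pursues: the originating carrier cryptographically signs an attestation of the calling number, and the terminating carrier verifies that signature before trusting the Caller ID. This is only an illustration of the concept, not the actual specification; the claim names and the PyJWT-based signing are assumptions chosen for brevity.

```python
# A minimal sketch (not the actual STIR/SHAKEN specification) of
# cryptographically authenticated Caller ID: the originating carrier signs
# an attestation of the calling number, and the terminating carrier verifies
# that signature before displaying it. Field names are illustrative only.
import time
import jwt  # PyJWT, installed with the 'cryptography' extra for ES256 support
from cryptography.hazmat.primitives.asymmetric import ec

# Originating carrier's signing key (in reality, certified by a policy authority).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

def sign_call(orig_number: str, dest_number: str) -> str:
    """Originating carrier attests to the calling number."""
    claims = {
        "orig": {"tn": orig_number},
        "dest": {"tn": [dest_number]},
        "iat": int(time.time()),
        "attest": "A",  # full attestation: the carrier knows both customer and number
    }
    return jwt.encode(claims, private_key, algorithm="ES256")

def verify_call(token: str) -> dict:
    """Terminating carrier checks the signature before trusting Caller ID."""
    return jwt.decode(token, public_key, algorithms=["ES256"])

token = sign_call("+12025551234", "+12125555678")
print(verify_call(token)["orig"]["tn"])  # trusted number, or an exception if forged
```

Once verification fails, the call can be flagged or blocked by the carrier or the customer — the kind of more tailored, less speech-burdening remedy discussed above.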

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.

 

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to take greater consideration of privacy concerns during merger review and encouraged to even bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, product quality can invariably be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination to one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on metrics would want to serve only those who can pay more by charging them a lower price, while charging those who cannot afford it a larger one. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
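
A stylized numerical sketch may help make the magnitudes point concrete. All numbers below are assumptions chosen purely for arithmetic clarity, not empirical estimates:

```python
# A stylized, purely hypothetical illustration of the "magnitudes" point:
# all numbers are assumptions, not data.
def consumer_surplus(wtp: float, price: float, buyers: int) -> float:
    """Total surplus for a consumer segment; zero if the segment is priced out."""
    return (wtp - price) * buyers if wtp >= price else 0.0

# Two segments: 100 consumers who value the product at $10, 100 who value it at $4.
# Under a uniform price of $10, only the first segment is served (output = 100).
uniform_cs = consumer_surplus(10, 10, 100) + consumer_surplus(4, 10, 100)       # 0.0
# Under data-driven pricing ($10 and $3), the poorer segment is served too (output = 200).
personalized_cs = consumer_surplus(10, 10, 100) + consumer_surplus(4, 3, 100)   # 100.0

print(uniform_cs, personalized_cs)
```

In this toy example the higher-valuation segment pays the same price either way, while the lower-valuation segment is served only under the data-driven scheme, so output doubles and consumer surplus rises; whether real-world magnitudes look anything like this is precisely the empirical question that remains unanswered.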

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

William Buckley once described a conservative as “someone who stands athwart history, yelling Stop.” Ironically, this definition applies to Professor Tim Wu’s stance against the Supreme Court applying the Constitution’s protections to the information age.

Wu admits he is going against the grain by fighting what he describes as leading liberals from the civil rights era, conservatives and economic libertarians bent on deregulation, and corporations practicing “First Amendment opportunism.” Wu wants to reorient our thinking on the First Amendment, limiting its domain to what he believes are its rightful boundaries.

But in his relatively recent piece in The New Republic and journal article in the U Penn Law Review, Wu bites off more than he can chew. First, Wu does not recognize that the First Amendment is used “opportunistically” only because the New Deal revolution and subsequent jurisprudence have foreclosed all other Constitutional avenues to challenge economic regulations. Second, his positive formulation for differentiating protected speech from non-speech will lead to results counter to his stated preferences. Third, contra both conservatives like Bork and liberals like Wu, the Constitution’s protections can and should be adapted to new technologies, consistent with the original meaning.

Wu’s Irrational Lochner-Baiting

Wu makes the case that the First Amendment has been interpreted to protect things that aren’t really within the First Amendment’s purview. He starts his New Republic essay with Sorrell v. IMS (cf. TechFreedom’s Amicus Brief), describing the data mining process as something undeserving of any judicial protection. He deems the application of the First Amendment to economic regulation a revival of Lochner, evincing a misunderstanding of the case that appeals to undefended academic prejudice and popular ignorance. This is important because the economic liberty which was long protected by the Constitution, either as a matter of federalism or substantive rights, no longer has any protection from government power aside from the First Amendment jurisprudence Wu decries.

Lochner v. New York is a 1905 Supreme Court case that has received more scorn, left and right, than just about any case that isn’t dealing with slavery or segregation. This has led to the phenomenon my former Constitutional Law professor David Bernstein calls “Lochner-baiting,” where a commentator describes any Supreme Court decision with which he or she disagrees as Lochnerism. Wu does this throughout his New Republic piece, somehow seeing parallels between application of the First Amendment to the Internet and a Liberty of Contract case under substantive Due Process.

The idea that economic regulation should receive little judicial scrutiny is not new. In fact, it has been the operating law since at least the famous Carolene Products footnote four. However, the idea that only discrete and insular minorities should receive First Amendment protection is a novel application of law. Wu implicitly argues exactly this when he says “corporations are not the Jehovah’s Witnesses, unpopular outsiders needing a safeguard that legislators and law enforcement could not be moved to provide.” On the contrary, the application of First Amendment protections to Jehovah’s Witnesses and student protesters is part and parcel of the application of the First Amendment to advertising and data that drives the Internet. Just because Wu does not believe businesspersons need the Constitution’s protections does not mean they do not apply.

Finally, while Wu may be correct that the First Amendment should not apply to everything for which it is being asserted today, he does not seem to recognize why there is “First Amendment opportunism.” In theory, those trying to limit the power of government over economic regulation could use any number of provisions in the text of the Constitution: enumerated powers of Congress and the Tenth Amendment, the Ninth Amendment, the Contracts Clause, the Privileges or Immunities Clause of the Fourteenth Amendment, the Due Process Clause of the Fifth and Fourteenth Amendments, the Equal Protection Clause, etc. For much of the Constitution’s history, the combination of these clauses generally restricted the growth of government over economic affairs. Lochner was just one example of courts generally putting the burden on governments to show the restrictions placed upon economic liberty are outweighed by public interest considerations.

The Lochner court actually protected a small bakery run by immigrants from special interest legislation aimed at putting them out of business on behalf of bigger, established competitors. Shifting this burden away from government and towards the individual is not clearly the good thing Wu assumes. Applying the same Liberty of Contract doctrine, the Supreme Court struck down legislation enforcing housing segregation in Buchanan v. Warley and legislation outlawing the teaching of the German language in Meyer v. Nebraska. After the New Deal revolution, courts chose to apply only rational basis review to economic regulation, and would need to find a new way to protect fundamental rights that were once classified as economic in nature. The burden shifted to individuals to prove an economic regulation is not loosely related to any conceivable legitimate governmental purpose.

Now, the only Constitutional avenue left for a winnable challenge of economic regulation is the First Amendment. Under the rational basis test, the Tenth Circuit in Powers v. Harris actually found that protecting businesses from competition is a legitimate state interest. This is why the cat owner Wu references in his essay and describes in more detail in his law review article brought a First Amendment claim against a regime requiring licensing of his talking cat show: there is basically no other Constitutional protection against burdensome economic regulation.

The More You Edit, the More You’re Protected?

In his law review piece, Machine Speech, Wu explains that the First Amendment has a functionality requirement. He points out that the First Amendment has never been interpreted to mean, and should not mean, that all communication is protected. Wu believes the dividing lines between protected and unprotected speech should be whether the communicator is a person attempting to communicate a specific message in a non-mechanical way to another, and whether the communication at issue is more speech than conduct. The first test excludes carriers and conduits that handle or process information but have an ultimately functional relationship with it, like Federal Express or a telephone company. The second excludes tools, those works that are purely functional like navigational charts, court filings, or contracts.

Of course, Wu admits the actual application of his test online can be difficult. In his law review article he deals with some easy cases, like the obvious application of the First Amendment to blog posts, tweets, and video games, and non-application to Google Maps. Of course, harder cases are the main target of his article: search engines, automated concierges, and other algorithm-based services. At the very end of his law review article, Wu finally states how to differentiate between protected speech and non-speech in such cases:

The rule of thumb is this: the more the concierge merely tells the user about himself, the more like a tool and less like protected speech the program is. The more the programmer puts in place his opinion, and tries to influence the user, the more likely there will be First Amendment coverage. These are the kinds of considerations that ultimately should drive every algorithmic output case that courts could encounter.

Unfortunately for Wu, this test would lead to results counterproductive to his goals.

Applying this rationale to Google, for instance, would lead to the perverse conclusion that the more the allegations against the company about tinkering with its algorithm to disadvantage competitors are true, the more likely Google would receive First Amendment protection. And if Net Neutrality advocates are right that ISPs are restricting consumer access to content, then the analogy to the newspaper in Tornillo becomes a good one: ISPs have a right to exercise editorial discretion, and mandating speech would be unconstitutional. The application of Wu’s test to search engines and ISPs effectively puts them in a “use it or lose it” position with their First Amendment rights that courts have rejected. The idea that antitrust and FCC regulations can apply without First Amendment scrutiny only if search engines and ISPs are not doing anything requiring antitrust or FCC scrutiny is counterproductive to sound public policy, and presumably to the regulatory goals Wu holds.

First Amendment Dynamism

The application of the First Amendment to the Internet Age does not involve large leaps of logic from current jurisprudence. As Stuart Minor Benjamin shows in his article in the same issue of the U Penn Law Review, the bigger leap would be to follow Wu’s recommendations. We do not need a 21st Century First Amendment that some on the left have called for—the original one will do just fine.

This is because the Constitution’s protections can be dynamically applied, consistent with original meaning. Wu’s complaint is that he does not like how the First Amendment has evolved. Even his points that have merit, though, seem to indicate a stasis mentality. In her book, The Future and Its Enemies, Virginia Postrel described this mentality as a preference for a “controlled, uniform society that changes only with permission from some central authority.” But the First Amendment’s text is not a grant of power to the central authority to control or permit anything. It actually restricts government from intervening into the open-ended society where creativity and enterprise, operating under predictable rules, generate progress in unpredictable ways.

The application of current First Amendment jurisprudence to search engines, ISPs, and data mining will not necessarily create a world where machines have rights. Wu is right that the line must be drawn somewhere, but his technocratic attempt to empower government officials to control innovation is short-sighted. Ultimately, the First Amendment is as much about protecting the individuals who innovate and create online as those in the offline world. Such protection embraces the future instead of fearing it.

[Cross posted at the Center for the Protection of Intellectual Property blog.]

Today’s public policy debates frame copyright policy solely in terms of a “trade off” between the benefits of incentivizing new works and the social deadweight losses imposed by the access restrictions imposed by these (temporary) “monopolies.” I recently posted to SSRN a new research paper, called How Copyright Drives Innovation in Scholarly Publishing, explaining that this is a fundamental mistake that has distorted the policy debates about scholarly publishing.

This policy mistake is important because it has led commentators and decision-makers to dismiss as irrelevant to copyright policy the investments by scholarly publishers of $100s of millions in creating innovative distribution mechanisms in our new digital world. These substantial sunk costs are in addition to the $100s of millions expended annually by publishers in creating, publishing and maintaining reliable, high-quality, standardized articles distributed each year in a wide-ranging variety of academic disciplines and fields of research. The articles now number in the millions themselves; in 2009, for instance, over 2,000 publishers issued almost 1.5 million articles just in the scientific, technical and medical fields, exclusive of the humanities and social sciences.

The mistaken incentive-to-create conventional wisdom in copyright policy is further compounded by widespread misinformation today about the allegedly “zero cost” of digital publication. As a result, many people are simply unaware of the substantial investments in infrastructure, skilled labor and other resources required to create, publish and maintain scholarly articles on the Internet and in other digital platforms.

This is not merely a so-called “academic debate” about copyright policy and publishing.

The policy distortion caused by the narrow, reductionist incentive-to-create conventional wisdom, when combined with the misinformation about the economics of digital business models, has been spurring calls for “open access” mandates for scholarly research, such as at the National Institutes of Health and in recently proposed legislation (FASTR Act) and in other proposed regulations. This policy distortion even influenced Justice Breyer’s opinion in the recent decision in Kirtsaeng v. John Wiley & Sons (U.S. Supreme Court, March 19, 2013), as he blithely dismissed commercial incentives as being irrelevant to fundamental copyright policy. These legal initiatives and the Kirtsaeng decision are motivated in various ways by the incentive-to-create conventional wisdom, by the misunderstanding of the economics of scholarly publishing, and by anti-copyright rhetoric on both the left and right, all of which has become more pervasive in recent years.

But, as I explain in my paper, courts and commentators have long recognized that incentivizing authors to produce new works is not the sole justification for copyright—copyright also incentivizes intermediaries like scholarly publishers to invest in and create innovative legal and market mechanisms for publishing and distributing articles that report on scholarly research. These two policies—the incentive to create and the incentive to commercialize—are interrelated, as both are necessary in justifying how copyright law secures the dynamic innovation that makes possible the “progress of science.” In short, if the law does not secure the fruits of labors of publishers who create legal and market mechanisms for disseminating works, then authors’ labors will go unrewarded as well.

As Justice Sandra Day O’Connor famously observed in the 1984 decision in Harper & Row v. Nation Enterprises: “In our haste to disseminate news, it should not be forgotten that the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” Thus, in Harper & Row, the Supreme Court reached the uncontroversial conclusion that copyright secures the fruits of productive labors “where an author and publisher have invested extensive resources in creating an original work.” (emphases added)

This concern with commercial incentives in copyright law is not just theory; in fact, it is most salient in scholarly publishing because researchers are not motivated by the pecuniary benefits offered to authors in conventional publishing contexts. As a result of the policy distortion caused by the incentive-to-create conventional wisdom, some academics and scholars now view scholarly publishing by commercial firms who own the copyrights in the articles as “a form of censorship.” Yet, as courts have observed: “It is not surprising that [scholarly] authors favor liberal photocopying . . . . But the authors have not risked their capital to achieve dissemination. The publishers have.” As economics professor Mark McCabe observed (somewhat sardonically) in a research paper released last year for the National Academy of Sciences: he and his fellow academic “economists knew the value of their journals, but not their prices.”

The widespread ignorance among the public, academics and commentators about the economics of scholarly publishing in the Internet age is quite profound relative to the actual numbers.  Based on interviews with six different scholarly publishers—Reed Elsevier, Wiley, SAGE, the New England Journal of Medicine, the American Chemical Society, and the American Institute of Physics—my research paper details for the first time ever in a publication and at great length the necessary transaction costs incurred by any successful publishing enterprise in the Internet age.  To take but one small example from my research paper: Reed Elsevier began developing its online publishing platform in 1995, a scant two years after the advent of the World Wide Web, and its sunk costs in creating this first publishing platform and then digitally archiving its previously published content was over $75 million. Other scholarly publishers report similarly high costs in both absolute and relative terms.

Given the widespread misunderstandings of the economics of Internet-based business models, it bears noting that such high costs are not unique to scholarly publishers.  Microsoft reportedly spent $10 billion developing Windows Vista before it sold a single copy, of which it ultimately did not sell many at all. Google regularly invests $100s of millions, such as $890 million in the first quarter of 2011, in upgrading its data centers.  It is somewhat surprising that such things still have to be pointed out a scant decade after the bursting of the dot.com bubble, a bubble precipitated by exactly the same mistaken view that businesses have somehow been “liberated” from the economic realities of cost by the Internet.

Just as with the extensive infrastructure and staffing costs, the actual costs incurred by publishers in operating the peer review system for their scholarly journals are also widely misunderstood.  Individual publishers now receive hundreds of thousands—the large scholarly publisher, Reed Elsevier, receives more than one million—manuscripts per year. Reed Elsevier’s annual budget for operating its peer review system is over $100 million, which reflects the full scope of staffing, infrastructure, and other transaction costs inherent in operating a quality-control system that rejects 65% of the submitted manuscripts. Reed Elsevier’s budget for its peer review system is consistent with industry-wide studies that have reported that the peer review system costs approximately $2.9 billion annually in operation costs (translating into dollars the £1.9 billion reported in the study). For those articles accepted for publication, there are additional, extensive production costs, and then there are extensive post-publication costs in updating hypertext links of citations, cyber security of the websites, and related digital issues.

In sum, many people mistakenly believe that scholarly publishers are no longer necessary because the Internet has made moot all such intermediaries of traditional brick-and-mortar economies—a viewpoint reinforced by the equally mistaken incentive-to-create conventional wisdom in the copyright policy debates today. But intermediaries like scholarly publishers face the exact same incentive problem that is universally recognized for authors by the incentive-to-create conventional wisdom: no one will make the necessary investments to create a work or to distribute it if the fruits of their labors are not secured to them. This basic economic fact—that dynamic development of innovative distribution mechanisms requires substantial investment in both people and resources—is what makes commercialization an essential feature of both copyright policy and law (and of all intellectual property doctrines).

It is for this reason that copyright law has long promoted and secured the value that academics and scholars have come to depend on in their journal articles—reliable, high-quality, standardized, networked, and accessible research that meets the differing expectations of readers in a variety of fields of scholarly research. This is the value created by the scholarly publishers. Scholarly publishers thus serve an essential function in copyright law by making the investments in and creating the innovative distribution mechanisms that fulfill the constitutional goal of copyright to advance the “progress of science.”

DISCLOSURE: The paper summarized in this blog posting was supported separately by a Leonardo Da Vinci Fellowship and by the Association of American Publishers (AAP). The author thanks Mark Schultz for very helpful comments on earlier drafts, and the AAP for providing invaluable introductions to the five scholarly publishers who shared their publishing data with him.

NOTE: Some small copy-edits were made to this blog posting.

 

Meese on Bork (and the AALS)

Thom Lambert —  22 December 2012

William & Mary’s Alan Meese has posted a terrific tribute to Robert Bork, who passed away this week.  Most of the major obituaries, Alan observes, have largely ignored the key role
Bork played in rationalizing antitrust, a body of law that veered sharply off course in the middle of the last century.  Indeed, Bork began his 1978 book, The Antitrust Paradox, by comparing the then-prevailing antitrust regime to the sheriff of a frontier town:  “He did not sift the evidence, distinguish between suspects, and solve crimes, but merely walked the main street and every so often pistol-whipped a few people.”  Bork went on to explain how antitrust, if focused on consumer welfare (which equated with allocative efficiency), could be reconceived in a coherent fashion.

It is difficult to overstate the significance of Bork’s book and his earlier writings on which it was based.  Chastened by Bork’s observations, the Supreme Court began correcting its antitrust mistakes in the mid-1970s.  The trend began with the 1977 Sylvania decision, which overruled a precedent making it per se illegal for manufacturers to restrict the territories in which their dealers could operate.  (Manufacturers seeking to enhance sales of their brand may wish to give dealers exclusive sales territories to protect them against “free-riding” on their demand-enhancing customer services; pre-Sylvania precedent made it hard for manufacturers to do this.)  Sylvania was followed by:

  • Professional Engineers (1978), which helpfully clarified that antitrust’s theretofore unwieldy “Rule of Reason” must be focused exclusively on competition;
  • Broadcast Music, Inc. (1979), which held that competitors’ price-fixing arrangements that reduce costs and enhance output may be legal;
  • NCAA (1984), which recognized that trade restraints among competitors may be necessary to create new products and services and thereby made it easier for competitors to enter into output-enhancing joint ventures;
  • Khan (1997), which abolished the ludicrous per se rule against maximum resale price maintenance;
  • Trinko (2004), which recognized that some monopoly pricing may aid consumers in the long run (by enhancing the incentive to innovate) and narrowly circumscribed the situations in which a firm has a duty to assist its rivals; and
  • Leegin (2007), which overruled a 96-year-old precedent declaring minimum resale price maintenance–a practice with numerous potential procompetitive benefits–to be per se illegal.

Bork’s fingerprints are all over these decisions.  Alan’s terrific post discusses several of them and provides further detail on Bork’s influence.

And while you’re checking out Alan’s Bork tribute, take a look at his recent post discussing my musings on the AALS hiring cartel.  Alan observes that AALS’s collusive tendencies reach beyond the lateral hiring context.  Who’d have guessed?

Available here.  Although not the first article to build on Orin Kerr’s brilliant paper, A Theory of Law (blog post here) (that honor belongs to Josh Blackman’s challenging and thought-provoking paper, My Own Theory of the Law) (blog post here), I think this is an important contribution to this burgeoning field.  It’s still a working paper, though, so comments are welcome.

In a response to my essay, The Trespass Fallacy in Patent Law, in which I explain why patent scholars like Michael Meurer, James Bessen, T.J. Chiang and others are committing the nirvana fallacy in their critiques of the patent system, my colleague T.J. Chiang writes at PrawfsBlawg:

The Nirvana fallacy, at least as I understand it, is to compare an imperfect existing arrangement (such as the existing patent system) to a hypothetical idealized system. But the people comparing the patent system to real property—and I count myself among them—are not comparing it to an idealized fictional system, whether conceptualized as land boundaries or as estate boundaries. We are saying that, based on our everyday experiences, the real property system seems to work reasonably well because we don’t feel too uncertain about our real property rights and don’t get into too many disputes with our neighbors. This is admittedly a loose intuition, but it is not an idealization in the sense of using a fictional baseline. It is the same as saying that the patent system seems to work reasonably well because we see a lot of new technology in our everyday experience.

I would like to make two quick points in response to T.J.’s attempt at wiggling out from serving as one of the examples I identify in my essay as a patent scholar who uses trespass doctrine in a way that reflects the nirvana fallacy.

First, what T.J. describes as what he is doing — comparing an actual institutional system to a “loose intuition” about another institutional system — is exactly what Harold Demsetz identified as the nirvana fallacy (when he roughly coined the term in 1969).  When economists or legal scholars commit the nirvana fallacy, they always justify their idealized counterfactual standard by appeal to some intuition or gestalt sense of the world; in fact, Demsetz’s example of the nirvana fallacy is when economists have a loose intuition that regulation always works perfectly to fix market failures.  These economists do this for the simple reason that they’re social scientists, and so they have to make their critiques seem practical.

It’s like the infamous statement by Pauline Kael in 1972 (quoting from memory): “I can’t believe Nixon won, because I don’t know anyone who voted for him.” Similarly, what patent scholars like T.J. are doing is saying: “I can’t believe that trespass isn’t clear and efficient, because I don’t know anyone who has been involved in a trespass lawsuit or I don’t hear of any serious trespass lawsuits.”  Economists or legal scholars always have some anecdotal evidence — either personal experiences or merely an impressionistic intuition about other people — to offer as support for their counterfactual by which they’re evaluating (and criticizing) the actual facts of the world. The question is whether such an idealized counterfactual is a valid empirical metric or not; of course, it is not.  To do this is exactly what Demsetz criticized as the nirvana fallacy.

Ultimately, no social scientist or legal scholar ever commits the “nirvana fallacy” as T.J. has defined it in his blog posting, and this leads to my second point.  The best way to test T.J.’s definition is to ask: Does anyone know a single lawyer, legal scholar or economist who has committed the “nirvana fallacy” as defined by T.J.?  What economist or lawyer appeals to a completely imaginary “fictional baseline” as the standard for evaluating a real-world institution?

The answer to this question is obvious.  In fact, when I posited this exact question to T.J. in an exchange we had before he made his blog posting, he could not answer it.  The reason why he couldn’t answer it is because no one says in legal scholarship or in economic scholarship: “I have a completely made-up, imaginary ‘fictionalized’ world which I’m going to compare to a real-world institution or legal doctrine.”  This certainly is not the meaning of the nirvana fallacy, and I’m fairly sure Demsetz would be surprised to learn that he identified a fallacy that according to T.J. has never been committed by a single economist or legal scholar. Ever.

In sum, what T.J. describes in his blog posting — using a “loose intuition” of an institution as an empirical standard for critiquing the operation of another institution — is the nirvana fallacy. Philosophers may posit completely imaginary and fictionalized baselines — it’s what they call “other worlds” — but that is not what social scientists and legal scholars do.  Demsetz was not talking about philosophers when he identified the nirvana fallacy.  Rather, he was talking about exactly what T.J. admits he does in his blog posting (and which he has done in his scholarship).

My former student and recent George Mason Law graduate (and co-author, here) Angela Diveley has posted Clarifying State Action Immunity Under the Antitrust Laws: FTC v. Phoebe Putney Health System, Inc.  It is a look at the state action doctrine and the Supreme Court’s next chance to grapple with it in Phoebe Putney.  Here is the abstract:

The tension between federalism and national competition policy has come to a head. The state action doctrine finds its basis in principles of federalism, permitting states to replace free competition with alternative regulatory regimes they believe better serve the public interest. Public restraints have a unique ability to undermine the regime of free competition that provides the basis of U.S.- and state-commerce policies. Nevertheless, preservation of federalism remains an important rationale for protecting such restraints. The doctrine has elusive contours, however, which have given rise to circuit splits and overbroad application that threatens to subvert the state action doctrine’s dual goals of federalism and competition. The recent Eleventh Circuit decision in FTC v. Phoebe Putney Health System, Inc. epitomizes the concerns associated with misapplication of state action immunity. The U.S. Supreme Court recently granted the FTC’s petition for certiorari and now has the opportunity to more clearly define the contours of the doctrine. In Phoebe Putney, the FTC has challenged a merger it claims is the product of a sham transaction, an allegation certain to test the boundaries of the state action doctrine and implicate the interpretation of a two-pronged test designed to determine whether consumer welfare-reducing conduct taken pursuant to purported state authorization is immune from antitrust challenge. The FTC’s petition for writ of certiorari raises two issues for review. First, it presents the question concerning the appropriate interpretation of foreseeability of anticompetitive conduct. Second, the FTC presents the question whether a passive supervisory role on the state’s part can be construed as state action or whether its approval of the merger was a sham. In this paper, I seek to explicate the areas in which the state action doctrine needs clarification and to predict how the Court will decide the case in light of precedent and the principles underlying the doctrine.

Go read the whole thing.

HT: Danny Sokol.

TOP 10 Papers for Journal of Antitrust: Antitrust Law & Policy eJournal June 4, 2012 to August 3, 2012.

1. The Antitrust/Consumer Protection Paradox: Two Policies at War with Each Other (244 downloads)
   Joshua D. Wright, George Mason University – School of Law
   Posted: May 31, 2012; last revised: May 31, 2012

2. Cartels, Corporate Compliance and What Practitioners Really Think About Enforcement (237 downloads)
   D. Daniel Sokol, University of Florida – Levin College of Law
   Posted: June 7, 2012; last revised: July 16, 2012

3. The Implications of Behavioral Antitrust (175 downloads)
   Maurice E. Stucke, University of Tennessee College of Law
   Posted: July 17, 2012; last revised: July 17, 2012

4. The Oral Hearing in Competition Proceedings Before the European Commission (167 downloads)
   Wouter P. J. Wils, European Commission; University of London – School of Law
   Posted: May 3, 2012; last revised: June 18, 2012

5. Citizen Petitions: An Empirical Study (141 downloads)
   Michael A. Carrier, Rutgers University School of Law – Camden, and Daryl Wander (affiliation not provided to SSRN)
   Posted: June 4, 2012; last revised: June 4, 2012

6. The Role of the Hearing Officer in Competition Proceedings Before the European Commission (138 downloads)
   Wouter P. J. Wils, European Commission; University of London – School of Law
   Posted: May 3, 2012; last revised: May 7, 2012

7. Google, in the Aftermath of Microsoft and Intel: The Right Approach to Antitrust Enforcement in Innovative High Tech Platform Markets? (90 downloads)
   Fernando Diez, University of Antonio de Nebrija
   Posted: June 12, 2012; last revised: June 26, 2012

8. Dynamic Analysis and the Limits of Antitrust Institutions (140 downloads)
   Douglas H. Ginsburg, U.S. Court of Appeals for the District of Columbia, and Joshua D. Wright, George Mason University – School of Law
   Posted: June 14, 2012; last revised: June 17, 2012

9. Optimal Antitrust Remedies: A Synthesis (114 downloads)
   William H. Page, University of Florida – Fredric G. Levin College of Law
   Posted: May 17, 2012; last revised: July 29, 2012

10. An Economic Analysis of the AT&T-T-Mobile USA Wireless Merger (111 downloads)
    Stanley M. Besen, Stephen Kletter, Serge Moresi, and John Woodbury, Charles River Associates (CRA), and Steven C. Salop, Georgetown University Law Center
    Posted: April 25, 2012; last revised: April 25, 2012

An interesting new joint venture between Oxford University Press, Ariel Ezrachi, and Bill Kovacic (GW).  It sounds like a fantastic idea with top-notch management, and it might be of interest to many of our readers.

The Journal of Antitrust Enforcement 

Call for Papers – The Journal of Antitrust Enforcement (OUP)

Oxford University Press is delighted to announce the launch of a new competition law journal dedicated to antitrust enforcement. The Journal of Antitrust Enforcement forms a joint collaboration between OUP, the Oxford University Centre for Competition Law and Policy and the George Washington University Competition Law Center.

The Journal of Antitrust Enforcement will provide a platform for cutting edge scholarship relating to public and private competition law enforcement, both at the international and domestic levels.

The journal covers a wide range of enforcement related topics, including: public and private competition law enforcement, cooperation between competition agencies, the promotion of worldwide competition law enforcement, optimal design of enforcement policies, performance measurement, empirical analysis of enforcement policies, combination of functions in the mandate of the competition agency, competition agency governance, procedural fairness, competition enforcement and human rights, the role of the judiciary in competition enforcement, leniency, cartel prosecution, effective merger enforcement and the regulation of sectors.

Submission of papers: Original articles that advance the field are published following a peer and editorial review process. The editors welcome submission of papers on all subjects related to antitrust enforcement. Papers should range from 8,000 to 15,000 words (including footnotes) and should be prefaced by an abstract of less than 200 words.

General inquiries may be directed to the editors: Ariel Ezrachi at the Oxford CCLP or William Kovacic at George Washington University. Submission, by email, should be directed to the Managing Editor, Hugh Hollman.

Further information about the journal may be found online: http://www.oxfordjournals.org/our_journals/antitrust/

I am a co-editor, along with my colleagues Todd Zywicki and Ilya Somin, of the Supreme Court Economic Review, a peer-reviewed publication that is one of the country’s top-rated law and economics journals.  SCER and its publisher, the University of Chicago Press, have put together a new submissions website.  If you have a relevant submission, please submit it at the website for our review.