Archives For technology

Commissioner Wright makes a powerful and important case in dissenting from the FTC’s 2-1 decision (Commissioner Ohlhausen was recused from the matter) imposing conditions on Nielsen’s acquisition of Arbitron.

Essential to Josh’s dissent is the absence of any actual existing market supporting the Commission’s challenge:

Nielsen and Arbitron do not currently compete in the sale of national syndicated cross-platform audience measurement services. In fact, there is no commercially available national syndicated cross-platform audience measurement service today. The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

* * *

[W]e…do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

* * *

To be clear, I do not base my disagreement with the Commission today on the possibility that the potential efficiencies arising from the transaction would offset any anticompetitive effect. As discussed above, I find no reason to believe the transaction is likely to substantially lessen competition because the evidence does not support the conclusion that it is likely to generate anticompetitive effects in the alleged relevant market.

This is the kind of theory that seriously threatens innovation. Regulators in Washington are singularly ill-positioned to predict the course of technological evolution — that’s why they’re regulators and not billionaire innovators. To impose antitrust-based constraints on economic activity that hasn’t even yet occurred is the height of folly. As Virginia Postrel discusses in The Future and Its Enemies, this is the technocratic mindset, in all its stasist glory:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation.

* * *

By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

* * *

For technocrats, a kaleidoscope of trial-and-error innovation is not enough; decentralized experiments lack coherence. “Today, we have an opportunity to shape technology,” wrote [Newt] Gingrich in classic technocratic style. His message was that computer technology is too important to be left to hackers, hobbyists, entrepreneurs, venture capitalists, and computer buyers. “We” must shape it into a “coherent picture.” That is the technocratic notion of progress: Decide on the one best way, make a plan, and stick to it.

It should go without saying that this is the antithesis of the environment most conducive to economic advance. Whatever antitrust’s role in regulating technology markets, it must be evidence-based, grounded in economics and aware of its own limitations.

As Josh notes:

A future market case, such as the one alleged by the Commission today, presents a number of unique challenges not confronted in a typical merger review or even in “actual potential competition” cases. For instance, it is inherently more difficult in future market cases to define properly the relevant product market, to identify likely buyers and sellers, to estimate cross-elasticities of demand or understand on a more qualitative level potential product substitutability, and to ascertain the set of potential entrants and their likely incentives. Although all merger review necessarily is forward looking, it is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer these basic questions upon which proper merger analysis is based.

* * *

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Josh’s dissent also contains an important, related criticism of the FTC’s problematic reliance on consent agreements. It’s so good, in fact, I will quote it almost in its entirety:

Whether parties to a transaction are willing to enter into a consent agreement will often have little to do with whether the agreed upon remedy actually promotes consumer welfare. The Commission’s ability to obtain concessions instead reflects the weighing by the parties of the private costs and private benefits of delaying the transaction and potentially litigating the merger against the private costs and private benefits of acquiescing to the proposed terms. Indeed, one can imagine that where, as here, the alleged relevant product market is small relative to the overall deal size, the parties would be happy to agree to concessions that cost very little and finally permit the deal to close. Put simply, where there is no reason to believe a transaction violates the antitrust laws, a sincerely held view that a consent decree will improve upon the post-merger competitive outcome or have other beneficial effects does not justify imposing those conditions. Instead, entering into such agreements subtly, and in my view harmfully, shifts the Commission’s mission from that of antitrust enforcer to a much broader mandate of “fixing” a variety of perceived economic welfare-reducing arrangements.

Consents can and do play an important and productive role in the Commission’s competition enforcement mission. Consents can efficiently address competitive concerns arising from a merger by allowing the Commission to reach a resolution more quickly and at less expense than would be possible through litigation. However, consents potentially also can have a detrimental impact upon consumers. The Commission’s consents serve as important guidance and inform practitioners and the business community about how the agency is likely to view and remedy certain mergers. Where the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare. Because there is no judicial approval of Commission settlements, it is especially important that the Commission take care to ensure its consents are in the public interest.

The FTC’s tendency to, effectively, legislate by consent decree is of great importance, particularly in its Section 5 practice (as we discuss in our amicus brief in the Wyndham case).

As the FTC begins its 100th year next week, we need more voices like those of Commissioners Wright and Ohlhausen challenging the FTC’s harmful, technocratic mindset.

Susan Crawford recently received the OneCommunity Broadband Hero Award for being a “tireless advocate for 21st century high capacity network access.” In her recent debate with Geoffrey Manne and Berin Szoka, she emphasized that there is little competition in broadband or between cable broadband and wireless, asserting that the main players have effectively divided the markets. As a result, she argues (as she did here at 17:29) that broadband and wireless providers “are deciding not to invest in the very expensive infrastructure because they are very happy with the profits they are getting now.” In the debate, Manne countered by pointing to substantial investment and innovation in both the wired and wireless broadband marketplaces, and arguing that this is not something monopolists insulated from competition do. So, who’s right?

The recently released 2013 Progressive Policy Institute Report, U.S. Investment Heroes of 2013: The Companies Betting on America’s Future, has two useful little tables that lend support to Manne’s counterargument.


The first shows the top 25 investors that are nonfinancial companies, and guess who comes in 1st, 2nd, 10th, 13th, and 17th place? None other than AT&T, Verizon Communications, Comcast, Sprint Nextel, and Time Warner, respectively.


And when the table is adjusted by removing energy companies, the ranks become 1st, 2nd, 5th, 6th, and 9th. In fact, cable and telecom combined to invest over $50.5 billion in 2012.

This high level of investment by supposed monopolists is not a new development. The Progressive Policy Institute’s 2012 Report, Investment Heroes: Who’s Betting on America’s Future? indicates that the same main players have been investing heavily for years. Since 1996, the cable industry has invested over $200 billion into infrastructure alone. These investments have allowed 99.5% of Americans to have access to broadband – via landline, wireless, or both – as of the end of 2012.

There’s more. Not only has there been substantial investment that has increased access, but the speeds of service have increased dramatically over the past few years. The National Broadband Map data show that by the end of 2012:

  • Landline service ≥ 25 megabits per second download was available to 81.7% of households, up from 72.9% at the end of 2011 and 58.4% at the end of 2010
  • Landline service ≥ 100 megabits per second download was available to 51.5% of households, up from 43.4% at the end of 2011 and only 12.9% at the end of 2010
  • Service ≥ 1 gigabit per second download was available to 6.8% of households, predominantly via fiber
  • Fiber at any speed was available to 22.9% of households, up from 16.8% at the end of 2011 and 14.8% at the end of 2010
  • Landline broadband service at the 3 megabits / 768 kilobits threshold was available to 93.4% of households, up from 92.8% at the end of 2011
  • Mobile wireless broadband at the 3 megabits / 768 kilobits threshold was available to 94.1% of households, up from 75.8% at the end of 2011
  • Mobile wireless broadband ≥ 10 megabits per second download was available to 87% of households, up from 70.6% at the end of 2011 and 8.9% at the end of 2010
  • Landline broadband ≥ 10 megabits per second download was available to 91.1% of households

This leaves only one question: Will the real broadband heroes please stand up?

On Debating Imaginary Felds

Gus Hurwitz —  18 September 2013

Harold Feld, in response to a recent Washington Post interview with AEI’s Jeff Eisenach about AEI’s new Center for Internet, Communications, and Technology Policy, accused “neo-conservative economists (or, as [Feld] might generalize, the ‘Right’)” of having “stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.”

(Full disclosure: The Center for Internet, Communications, and Technology Policy includes TechPolicyDaily.com, to which I am a contributor.)

Perhaps to the surprise of many, I’m going to agree with Feld. But in so doing, I’m going to expand upon his point: The problem with anti-economics social activists (or, as we might generalize, the ‘Left’)[*] is that they have stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.

I don’t mean this to be snarky. Rather, it is a very real problem throughout modern political discourse, and one that we participants in telecom and media debates frequently contribute to. One of the reasons that I love – and sometimes hate – researching and teaching in this area is that fundamental tensions between government and market regulation lie at its core. These tensions present challenging and engaging questions, making work in this field exciting, but are sometimes intractable and often evoke passion instead of analysis, making work in this field seem Sisyphean.

One of these tensions is how to secure for consumers those things which the market does not (appear to) do a good job of providing. For instance, those of us on both the left and right are almost universally agreed that universal service is a desirable goal. The question – for both sides – is how to provide it. Feld reminds us that “real world economics is painfully complicated.” I would respond to him that “real world regulation is painfully complicated.”

I would point at Feld, while jumping up and down shouting “J’accuse! Nirvana Fallacy!” – but I’m certain that Feld is aware of this fallacy, just as I hope he’s aware that those of us who have spent much of our lives studying economics are bitterly aware that economics and markets are complicated things. Indeed, I think those of us who study economics are even more aware of this than is Feld – it is, after all, one of our mantras that “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” This mantra is particularly apt in telecommunications, where one of the most consistent and important lessons of the past century has been that the market tends to outperform regulation.

This isn’t because the market is perfect; it’s because regulation is less perfect. Geoff recently posted a salient excerpt from Tom Hazlett’s 1997 Reason interview of Ronald Coase, in which Coase recounted that “When I was editor of The Journal of Law and Economics, we published a whole series of studies of regulation and its effects. Almost all the studies – perhaps all the studies – suggested that the results of regulation had been bad, that the prices were higher, that the product was worse adapted to the needs of consumers, than it otherwise would have been.”

I don’t want to get into a tit-for-tat over individual points that Feld makes. But I will look at one as an example: his citation to The Market for Lemons. This is a classic paper, in which Akerlof shows that information asymmetries can cause rational markets to unravel. But does it, as Feld says, show “market failure in the presence of robust competition?” That is a hotly debated point in the economics literature. One view – the dominant view, I believe – is that it does not. See, e.g., the EconLib discussion (“Akerlof did not conclude that the lemon problem necessarily implies a role for government”). Rather, the market has responded through the formation of firms that service and certify used cars, document car maintenance, repairs and accidents, warranty cars, and suffer reputational harms for selling lemons. Of course, folks argue, and have long argued, both sides. As Feld says, economics is painfully complicated – it’s a shame he draws a simple and reductionist conclusion from one of the seminal articles in modern economics, and a further shame he uses that conclusion to buttress his policy position. J’accuse!
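For readers unfamiliar with the mechanics of Akerlof’s argument, here is a minimal numeric sketch of the unraveling logic, using made-up parameters (quality uniform on [0,1], buyers valuing a car at 1.5 times its quality); it is an illustration of the textbook model, not anything from Feld’s piece or Akerlof’s paper itself:

```python
# Sketch of Akerlof's lemons unraveling (hypothetical parameters).
# Seller quality q is uniform on [0,1]; a seller values her car at q,
# buyers value it at 1.5 * q but cannot observe q before buying.
# At any price p, only sellers with q <= p offer their cars, so the
# average quality offered is p/2 -- and buyers respond by lowering
# the price they will pay, driving quality (and trade) toward zero.

def equilibrium_price(buyer_premium=1.5, steps=1000):
    """Iterate the price buyers will pay given the average quality
    of cars actually offered at that price."""
    p = 1.0  # start with buyers optimistically offering top value
    for _ in range(steps):
        avg_quality = p / 2              # mean quality of cars offered at p
        p = buyer_premium * avg_quality  # buyers pay expected value to them
    return p

print(equilibrium_price())  # shrinks toward zero: the market unravels
```

The takeaway the post gestures at is that this collapse follows only if no certification, warranty, or reputation mechanism lets buyers distinguish quality; the market institutions listed above break the asymmetry that drives the iteration downward.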

I hope that this is in no way taken as an attack on Feld – and I wish his piece was less of an attack on Jeff. Fundamentally, he raises a very important point, that there is a real disconnect between the arguments used by the “left” and “right” and how those arguments are understood by the other. Indeed, some of my current work is exploring this very disconnect and how it affects telecom debates. I’m really quite thankful to Feld for highlighting his concern that at least one side is blind to the views of the other – I hope that he’ll be receptive to the idea that his side is subject to the same criticism.

[*] I do want to respond specifically to what I think is an important confusion in Feld’s piece, which motivated my admittedly snarky labelling of the “left.” I think that he means “neoclassical economics,” not “neo-conservative economics” (which he goes on to dub “Neocon economics”). Neoconservativism is a political and intellectual movement, focused primarily on US foreign policy – it is rarely thought of as a particular branch of economics. To the extent that it does hold to a view of economics, it is actually somewhat skeptical of free markets, especially of their lack of moral grounding and their propensity to forgo traditional values in favor of short-run, hedonistic gains.

With thanks to Geoff and everyone else, it’s great to join the cast here at TOTM. Geoff gave a nice introduction, so I won’t use this first post to further that purpose – especially when I have substance to discuss. The only prefatory words I’ll offer are that my work lies at the intersection of law and technology, with a focus on telecommunications and the regulation of technology. Most of my posts here will likely relate to those subjects. But I may occasionally use this forum to write briefly on topics further afield of my research agenda (and to which I therefore cannot dedicate more than blog-post-length musings).

But one paragraph of navel-gazing is enough; on to substance:

The WSJ had a nice piece the other day about the Consumer Product Safety Commission’s (CPSC) ongoing persecution of Craig Zucker. Several years ago, Zucker founded a company that sold small, strong, rare-earth magnets that are a ton of fun to play with. He called them BuckyBalls. In 2011, the CPSC determined that BuckyBalls are inherently unsafe because children may swallow them, which can result in serious injury. The CPSC effectively forced the company to shut down in 2012. Unsatisfied with forcing a profitable small firm out of the market, the CPSC is now going after Zucker individually to, at his own expense, recall and refund the purchase price of all BuckyBalls the company sold.

(Full disclosure: I own a bunch of BuckyBalls. In fact, they’re all over my office. To date, they have not harmed anyone. The photo to the left is of the “BuckyBall decapode” that I have behind my chair. Note: the CPSC is not concerned about BuckyBall decapodes, which could pose a legitimate danger if they became sentient, but about the individual magnets.)

The CPSC’s action is a case study in bad judgment, arguably abusive and vindictive government conduct, and a basic lack of common sense. But I don’t want to focus on common sense here – I want to focus on the common law. My question is why in the world do we need the CPSC protecting consumers from these magnets when the common law clearly offers sufficient protection?

These cases almost always follow a similar pattern. Adults buy BuckyBalls. Adults either give children BuckyBalls or leave BuckyBalls where children can get them. Children, acting as children are wont to act, somehow swallow BuckyBalls.

The CPSC’s complaint identifies 5 specific cases of children ingesting BuckyBalls and notes that “over one dozen” reports have been received. The complaint doesn’t discuss in detail any injuries that resulted, beyond noting that in some cases surgery was required (and in one case, treatment included “monitoring for infection and internal damage”). It doesn’t say whether any of these cases resulted in permanent injury or disability (presumably not, or that would surely be mentioned). There have been no reported deaths nor, as far as I have seen, any debilitating injuries.

On the flipside, over the few years that Zucker was in business (roughly 2009, when the product became popular, through 2012, when the company closed down), he sold about $75 million worth of BuckyBalls (per the WSJ piece, “’Two and a half million adults spent $30’”). This product wasn’t a mere novelty, but something that created substantial economic value for consumers.

So, what do we have? A relatively small number of injuries, with very few disputable facts, and readily identifiable harm. These would be some of the easiest possible cases to bring to court, and would occur in small enough numbers that they wouldn’t burden the court system. After the first of these cases was decided, most of the others – given the similarity of facts – would likely settle. If the harms caused by BuckyBalls were sufficient to outweigh the economic value created by this product, Zucker could have responded by altering the product, seeking insurance, or shutting down. This is exactly the sort of case we have the courts for!

That penultimate sentence should be dwelt upon: the incremental approach of the common law would allow the firm to alter and improve its product, to avoid or reduce future harm. In this way, the law develops along with new products and technologies, supporting a dynamic market. Compare this to the CPSC approach, which was to demand that Zucker comply with the agency’s demands in a short period of time (which he did), and then, the very next day, to bring the administrative suit that forced Zucker to shut the company down. The CPSC could not have reviewed his response to its demands in that timeframe; even if it did and found the response lacking, its next step should have been to engage him to address any problems, with the twin objectives of remedying any problems and preserving the business. Rather, the CPSC’s purpose seems to have been from the outset to shut Zucker down. It seems that in its fervor to protect the children from negligent adults, it is willing to harm the consumers who enjoy these products — perhaps we should rechristen it the Children’s Product Safety Commission.

Others have written about the CPSC’s lack of common sense in this matter. My contribution to that discussion would be to say that the CPSC has become the FTC’s successor as the “National Nanny” (not to say the FTC does not deserve the title, as demonstrated by the POM Wonderful case – but today CPSC may be even more deserving of the title).

But the BuckyBalls case raises a more fundamental concern. The CPSC surely should be lambasted for its decision to pursue this matter at all; and even more for persecuting Mr. Zucker. But beyond that, this case raises fundamental questions about the need for, and the basic legitimacy of, the CPSC.

On July 24, the Federal Trade Commission issued a modified complaint and consent order in the Google/Motorola case. The FTC responded to the 25 comments on the proposed Order by making several amendments, but the Final Order retains the original order’s essential restrictions on injunctions, as the FTC explains in a letter accompanying the changes. With one important exception, the modifications were primarily minor changes to the required process by which Google/Motorola must negotiate and arbitrate with potential licensees. Although an improvement on the original order, the Complaint and Final Order’s continued focus on the use of injunctions to enforce SEPs presents a serious risk of consumer harm, as I discuss below.

The most significant modification in the new Complaint is the removal of the original UDAP claim. As suggested in my comments on the Order, there is no basis in law for such a claim against Google, and it’s a positive step that the FTC seems to have agreed. Instead, the FTC ended up resting its authority solely upon an Unfair Methods of Competition claim, even though the Commission failed to develop any evidence of harm to competition—as both Commissioner Wright and Commissioner Ohlhausen would (sensibly) require.

Unfortunately, the FTC’s letter offers no additional defense of its assertion of authority, stating only that

[t]he Commission disagrees with commenters who argue that the Commission’s actions in this case are outside of its authority to challenge unfair methods of competition under Section 5 and lack a limiting principle. As reflected in the Commission’s recent statements in Bosch and the Commission’s initial Statement in this matter, this action is well within our Section 5 authority, which both Congress and the Supreme Court have expressly deemed to extend beyond the Sherman Act.

Another problem, as noted by Commissioner Ohlhausen in her dissent from the original order, is that

the consent agreement creates doctrinal confusion. The Order contradicts the decisions of federal courts, standard-setting organizations (“SSOs”), and other stakeholders about the availability of injunctive relief on SEPs and the meaning of concepts like willing licensee and FRAND.

The FTC’s statements in Bosch and this case should not be thought of as law on par with actual court decisions unless we want to allow the FTC to determine the scope of its own authority unilaterally.

This is no small issue. On July 30, the FTC used the Google settlement, along with the settlement in Bosch, as examples of the FTC’s authority in the area of policing SEPs during a hearing on the issue. And as FTC Chairwoman Ramirez noted in response to questions for the record in a different hearing earlier in 2013,

Section 5 of the FTC Act has been developed over time, case-by-case, in the manner of common law. These precedents provide the Commission and the business community with important guidance regarding the appropriate scope and use of the FTC’s Section 5 authority.

But because nearly all of these cases have resulted in consent orders with an administrative agency and have not been adjudicated in court, they aren’t, in fact, developed “in the manner of common law.” Moreover, settlements aren’t binding on anyone except the parties to the settlement. Nevertheless, the FTC has pointed to these sorts of settlements (and congressional testimony summarizing them) as sufficient guidance to industry on the scope of its Section 5 authority. But as we noted in our amicus brief in the Wyndham litigation (in which the FTC makes this claim in the context of its “unfair or deceptive acts or practices” authority):

Settlements (and testimony summarizing them) do not in any way constrain the FTC’s subsequent enforcement decisions; they cannot alone be the basis by which the FTC provides guidance on its unfairness authority because, unlike published guidelines, they do not purport to lay out general enforcement principles and are not recognized as doing so by courts and the business community.

Beyond this more general problem, the Google Final Order retains its own, substantive problem: considerable constraints upon injunctions. The problem with these restraints is twofold: (1) injunctions are very important to an efficient negotiation process, as recognized by the FTC itself; and (2) if patent holders may no longer pursue injunctions consistently with antitrust law, one would expect a reduction in consumer welfare.

In its 2011 Report on the “IP Marketplace,” the FTC acknowledged the important role of injunctions in preserving the value of patents and in encouraging efficient private negotiation.

Second, the credible threat of an injunction deters infringement in the first place. This results from the serious consequences of an injunction for an infringer, including the loss of sunk investment. Third, a predictable injunction threat will promote licensing by the parties. Private contracting is generally preferable to a compulsory licensing regime because the parties will have better information about the appropriate terms of a license than would a court, and more flexibility in fashioning efficient agreements. But denying an injunction every time an infringer’s switching costs exceed the economic value of the invention would dramatically undermine the ability of a patent to deter infringement and encourage innovation. For this reason, courts should grant injunctions in the majority of cases.

Building on insights from Commissioner Wright and Professor Kobayashi, I argued in my comments that injunctions create conditions that

increase innovation, the willingness to license generally and the willingness to enter into FRAND commitments in particular–all to the likely benefit of consumer welfare.

Monopoly power granted by IP law encourages innovation because it incentivizes creativity through expected profits. If the FTC interprets its UMC authority in a way that constrains the ability of patent holders to effectively police their patent rights, then less innovation would be expected–to the detriment of consumers as well as businesses.

And this is precisely what has happened. Innovative technology companies are responding to the current SEP enforcement environment exactly as we would expect them to—by avoiding the otherwise-consumer-welfare enhancing standardization process entirely.

Thus, for example, at a recent event sponsored by Global Competition Review (gated), representatives from Nokia, Ericsson, Siemens and Qualcomm made no bones about the problems they see and where they’re headed if they persist:

[Jenni Lukander, global head of competition law at Nokia] said the problem of “free-riding”, whereby technology companies adopt standard essential patents (SEPs) without complying with fair, reasonable and non-discriminatory (FRAND) licensing terms was a “far bigger problem” than patent holders pursuing injunctive relief. She said this behaviour was “unsustainable”, as it discouraged innovation and jeopardised standardisation.

Because of the current atmosphere, Lukander said, Nokia has stepped back from the standardisation process, electing either not to join certain standard-setting organisations (SSOs) or not to contribute certain technologies to these organisations.

The fact that every licence negotiation takes places “under the threat of injunction litigation” is not a sign of failure, said Lukander, but an indicator of the system working “as it was designed to work”.

This, said [Dan Hermele, director of IP rights and licensing for Qualcomm Europe], amounted to “reverse hold-up”. “The licensor is pressured to accept less than reasonable licensing terms due to the threat of unbalanced regulatory intervention,” he said, adding that the trend was moving to an “infringe and litigate model”, which threatened to harm innovators, particularly small and medium-sized businesses, “for whom IPR is their life blood”.

Beat Weibel, chief IP counsel at Siemens, said…innovation can only be beneficial if it occurs within a “safe and strong IP system,” he said, where a “willing licensee is favoured over a non-willing licensee” and the enforcer is not a “toothless tiger”.

It remains to be seen if the costs to consumers from firms curtailing their investments in R&D or withholding their patents from the standard-setting process will outweigh the costs (yes, some costs do exist; the patent system is not frictionless and it is far from perfect, of course) from the “over”-enforcement of SEPs lamented by critics. But what is clear is that these costs can’t be ignored. Reverse hold-up can’t be wished away, and there is a serious risk that the harm likely to be caused by further eroding the enforceability of SEPs by means of injunctions will significantly outweigh whatever benefits it may also confer.

Meanwhile, stay tuned for tomorrow’s TOTM blog symposium on “Regulating the Regulators–Guidance for the FTC’s Section 5 Unfair Methods of Competition Authority” for much more discussion on this issue.

Over at Law360 I have a piece on patent enforcement at the ITC (gated), focusing on the ITC’s two Apple-Samsung cases: one in which the ITC issued a final determination in which it found Apple to have infringed one of Samsung’s 3G-related SEPs, and the other (awaiting a final determination from the Commission) in which an ALJ found Samsung infringed four of Apple’s patents, including a design patent. Here’s a taste:

In fact, there is a strong argument in favor of ITC adjudication of FRAND-encumbered patents. As the name suggests, FRAND-encumbered patents must be licensed by their owners on reasonable, nondiscriminatory terms. Despite Apple’s claims that Samsung refused to negotiate, this seems unlikely (and the ITC found otherwise, of course). What’s more, post-adjudication, the FRAND requirement associated with a FRAND-encumbered patent remains.

As a result, negotiation over license terms for FRAND-encumbered patents can only be more likely than for other patents on which there is no duty to negotiate. Agreement over terms is similarly more likely as FRAND narrows the bargaining range for patent holders. What that means is that (1) avoiding a possible ITC exclusion order ex ante is a simple matter of entering into negotiations and licensing, an outcome that is required by FRAND, and (2) ex post (that is, after an exclusion order is issued), reinstating the ability to import and sell otherwise-infringing devices is also more readily accomplished, likewise through obligatory negotiation and licensing.

* * *

The ITC’s threat of injunctive relief can impel negotiation and licensing in all contexts, of course. But the absence of monetary damages, coupled with the inherent uncertainties surrounding design patents, the broad scope of enforcement and the vagaries of CBP’s implementation of ITC orders, is significantly more troubling in the design patent context. Thus, contrary to many critics’ assertions, the White House’s recent proposal and pending bills in Congress, it is actually FRAND-encumbered SEPs that are most amenable to adjudication and enforcement by the ITC.

As they say, read the whole thing.

Coincidentally, Verizon’s general counsel, Randal Milch, has an op-ed on the same topic in today’s Wall Street Journal. Notes Milch:

What we have warned is that patent litigation at the ITC—where the only remedy is to keep products from the American public—is too high-stakes a game for patent disputes. The fact that the ITC’s intellectual-property-dispute docket has nearly quadrupled over 15 years only raises the stakes further. Smartphone patent litigation accounts for a substantial share of that increase.

Here are three instances under which the president should veto an exclusion order:

  • When the patent holder isn’t practicing the technology itself. Courts have routinely found shutdown relief inappropriate for non-practicing entities. Patent trolls shouldn’t be permitted to exclude products from our shores.
  • When the patent holder has already agreed to license the patent on reasonable terms as part of standards setting. If the patent holder has previously agreed that a reasonable licensing fee is all it needs to be made whole, it shouldn’t get shutdown relief at the ITC.
  • When the infringing piece of the product isn’t that important to the overall product, and doesn’t drive consumer demand for the product at issue. There are more than 250,000 patents relevant to today’s smartphones. It makes no sense that exclusion could occur for infringement of the most minor patent.

Obviously, the second of these is implicated in the ITC’s SEP case. But, as I have noted before, this ignores (and exacerbates) the problem of reverse holdup—where potential licensees refuse to license on reasonable terms. As the ITC noted in the Apple-Samsung SEP case:

The ALJ found that the evidence did not support a conclusion that Samsung failed to offer Apple a license on FRAND terms.

***

Apple argues that Samsung was obligated to make an initial offer to Apple of a specific fair and reasonable royalty rate. The evidence on record does not support Apple’s position….Further, there is no legal authority for Apple’s argument. Indeed, the limited precedent on the issue appears to indicate that an initial offer need not be the terms of a final FRAND license because the SSO intends the final license to be accomplished through negotiation. See Microsoft Corp. v. Motorola, Inc. (because SSOs contemplated that RAND terms be determined through negotiation, “it logically does not follow that initial offers must be on RAND terms”) [citation omitted].

***

Apple’s position illustrates the potential problem of so-called reverse patent hold-up, a concern identified in many of the public comments received by the Commission. In reverse patent hold-up, an implementer utilizes declared-essential technology without compensation to the patent owner under the guise that the patent owner’s offers to license were not fair or reasonable. The patent owner is therefore forced to defend its rights through expensive litigation. In the meantime, the patent owner is deprived of the exclusionary remedy that should normally flow when a party refuses to pay for the use of a patented invention.

One other note, on the point about the increase in patent litigation: This needs to be understood in context. As this article notes:

Over the last 40 years the number of patent lawsuits filed in the US has stayed relatively constant as a percentage of patents issued.

And the accompanying charts paint the picture even more clearly. Perhaps the numbers at the ITC would look somewhat different, as the ITC seems to have increased in importance as a locus of patent litigation activity. But the larger point about the purported excess of patent litigation remains. I hasten to add that this doesn’t mean that the system is perfect, in particular (as my Law360 piece notes) with respect to the issuance and enforcement of design patents. But that may be an argument for USPTO reform, design patent reform, and/or, as Scott Kieff (who, by the way, finally got a hearing last week on his nomination by President Obama to be a member of the ITC) has argued, targeted reforms of the presumption of validity and fee-shifting. But it’s not a strong argument against injunctive remedies (at the ITC or elsewhere) in SEP cases.

Patent Activity by Year (in Terms of Applications Filed, Patents Issued and Lawsuits Filed)

Patent Lawsuits Normalized Against Patents Issued and Applications Filed

Patent Activity by Year (in Terms of Applications Filed, Patents Issued and Lawsuits Filed), 5-year Moving Averages

On July 10 a federal judge ruled that Apple violated antitrust law by conspiring to raise prices of e-books when it negotiated deals with five major publishers. I’ve written on the case and the issues involved in it several times, including here, here, here and here. The most recent of these was titled, “Why I think the government will have a tough time winning the Apple e-books antitrust case.” I’m hedging my bets with the title this time, but it’s fairly clear to me that the court got this case wrong.

The predominant sentiment among pundits following the decision seems to be approval (among authors, however, the response to the suit has been decidedly different). Supporters believe it will lower e-book prices and instigate a shift in the electronic publishing industry toward some more-preferred business model. This sort of reasoning is dangerous and inconsistent with principled, restrained antitrust. Neither the government nor its supporting commentators should use, or applaud the use of, antitrust to impose the government’s (or anyone else’s) preferred business model on an industry. And lower prices in the short run, while often an indication of increased competition, are not, by themselves, sufficient to determine that a business model is efficient in the long run.

For example, in a recent article, Mark Lemley is quoted supporting the outcome, noting that it may spur a shift toward his preferred model of electronic publishing:

It also makes no sense that publishers, not authors, capture most of the revenue from e-books, when they do very little of the work. I understand why publishers are reluctant to give up their old business model, but if they want to survive in the digital world, it’s time to make some changes.

As noted, there is no basis for using antitrust enforcement to coerce an industry to shift to a particular distribution of profits simply because “it’s time to make some changes.” Lemley’s characterization of the market’s dynamics is also seriously lacking in economic grounding (and the Authors Guild response to the suit linked above suggests the same). The economics of entrepreneurship has an impressive intellectual pedigree that began with Frank Knight, was further developed by Joseph Schumpeter, Israel Kirzner and Harold Demsetz, among others, and continues to this day with its inclusion as a factor of production. (On the development of this tradition and especially Harold Demsetz’s important contribution to it, see here). The implicit claim that publishers’ and authors’ interests (to say nothing of consumers’ interests) are simply at odds, and that the “right” distribution of profits would favor authors over publishers based on the amount of “work” they do is economically baseless. Although it is a common claim, reflecting either idiosyncratic preferences or ignorance about the role of content publishers and distributors in the e-book marketplace and the role of entrepreneurship more generally, it is nonetheless mistaken and has no place in a consumer-welfare-based assessment of the market or antitrust intervention in it.

It’s also utterly unclear how the antitrust suit would do anything to change the relative distribution of profits between publishers and authors. In fact, the availability of direct publishing (offered by both Amazon and Apple) is the most likely disruptor of that dynamic, and authors could only be helped by an increase in competition among platforms—in other words, by Apple’s successful entry into the market.

Apple entered the e-books market as a relatively small upstart battling a dominant incumbent. That it did so by offering publishers (suppliers) attractive terms to deal with its new iBookstore is no different than a new competitor in any industry offering novel products or loss-leader prices to attract customers and build market share. When new entry then induces an industry-wide shift toward the new entrants’ products, prices or business model it’s usually called “competition,” and lauded as the aim of properly functioning markets. The same should be true here.

Despite the court’s claim that

there is overwhelming evidence that the Publisher Defendants joined with each other in a horizontal price-fixing conspiracy,

that evidence is actually extremely weak. What is unclear is why the publishers would need a conspiracy when they rarely compete against each other directly.

The court states that

To protect their then-existing business model, the Publisher Defendants agreed to raise the prices of e-books by taking control of retail pricing.

But despite the use of the antitrust trigger-words, “agreed to raise prices,” this agreement is not remotely clear, and rests entirely on circumstantial evidence (more on this later). None of the evidence suggests actual agreement over price, and none of the evidence demonstrates conclusively any real incentive for the publishers to reach “agreement” at all. In actuality, publishers rarely compete against each other directly (least of all on price); instead, for each individual publisher (and really for each individual title), the most relevant competition for this case is between the e-book version of a particular title and its physical counterpart. In this situation it should matter little to any particular e-book’s sales whether every other e-book in the world is sold at the same price or even a lower price.

While the opinion asserts that each publisher

could also expect to lose substantial sales if they unilaterally raised the prices of their own e-books and none of their competitors followed suit,

it also states that

there is no evidence that the Publisher Defendants have ever competed with each other on price. To the contrary, several of the Publishers’ CEOs explained that they have not competed with each other on that basis.

These statements are difficult to reconcile, but the evidence supports the latter statement, not the former.

The only explanation offered by the court for the publishers’ alleged need for concerted action is an ambiguous claim that Amazon would capitulate in shifting to the agency model only if every publisher pressured it to do so simultaneously. The court claims that

if the Publisher Defendants were going to take control of e-book pricing and move the price point above $9.99, they needed to act collectively; any other course would leave an individual Publisher vulnerable to retaliation from Amazon.

But it’s not clear why this would be so.

On the one hand, if Apple really were the electronic publishing juggernaut implied by this antitrust action, this concern should be minimal: Publishers wouldn’t need Amazon and could simply sell their e-books through Apple’s iBookstore. In this case the threat of even any individual publisher’s “retaliation” against Amazon (decamping to Apple) would suffice to shift relative bargaining power between the publishers and Amazon, and concerted action wouldn’t be necessary. On this theory, the fact that it was only after Apple’s entry that Amazon agreed to shift to the agency model—a fact cited by the court many times to support its conclusions—is utterly unremarkable.

That prices may have shifted as well is equally unremarkable: The agency model puts pricing decisions in publishers’ hands (who, as I’ve previously discussed, have very different incentives than Amazon) where before Amazon had control over prices. Moreover, even when Apple presented evidence that average e-book prices actually fell after its entrance into the market, the court demanded that Apple prove a causal relationship between its entrance and lower overall prices. (Even the DOJ’s own evidence shows, at worst, little change in price, despite its heated claims to the contrary.) But the burden of proof in such cases rests with the government to prove that Apple caused prices to rise, not for Apple to explain why they fell.

On the other hand, if the loss of Amazon as a retail outlet were really so significant for publishers, Apple’s ability to function as the lynchpin of the alleged conspiracy is seriously questionable. While the agency model coupled with the persistence of $9.99 pricing by Amazon would seem to mean reduced revenue for publishers on each book sold through Apple’s store, the relatively trivial number of Apple sales compared with Amazon’s, particularly at the outset, would be of little concern to publishers, and thus to Amazon. In this case it is difficult to believe that publishers would threaten their relationships with Amazon for the sake of preserving the return on their newly negotiated contracts with Apple (and even more difficult to believe that Amazon would capitulate), and the claimed coordinating effect of the MFN provisions is difficult to sustain.

The story with respect to Amazon is questionable for another reason. While the court claims that the publishers’ concern with Amazon’s $9.99 pricing was its effect on physical book sales, it is extremely hard to believe that somehow $12.99 for the electronic version of a $30 (or, often, even more expensive) physical book would be significantly less damaging to physical book sales. Moreover, the evidence put forth by the DOJ and found persuasive by the court all pointed to e-book revenues alone, not physical book sales, as the issue of most concern to publishers (thus, for example, Steve Jobs wrote to HarperCollins’ CEO that it could “[k]eep going with Amazon at $9.99. You will make a bit more money in the short term, but in the medium term Amazon will tell you they will be paying you 70% of $9.99. They have shareholders too.”).

Moreover, as Joshua Gans points out, the agency model that Amazon may have entered into with the publishers would have been particularly unhelpful in ensuring monopoly returns for the publishers (we don’t know the exact terms of their contracts, however, and there are reports from trial that Amazon’s terms were “identical” to Apple’s):

While Apple gave publishers a 70 percent share of book sales and the ability to set their own price, Amazon offered a menu. If you price below $9.99 for a book, Amazon’s share will be 70 percent but if you price above $10, Amazon only returns 35 percent to the publisher. Amazon also charged publishers a delivery fee based on the book’s size (in kb).

Thus publishers could, of course, raise prices to $12.99 in both Apple’s and Amazon’s e-book stores, but, if this effective price cap applied, doing so would result in a significant loss of revenue from Amazon. In other words, the court’s claim—that, having entered into MFNs with Apple, the publishers then had to move Amazon to the agency model to ensure that they didn’t end up being forced by the MFNs to sell books via Apple (on the less-attractive agency terms) at Amazon’s $9.99—is far-fetched. To the extent that raising Amazon’s prices above $10 may have cut royalties almost in half, the MFNs with Apple would be extremely unlikely to have such a powerful effect. But, as noted above, because of the relative sales volumes involved the same dynamic would have applied even under identical terms.
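To make the arithmetic behind that revenue loss concrete, here is a minimal sketch in Python. The 70%/35% tiers come from the Gans quote above; the treatment of prices between $9.99 and $10 and the omission of the per-kilobyte delivery fee are simplifying assumptions of mine, not terms from the actual contracts:

```python
# Hedged illustration, not the actual contract terms: per-unit publisher
# revenue under the tiered menu Joshua Gans describes for Amazon.
def amazon_publisher_revenue(price):
    """70% share at or below $9.99, 35% share above that (the quote's
    $9.99/$10 gap is collapsed here for simplicity); the size-based
    delivery fee Gans mentions is omitted."""
    share = 0.70 if price <= 9.99 else 0.35
    return price * share

low = amazon_publisher_revenue(9.99)    # about $6.99 per copy
high = amazon_publisher_revenue(12.99)  # about $4.55 per copy
```

At these illustrative prices, moving from $9.99 to $12.99 halves the revenue share (70% to 35%) and, even after the higher cover price, cuts per-unit revenue by roughly a third: exactly the deterrent the MFNs would have had to overcome.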

It is true, of course, that Apple cares about price differences between books sold through its iBookstore and the same titles sold through other electronic retailers—and thus it imposed MFN clauses on the publishers. But this is not anticompetitive. In fact, by facilitating Apple’s entry, the MFN clauses plainly increased competition by introducing a new competitor to the industry. What’s more, the terms of Apple’s agreements with the publishers exactly mirror the terms it uses for apps and music sold through the iTunes store. And as Gordon Crovitz noted:

As this column reported when the case was brought last year, Apple executive Eddy Cue in 2011 turned down my effort to negotiate different terms for apps by news publishers by telling me: “I don’t think you understand. We can’t treat newspapers or magazines any differently than we treat FarmVille.” His point was clear: The 30% revenue-share model is how Apple does business with everyone. It is not, as the government alleges, a scheme Apple concocted to fix prices with book publishers.

Another important error in the case (and, unfortunately, one to which Apple’s lawyers acceded) is the treatment of “trade e-books” as the relevant market. For antitrust purposes, there is no generalized e-book (or physical book, for that matter) market. As noted above, the court itself acknowledged that the publishers “have [n]ever competed with each other on price.” The price of Stephen King’s latest novel likely has, at best, a trivial effect on sales of…nearly every other fiction book published, and probably zero effect on sales of non-fiction books.

This is important because the court’s opinion turns on mostly circumstantial evidence of an alleged conspiracy among publishers to raise prices and on the role of concerted action in protecting publishers from being “undercut” by their competitors. But in a world where publishers don’t compete on price (and where the alleged agreement would have reduced the publishers’ revenues in the short run and done little if anything to shore up physical book sales in the long run), it is far-fetched to interpret this evidence as the court does—to infer a conspiracy to raise prices.

Meanwhile, by restricting itself to consideration of competitive effects in the e-book market alone, the court also inappropriately and without commentary dispenses with Apple’s pro-competitive justifications for its conduct. Put simply, Apple contends that its entry into the e-book retail and reader markets was facilitated by its contract terms. But the court ignores these arguments.

On the one hand, it does so because it treats this as a per se case, in which procompetitive effects are irrelevant. But the court’s determination to treat this as a per se case—with its lengthy recitation of relevant legal precedent and only cursory application of precedent to the facts of the case—is suspect. As I have noted before:

What would [justify per se treatment] is if the publishers engaged in concerted action to negotiate these more-favorable terms with other publishers, and what would be problematic for Apple is if its agreement with each publisher facilitated that collusion.

But I don’t see any persuasive evidence that the terms of Apple’s deals with each publisher did any such thing. For MFNs to perform the function alleged by the DOJ it seems to me that the MFNs would have to contribute to the alleged agreement between the publishers, just as the actions of the vertical co-conspirators in Interstate Circuit and Toys-R-Us were alleged to facilitate coordination. But neither the agency agreement itself nor the MFN and price cap terms in the contracts in any way affected the publishers’ incentive to compete with each other. Nor, as noted above, did they require any individual publisher to cause its books to be sold at higher prices through other distributors.

Even if it is true that the publishers participated in a per se illegal horizontal price fixing scheme (and despite the court’s assertion that this is beyond dispute, the evidence is not nearly so clear as the court suggests), Apple’s unique role in that alleged scheme can’t be analyzed in the same fashion. As Leegin notes (and the court in this case quotes), for conduct to merit per se treatment it must “always or almost always tend to restrict competition and decrease output.” But the conduct at issue here—whether somehow coupled with a horizontal price fixing scheme or not—doesn’t meet this standard. The agency model, the MFN terms in the publishers’ contracts with Apple, and the efforts by Apple to secure broad participation by the largest publishers before entering the market are all potentially—if not likely—procompetitive. And output seems to have increased substantially following Apple’s entry into the e-book retail market.

In short, I continue to believe that the facts of this case do not merit per se treatment, and there is a good chance the court’s opinion could be overturned on this ground. For this reason, its rejection of Apple’s procompetitive arguments was inappropriate.

But even in its brief “even under the rule of reason…” analysis, the court improperly rejects Apple’s procompetitive arguments. The court’s consideration of these arguments is basically summed up here:

The pro-competitive effects to which Apple has pointed, including its launch of the iBookstore, the technical novelties of the iPad, and the evolution of digital publishing more generally, are phenomena that are independent of the Agreements and therefore do not demonstrate any pro-competitive effects flowing from the Agreements.

But this is factually inaccurate. Apple has claimed that its entry—and thus at minimum its development and marketing of the iPad as an e-reader and its creation of the iBookstore—were indeed functions of the contract terms and the simultaneous acceptance by the largest publishers of these terms.

The court goes on to assert that, even if the claimed pro-competitive effect was the introduction of competition into the e-book market,

Apple demanded, as a precondition of its entry into the market, that it would not have to compete with Amazon on price. Thus, from the consumer’s perspective — a not unimportant perspective in the field of antitrust — the arrival of the iBookstore brought less price competition and higher prices.

In making this claim the court effectively—and improperly—condemns MFNs to per se illegal status. In doing so the court claims that its opinion’s reach is not so broad:

this Court has not found that any of these [agency agreements, MFN clauses, etc.]…components of Apple’s entry into the market were wrongful, either alone or in combination. What was wrongful was the use of those components to facilitate a conspiracy with the Publisher Defendants

But the claimed absence of retail price competition that accompanied Apple’s entry is entirely a function of the MFN clauses: Whether at $9.99 or $12.99, the MFN clauses were what ensured that Apple’s and Amazon’s prices would be the same, and disclaimer or not they are swept into the court’s holding.

This effective condemnation of MFN clauses, while plainly sought by the DOJ, is simply inappropriate as a matter of law. In order to condemn Apple’s conduct under the per se rule, the court relies on the operation of the MFNs in allegedly reducing competition and raising prices to make its case. But that these do not “always or almost always tend to restrict competition and reduce output” is clear. While the DOJ may view such terms otherwise (more on this here and here), courts have not done so, and Leegin’s holding that such vertical restraints are to be assessed under the rule of reason still controls. The court’s use of the per se standard and its refusal to consider Apple’s claimed pro-competitive effects are improper.

Thus I (somewhat more cautiously this time…) suggest that the court’s decision may be overturned on appeal, and I most certainly think it should be. It seems plainly troubling as a matter of economics, and inappropriate as a matter of law.

Trial begins today in the Southern District of New York in United States v. Apple (the Apple e-books case), which I discussed previously here. Along with co-author Will Rinehart, I also contributed an essay to a discussion of the case in Concurrences (alongside contributions from Jon Jacobson and Mark Powell, among others).

Much of my writing on the case has essentially addressed it as a rule of reason case, assessing the economic merits of Apple’s contract terms. And as I mention in this Reuters article from yesterday on the case, one of the key issues in this analysis (and one of the government’s key targets in the case) is the use of MFN clauses.

But as Josh pointed out in a blog post last year,

my hunch is that if the case is litigated its legacy will be as an “agreement” case rather than what it contributes to rule of reason analysis.  In other words, if Apple gets to the rule of reason, the DOJ (like most plaintiffs in rule of reason cases) are likely to lose — especially in light of at least preliminary evidence of dramatic increases in output.  The critical question — I suspect — will be about proof of an actual naked price fixing agreement among publishers and Apple, and as a legal matter, what evidence is sufficient to establish that agreement for the purposes of Section 1 of the Sherman Act.

He’s likely correct, of course, that a central question at trial will be whether or not this is a per se or rule of reason case, and that trial will focus in significant part on the sufficiency of the evidence of agreement. But because this determination will turn considerably on the purpose and function of the MFN and price cap terms in Apple’s agreements with the publishers, I don’t think there should (or will) be much difference. Nor do I think the government should (or will) win.

Before the court can apply the per se rule, it must satisfy itself that the conduct at issue “would always or almost always tend to restrict competition and decrease output.” But it is not true as a matter of economics — and certainly not true as a matter of law — that MFNs meet this standard.

After State Oil v. Kahn there can be no question about the rule of reason (if not per se legal) status of price caps. And as the Court noted in Leegin:

Resort to per se rules is confined to restraints, like those mentioned, “that would always or almost always tend to restrict competition and decrease output.” To justify a per se prohibition a restraint must have “manifestly anticompetitive” effects, and “lack any redeeming virtue.”

As a consequence, the per se rule is appropriate only after courts have had considerable experience with the type of restraint at issue, and only if courts can predict with confidence that it would be invalidated in all or almost all instances under the rule of reason. It should come as no surprise, then, that “we have expressed reluctance to adopt per se rules with regard to restraints imposed in the context of business relationships where the economic impact of certain practices is not immediately obvious.” And, as we have stated, a “departure from the rule-of-reason standard must be based upon demonstrable economic effect rather than . . . upon formalistic line drawing.”

After Leegin, all vertical non-price restraints, including MFNs, are assessed under the rule of reason. Courts neither have “considerable experience” with MFNs, nor can they remotely “predict with confidence that they would be invalidated in all or almost all instances under the rule of reason.” As a recent article in Antitrust points out,

The DOJ and FTC have brought approximately ten cases over the last two decades challenging MFNs. Most of these cases involved the health care industry and all were resolved by consent judgments.

Even if the court does take a harder look at whether a per se rule should govern, however, as a practical matter there is not likely to be much difference between a “does this merit per se treatment” analysis and analysis of the facts under the rule of reason. As the Court pointed out in California Dental Association,

The truth is that our categories of analysis of anticompetitive effect are less fixed than terms like “per se,” “quick look,” and “rule of reason” tend to make them appear. We have recognized, for example, that “there is often no bright line separating per se from Rule of Reason analysis,” since “considerable inquiry into market conditions” may be required before the application of any so-called “per se” condemnation is justified. “[W]hether the ultimate finding is the product of a presumption or actual market analysis, the essential inquiry remains the same–whether or not the challenged restraint enhances competition.”

And as my former classmate Tom Nachbar points out in a recent article,

it’s hard to identify much relative simplicity in the per se rule. Indeed, the moniker “per se” has become somewhat misleading, as cases determining whether to apply the per se or rule of reason become as long as ones actually applying the rule of reason itself.

Of course that doesn’t end the analysis, and the government’s filings do all they can to sidestep the direct antitrust treatment of MFNs and instead assert that they (and other evidence alleged) permit the court to infer Apple’s participation as the coordinator of a horizontal price-fixing conspiracy among the publishers.

But as Apple argues in its filings,

The[ relevant] cases mandate an inquiry into the possibility that the challenged contract terms and negotiation approach were in Apple’s independent economic interests. The evidence is overwhelming—not just possible—that Apple acted for its own valid business reasons and not to “raise consumer prices market-wide.”…Plaintiffs ask this Court to infer Apple’s participation in a conspiracy from (1) its MFN and price cap terms and (2) negotiations with publishers.

* * *

What is obvious, however, is that Apple has not fixed prices with its competitors. What is remarkable is that the government seeks to impose grave legal consequences on an inherently pro-competitive act—entry—accomplished via agency, an MFN, and price caps, none of which is per se unlawful.

The government’s strenuous objection to Apple’s interpretation of the controlling Supreme Court authority, Monsanto v. Spray-Rite, notwithstanding, it’s difficult to see the MFN clauses as evidence of Apple’s participation in the publishers’ alleged conspiracy.

An important point supporting Apple’s argument here is that, unlike the “hubs” in the other “hub and spoke” conspiracies on which the DOJ bases its case, Apple has no significant leverage over the alleged co-conspirators, and thus no power to coordinate — let alone enforce — a price-fixing scheme. As Apple argues in its Opposition brief,

The only “power” Apple could wield over the publishers was the attractiveness of a business opportunity—hardly the “make or break” scenarios found in Interstate Circuit and [Toys-R-Us]. Far from capitulating to Apple’s requested core business terms, the publishers fought Apple tooth and nail and negotiated intensely to the very end, and the largest, Random House, declined.

And as Will and I note in our Concurrences article,

MFNs are essentially an important way of…offering some protection against publishers striking a deal with a competitor that leaves Apple forced to price its ebooks out of the market.

There is nothing, that we know of, in the MFNs or elsewhere in the agreements that requires the publishers to impose higher resale prices elsewhere, or prevents the publishers from selling through Apple at a lower price, if necessary. Most important, for Apple’s negotiated prices to dominate in the market it would have to enjoy market power – a condition, currently at least, that is exceedingly unlikely given its 10% share of the ebook market.

The point is that, even if everything the government alleges about the publishers’ price fixing scheme were true, it’s extremely difficult to see Apple as a co-conspirator in such a scheme. The Supreme Court’s holding in Monsanto stands for nothing if not the principle that courts may not infer a vertical party’s participation in a horizontal price-fixing scheme from the existence of otherwise-legal and -defensible interactions between the vertically related parties. Because MFNs have valid purposes outside the realm of price-fixing, they may not be converted into illegal conduct on Apple’s part simply because they might also “sharpen [a publisher's] incentives” to try to raise prices elsewhere.

Remember, we are in a world where the requisite anticompetitive conduct can’t be simply the vertical restraint itself. Rather, we’re evaluating whether the vertical restraint was part of a broader anticompetitive scheme among the publishers. For the MFN clauses to be part of that alleged scheme they must have an identifiable place in the scheme.

First of all, it is unremarkable that Apple might offer terms to any individual publisher (or to all publishers independently) that might be more favorable to the publisher than terms it is getting elsewhere; that’s how a new entrant in Apple’s position attracts suppliers. It is likewise unremarkable that Apple would seek to impose terms (like the MFN) that would preserve its ability to offer a publisher’s books for the same price they are offered elsewhere (which is necessary because the agency agreements negotiated by Apple otherwise remove pricing authority from Apple and confer it on the publishers themselves). And finally it is unremarkable that each publisher would try to negotiate similarly favorable terms with other distributors (or, more accurately, continue to try: bargaining over distribution terms with other distributors hardly started only after the agreements were signed with Apple). What would be notable is if the publishers engaged in concerted action to negotiate these more-favorable terms with other distributors, and what would be problematic for Apple is if its agreement with each publisher facilitated that collusion.

But I don’t see any persuasive evidence that the terms of Apple’s deals with each publisher did any such thing. For MFNs to perform the function alleged by the DOJ it seems to me that the MFNs would have to contribute to the alleged agreement between the publishers, just as the actions of the vertical co-conspirators in Interstate Circuit and Toys-R-Us were alleged to facilitate coordination. But neither the agency agreement itself nor the MFN and price cap terms in the contracts in any way affected the publishers’ incentive to compete with each other. Nor, as noted above, did they require any individual publisher to cause its books to be sold at higher prices through other distributors.

On this latter point, the DOJ alleges that the MFNs “sharpen[ed publishers'] incentives” to raise prices:

If a retailer were allowed to remain on wholesale terms, and that retailer continued to price new release e-books at $9.99, the Publisher Defendant would be forced to lower the iBookstore price to match the $9.99 price

Not only does this say nothing about the incentives of the publishers to compete with each other on price (except that it may have increased that incentive by undermining the prevailing $9.99-for-all-books standard), it seems far-fetched to suggest that fear of having to lower prices for books sold in Apple’s relatively trivial corner of the market would have an appreciable effect on a publisher’s incentives to raise prices elsewhere. For what it’s worth, it also seems far-fetched to suggest that Apple’s motivation was to raise prices given that e-book sales generate only about 0.0005% of Apple’s total revenues.

Beyond this, the DOJ essentially argues that Apple coordinated agreement among the publishers to accept the terms being offered by Apple, with the intent and effect that this would lead to imposition by the publishers of similar terms (and higher prices) on other distributors. Perhaps, but it’s a stretch. And if it is true, it isn’t because of the MFN clauses. Moreover, it isn’t clear to me (maybe I’m missing some obvious controlling case law?) that agreement over the type of contract used amounts to an illegal horizontal agreement; arguably in this case, at least, it is closer to an ancillary restraint or justified agreement (as in BMI, e.g.) than, say, a group boycott or bid rigging. In any case, if the DOJ has a case at all turning on this scenario, I think it will have to be based entirely on the alleged evidence of direct coordination (i.e., communications between Apple and publishers during dinners and phone calls) rather than the operation of the contract terms themselves.

Whatever happens, it will be interesting to see how the trial unfolds.

[Cross posted at the Center for the Protection of Intellectual Property blog.]

Today’s public policy debates frame copyright policy solely in terms of a “trade off” between the benefits of incentivizing new works and the social deadweight losses imposed by the access restrictions imposed by these (temporary) “monopolies.” I recently posted to SSRN a new research paper, called How Copyright Drives Innovation in Scholarly Publishing, explaining that this is a fundamental mistake that has distorted the policy debates about scholarly publishing.

This policy mistake is important because it has led commentators and decision-makers to dismiss as irrelevant to copyright policy the investments by scholarly publishers of $100s of millions in creating innovative distribution mechanisms in our new digital world. These substantial sunk costs are in addition to the $100s of millions expended annually by publishers in creating, publishing and maintaining reliable, high-quality, standardized articles in a wide-ranging variety of academic disciplines and fields of research. The articles themselves now number in the millions; in 2009, for instance, over 2,000 publishers issued almost 1.5 million articles just in the scientific, technical and medical fields, exclusive of the humanities and social sciences.

The mistaken incentive-to-invent conventional wisdom in copyright policy is further compounded by widespread misinformation today about the allegedly “zero cost” of digital publication. As a result, many people are simply unaware of the substantial investments in infrastructure, skilled labor and other resources required to create, publish and maintain scholarly articles on the Internet and in other digital platforms.

This is not merely a so-called “academic debate” about copyright policy and publishing.

The policy distortion caused by the narrow, reductionist incentive-to-create conventional wisdom, when combined with the misinformation about the economics of digital business models, has been spurring calls for “open access” mandates for scholarly research, such as at the National Institutes of Health and in recently proposed legislation (FASTR Act) and in other proposed regulations. This policy distortion even influenced Justice Breyer’s opinion in the recent decision in Kirtsaeng v. John Wiley & Sons (U.S. Supreme Court, March 19, 2013), as he blithely dismissed commercial incentives as being irrelevant to fundamental copyright policy. These legal initiatives and the Kirtsaeng decision are motivated in various ways by the incentive-to-create conventional wisdom, by the misunderstanding of the economics of scholarly publishing, and by anti-copyright rhetoric on both the left and right, all of which has become more pervasive in recent years.

But, as I explain in my paper, courts and commentators have long recognized that incentivizing authors to produce new works is not the sole justification for copyright—copyright also incentivizes intermediaries like scholarly publishers to invest in and create innovative legal and market mechanisms for publishing and distributing articles that report on scholarly research. These two policies—the incentive to create and the incentive to commercialize—are interrelated, as both are necessary in justifying how copyright law secures the dynamic innovation that makes possible the “progress of science.” In short, if the law does not secure the fruits of labors of publishers who create legal and market mechanisms for disseminating works, then authors’ labors will go unrewarded as well.

As Justice Sandra Day O’Connor famously observed in the 1984 decision in Harper & Row v. Nation Enterprises: “In our haste to disseminate news, it should not be forgotten that the Framers intended copyright itself to be the engine of free expression. By establishing a marketable right to the use of one’s expression, copyright supplies the economic incentive to create and disseminate ideas.” Thus, in Harper & Row, the Supreme Court reached the uncontroversial conclusion that copyright secures the fruits of productive labors “where an author and publisher have invested extensive resources in creating an original work.” (emphases added)

This concern with commercial incentives in copyright law is not just theory; in fact, it is most salient in scholarly publishing because researchers are not motivated by the pecuniary benefits offered to authors in conventional publishing contexts. As a result of the policy distortion caused by the incentive-to-create conventional wisdom, some academics and scholars now view scholarly publishing by commercial firms who own the copyrights in the articles as “a form of censorship.” Yet, as courts have observed: “It is not surprising that [scholarly] authors favor liberal photocopying . . . . But the authors have not risked their capital to achieve dissemination. The publishers have.” As economics professor Mark McCabe observed (somewhat sardonically) in a research paper released last year for the National Academy of Sciences: he and his fellow academic “economists knew the value of their journals, but not their prices.”

The widespread ignorance among the public, academics and commentators about the economics of scholarly publishing in the Internet age is quite profound relative to the actual numbers.  Based on interviews with six different scholarly publishers—Reed Elsevier, Wiley, SAGE, the New England Journal of Medicine, the American Chemical Society, and the American Institute of Physics—my research paper details for the first time ever in a publication and at great length the necessary transaction costs incurred by any successful publishing enterprise in the Internet age.  To take but one small example from my research paper: Reed Elsevier began developing its online publishing platform in 1995, a scant two years after the advent of the World Wide Web, and its sunk costs in creating this first publishing platform and then digitally archiving its previously published content was over $75 million. Other scholarly publishers report similarly high costs in both absolute and relative terms.

Given the widespread misunderstandings of the economics of Internet-based business models, it bears noting that such high costs are not unique to scholarly publishers. Microsoft reportedly spent $10 billion developing Windows Vista before it sold a single copy, and it ultimately did not sell many at all. Google regularly invests $100s of millions, such as $890 million in the first quarter of 2011, in upgrading its data centers. It is somewhat surprising that such things still have to be pointed out a scant decade after the bursting of the dot-com bubble, a bubble precipitated by exactly the same mistaken view that businesses have somehow been “liberated” from the economic realities of cost by the Internet.

Just as with the extensive infrastructure and staffing costs, the actual costs incurred by publishers in operating the peer review system for their scholarly journals are also widely misunderstood. Individual publishers now receive hundreds of thousands—the large scholarly publisher, Reed Elsevier, receives more than one million—manuscripts per year. Reed Elsevier’s annual budget for operating its peer review system is over $100 million, which reflects the full scope of staffing, infrastructure, and other transaction costs inherent in operating a quality-control system that rejects 65% of the submitted manuscripts. Reed Elsevier’s budget for its peer review system is consistent with industry-wide studies that have reported that the peer review system costs approximately $2.9 billion annually in operating costs (translating into dollars the £1.9 billion reported in the study). For those articles accepted for publication, there are additional, extensive production costs, and then there are extensive post-publication costs in updating hypertext links of citations, cyber security of the websites, and related digital issues.

In sum, many people mistakenly believe that scholarly publishers are no longer necessary because the Internet has made moot all such intermediaries of traditional brick-and-mortar economies—a viewpoint reinforced by the equally mistaken incentive-to-create conventional wisdom in the copyright policy debates today. But intermediaries like scholarly publishers face the exact same incentive problem that is universally recognized for authors by the incentive-to-create conventional wisdom: no one will make the necessary investments to create or distribute a work if the fruits of their labors are not secured to them. This basic economic fact—that dynamic development of innovative distribution mechanisms requires substantial investment in both people and resources—is what makes commercialization an essential feature of both copyright policy and law (and of all intellectual property doctrines).

It is for this reason that copyright law has long promoted and secured the value that academics and scholars have come to depend on in their journal articles—reliable, high-quality, standardized, networked, and accessible research that meets the differing expectations of readers in a variety of fields of scholarly research. This is the value created by the scholarly publishers. Scholarly publishers thus serve an essential function in copyright law by making the investments in and creating the innovative distribution mechanisms that fulfill the constitutional goal of copyright to advance the “progress of science.”

DISCLOSURE: The paper summarized in this blog posting was supported separately by a Leonardo Da Vinci Fellowship and by the Association of American Publishers (AAP). The author thanks Mark Schultz for very helpful comments on earlier drafts, and the AAP for providing invaluable introductions to the five scholarly publishers who shared their publishing data with him.

NOTE: Some small copy-edits were made to this blog posting.

Over at Forbes Berin Szoka and I have a lengthy piece discussing “10 Reasons To Be More Optimistic About Broadband Than Susan Crawford Is.” Crawford has become the unofficial spokesman for a budding campaign to reshape broadband. She sees cable companies monopolizing broadband, charging too much, withholding content and keeping speeds low, all in order to suppress disruptive innovation — and argues for imposing 19th century common carriage regulation on the Internet. Berin and I begin (we expect to contribute much more to this discussion in the future) to explain both why her premises are erroneous and why her prescription is faulty. Here’s a taste:

Things in the US today are better than Crawford claims. While Crawford claims that broadband is faster and cheaper in other developed countries, her statistics are convincingly disputed. She neglects to mention the significant subsidies used to build out those networks. Crawford’s model is Europe, but as Europeans acknowledge, “beyond 100 Mbps supply will be very difficult and expensive. Western Europe may be forced into a second fibre build out earlier than expected, or will find themselves within the slow lane in 3-5 years time.” And while “blazing fast” broadband might be important for some users, broadband speeds in the US are plenty fast enough to satisfy most users. Consumers are willing to pay for speed, but, apparently, have little interest in paying for the sort of speed Crawford deems essential. This isn’t surprising. As the LSE study cited above notes, “most new activities made possible by broadband are already possible with basic or fast broadband: higher speeds mainly allow the same things to happen faster or with higher quality, while the extra costs of providing higher speeds to everyone are very significant.”

Even if she’s right, she wildly exaggerates the costs. Using a back-of-the-envelope calculation, Crawford claims that slow downloads (compared to other countries) could cost the U.S. $3 trillion/year in lost productivity from wasted time spent “waiting for a link to load or an app to function on your wireless device.” This intentionally sensationalist claim, however, rests on a purely hypothetical average wait time in the U.S. of 30 seconds (vs. 2 seconds in Japan). Whatever the actual numbers might be, her methodology would still be shaky, not least because time spent waiting for laggy content isn’t necessarily simply wasted. And for most of us, the opportunity cost of waiting for Angry Birds to load on our phones isn’t counted in wages — it’s counted in beers or time on the golf course or other leisure activities. These are important, to be sure, but does anyone seriously believe our GDP would grow 20% if only apps were snappier? Meanwhile, actual econometric studies looking at the productivity effects of faster broadband on businesses have found that higher broadband speeds are not associated with higher productivity.
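The fragility of such a back-of-the-envelope estimate is easy to see with a quick sensitivity check. The sketch below uses purely illustrative placeholder inputs of our own choosing (the user count, waits per day, and hourly value of time are assumptions, not Crawford’s actual figures); the point is only that estimates of this kind swing by nearly an order of magnitude with the assumed wait time, and swing just as much with every other input.

```python
# Sensitivity sketch for a "wasted wait time" productivity estimate.
# All inputs are illustrative placeholders, NOT Crawford's actual figures.

def annual_wait_cost(users, waits_per_day, wait_seconds, hourly_value):
    """Dollar value of a year of waiting, if every second is valued at a wage."""
    hours_per_year = users * waits_per_day * wait_seconds * 365 / 3600
    return hours_per_year * hourly_value

# Hypothetical inputs: 250M users, 50 waits/day, time valued at $25/hour.
slow = annual_wait_cost(250e6, 50, 30, 25)  # assumed 30-second average wait
fast = annual_wait_cost(250e6, 50, 2, 25)   # assumed 2-second average wait

print(f"Implied annual cost at 30s waits: ${slow / 1e12:.2f} trillion")
print(f"Difference vs. 2s waits: ${(slow - fast) / 1e12:.2f} trillion")
```

Even with inputs generous enough to approach Crawford’s order of magnitude, the “loss” evaporates or balloons depending entirely on arbitrary choices—which is precisely why valuing leisure-time waits at a wage rate tells us little about GDP.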

* * *

So how do we guard against the possibility of consumer harm without making things worse? For us, it’s a mix of promoting both competition and a smarter, subtler role for government.

Despite Crawford’s assertion that the DOJ should have blocked the Comcast-NBCU merger, antitrust and consumer protection laws do operate to constrain corporate conduct, not only through government enforcement but also private rights of action. Antitrust works best in the background, discouraging harmful conduct without anyone ever suing. The same is true for using consumer protection law to punish deception and truly harmful practices (e.g., misleading billing or overstating speeds).

A range of regulatory reforms would also go a long way toward promoting competition. Most importantly, reform local franchising so competitors like Google Fiber can build their own networks. That means giving them “open access” not to existing networks but to the public rights of way under streets. Instead of requiring that franchisees build out to an entire franchise area—which often makes both new entry and service upgrades unprofitable—remove build-out requirements and craft smart subsidies to encourage competition to deliver high-quality universal service, and to deliver superfast broadband to the customers who want it. Rather than controlling prices, offer broadband vouchers to those who can’t afford it. Encourage telcos to build wireline competitors to cable by transitioning their existing telephone networks to all-IP networks, as we’ve urged the FCC to do (here and here). Let wireless reach its potential by opening up spectrum and discouraging municipalities from blocking tower construction. Clear the deadwood of rules that protect incumbents in the video marketplace—a reform with broad bipartisan appeal.

In short, there’s a lot of ground between “do nothing” and “regulate broadband like electricity—or railroads.” Crawford’s arguments simply don’t justify imposing 19th century common carriage regulation on the Internet. But that doesn’t leave us powerless to correct practices that truly harm consumers, should they actually arise.

Read the whole thing here.