Today, in Horne v. Department of Agriculture, the U.S. Supreme Court held that the Fifth Amendment requires that the Government pay just compensation when it takes personal property, just as when it takes real property, and that the Government cannot make raisin growers relinquish their property without just compensation as a condition of selling their raisins in interstate commerce. This decision represents a major victory for economic liberty, but it is at best a first step in reining in anticompetitive, cartel-like regulation by government. (See my previous discussion of this matter at Truth on the Market here and a more detailed discussion of today’s decision here.) A capsule summary of the Court’s holding follows.

Most American raisins are grown in California. Under a United States Department of Agriculture Raisin Marketing Order, California raisin growers must give a percentage of their crop to a Raisin Administrative Committee (a government entity largely composed of raisin producers appointed by the Secretary of Agriculture) to sell, allocate, or dispose of, and the government sets the compensation price that growers are paid for these “reserved” raisins. After selling the reserved raisins and deducting expenses, the Committee returns any net proceeds to the growers. The Hornes were assessed a fine of $480,000 plus a $200,000 civil penalty for refusing to set aside raisins for the government in 2002. The Hornes sued, arguing that the reserve requirement violated the Fifth Amendment’s Takings Clause. The Ninth Circuit rejected the Hornes’ claim that this was a per se taking, reasoning that personal property is entitled to less protection than real property, and concluded instead that the scheme should be analyzed as a regulatory taking, akin to a government condition on the grant of a land use permit. The Supreme Court reversed, holding that neither the text nor the history of the Takings Clause suggests that appropriation of personal property is different from appropriation of real property. The Court also held that the government may not avoid its categorical duty to pay just compensation by reserving to the property owner a contingent interest in the property. The Court further held that in this case the government mandate to surrender property as a condition of engaging in commerce effects a per se taking, noting that selling raisins in interstate commerce is “not a special governmental benefit that the Government may hold hostage, to be ransomed by the waiver of constitutional protection.” The Court majority determined that the case should not be remanded to the Ninth Circuit to calculate the amount of just compensation, because the government had already done so when it fined the Hornes $480,000, the fair market value of the raisins.

The Horne decision is a victory for economic freedom and the right of individuals not to participate in government cartel schemes that harm the public interest. Unfortunately, however, it is a limited one. As the dissent by Justice Sotomayor indicates, “the Government . . . can permissibly achieve its market control goals by imposing a quota without offering raisin producers a way of reaping any return whatsoever on the raisins they cannot sell.” In short, today’s holding turns entirely on the conclusion that the raisin marketing order involves a “physical taking” of raisins. A more straightforward regulatory scheme under which the federal government directly limited production by raisin growers (much as the government did to a small wheat farmer in Wickard v. Filburn) likely would pass constitutional muster under modern Commerce Clause jurisprudence.

Thus, if it is truly interested in benefiting the American public and ferreting out special interest favoritism in agriculture, Congress should give serious consideration to prohibiting far more than production limitations in agricultural marketing orders. More generally, it should consider legislation to bar any regulatory restrictions that have the effect of limiting the freedom of individual farmers to grow and sell as much of their crop as they please. Such a rule would promote general free market competition, to the benefit of American consumers and the American economy.

Today, in Kimble v. Marvel Entertainment, a case involving the technology underlying the Spider-Man Web-Blaster, the Supreme Court invoked stare decisis to uphold an old precedent based on bad economics. In so doing, the Court spun a tangled web of formalism that trapped economic common sense within it, forgetting that, as Spider-Man was warned in 1962, “with great power there must also come – great responsibility.”

In 1990, Stephen Kimble obtained a patent on a toy that allows children (and young-at-heart adults) to role-play as “a spider person” by shooting webs—really, pressurized foam string—“from the palm of [the] hand.” Marvel Entertainment made and sold a “Web-Blaster” toy based on Kimble’s invention, without remunerating him. Kimble sued Marvel for patent infringement in 1997, and the parties settled, with Marvel agreeing to buy Kimble’s patent for a lump sum (roughly a half-million dollars) plus a 3% royalty on future sales, with no end date set for the payment of royalties.

Marvel subsequently sought a declaratory judgment in federal district court confirming that it could stop paying Kimble royalties after the patent’s expiration date. The district court granted relief, the Ninth Circuit Court of Appeals affirmed, and the Supreme Court affirmed the Ninth Circuit. In an opinion by Justice Kagan, joined by Justices Scalia, Kennedy, Ginsburg, Breyer, and Sotomayor, the Court held that a patentee cannot continue to receive royalties for sales made after his patent expires. Invoking stare decisis, the Court reaffirmed Brulotte v. Thys (1964), which held that a patent licensing agreement that provided for the payment of royalties accruing after the patent’s expiration was illegal per se, because it extended the patent monopoly beyond its statutory time period. The Kimble Court stressed that stare decisis is “the preferred course,” and noted that though the Brulotte rule may prevent some parties from entering into deals they desire, parties can often find ways to achieve similar outcomes.

Justice Alito, joined by Chief Justice Roberts and Justice Thomas, dissented, arguing that Brulotte is a “baseless and damaging precedent” that interferes with the ability of parties to negotiate licensing agreements that reflect the true value of a patent. More specifically:

“There are . . . good reasons why parties sometimes prefer post-expiration royalties over upfront fees, and why such arrangements have pro-competitive effects. Patent holders and licensees are often unsure whether a patented idea will yield significant economic value, and it often takes years to monetize an innovation. In those circumstances, deferred royalty agreements are economically efficient. They encourage innovators, like universities, hospitals, and other institutions, to invest in research that might not yield marketable products until decades down the line. . . . And they allow producers to hedge their bets and develop more products by spreading licensing fees over longer periods. . . . By prohibiting these arrangements, Brulotte erects an obstacle to efficient patent use. In patent law and other areas, we have abandoned per se rules with similarly disruptive effects. . . . [T]he need to avoid Brulotte is an economic inefficiency in itself. . . . And the suggested alternatives do not provide the same benefits as post-expiration royalty agreements. . . . The sort of agreements that Brulotte prohibits would allow licensees to spread their costs, while also allowing patent holders to capitalize on slow-developing inventions.”

Furthermore, the Supreme Court was willing to overturn a nearly century-old antitrust precedent that absolutely barred resale price maintenance in the Leegin case, despite the fact that the precedent was extremely well known (much better known than the Brulotte rule) and had prompted a vast array of contractual workarounds. Given the seemingly greater weight of the Leegin precedent, why was stare decisis set aside in Leegin, but not in Kimble? The Kimble majority’s argument that stare decisis should weigh more heavily in patent than in antitrust because, unlike the antitrust laws, “the patent laws do not turn over exceptional law-shaping authority to the courts,” is unconvincing. As the dissent explains:

“[T]his distinction is unwarranted. We have been more willing to reexamine antitrust precedents because they have attributes of common-law decisions. I see no reason why the same approach should not apply where the precedent at issue, while purporting to apply a statute, is actually based on policy concerns. Indeed, we should be even more willing to reconsider such a precedent because the role implicitly assigned to the federal courts under the Sherman [Antitrust] Act has no parallel in Patent Act cases.”

Stare decisis undoubtedly promotes predictability and the rule of law and, relatedly, institutional stability and efficiency – considerations that go to the costs of administering the legal system and of formulating private conduct in light of prior judicial precedents. The cost-based efficiency considerations that support applying stare decisis to any particular rule must, however, be weighed against the net economic benefits associated with abandonment of that rule. The dissent in Kimble did this, but the majority opinion regrettably did not.

In sum, let us hope that in the future the Court keeps in mind its prior advice, cited in Justice Alito’s dissent, that “stare decisis is not an ‘inexorable command’,” and that “[r]evisiting precedent is particularly appropriate where . . . a departure would not upset expectations, the precedent consists of a judge-made rule . . . , and experience has pointed up the precedent’s shortcomings.”

If you haven’t been following the ongoing developments emerging from the demise of Grooveshark, the story has only gotten more interesting. As the RIAA and major record labels have struggled to shut down infringing content on Grooveshark’s site (and now its copycats), groups like EFF would have us believe that the entire Internet was at stake — even in the face of a fairly marginal victory by the recording industry. In the most recent episode, the issuance of a TRO against CloudFlare — a CDN service provider for the copycat versions of Grooveshark — has sparked much controversy. Ironically for CloudFlare, however, its efforts to evade compliance with the TRO may well have opened it up to far more significant infringement liability.

In response to Grooveshark’s shutdown in April, copycat sites began springing up. Initially, the record labels played a game of whac-a-mole as the copycats hopped from server to server within the United States. Ultimately the copycats settled on grooveshark.li, using a host and registrar outside of the country, as well as anonymized services that made direct action against the actual parties next to impossible. Instead of continuing the futile chase, the plaintiffs decided to address the problem more strategically.

High volume web sites like Grooveshark frequently depend upon third party providers to optimize their media streaming and related needs. In this case, the copycats relied upon the services of CloudFlare to provide DNS hosting and a content delivery network (“CDN”). Failing to thwart Grooveshark through direct action alone, the plaintiffs sought and were granted a TRO against certain third parties, eventually served on CloudFlare, hoping to staunch the flow of infringing content by temporarily enjoining the ancillary activities that enabled the pirates to continue operations.

CloudFlare refused to comply with the TRO, claiming the TRO didn’t apply to it (for reasons discussed below). The court disagreed, however, and found that CloudFlare was, in fact, bound by the TRO.

Unsurprisingly, the copyright scolds came out strongly against the TRO and its application to CloudFlare, claiming that

Copyright holders should not be allowed to blanket infrastructure companies with blocking requests, co-opting them into becoming private trademark and copyright police.

Devlin Hartline wrote an excellent analysis of the court’s decision that the TRO was properly applied to CloudFlare, concluding that it was neither improper nor problematic. In sum, as Hartline discusses, the court found that CloudFlare was indeed engaged in “active concert and participation” and was, therefore, properly subject to a TRO under FRCP 65 that would prevent it from further enabling the copycats to run their service.

Hartline’s analysis is spot-on, but we think it important to clarify and amplify his analysis in a way that, we believe, actually provides insight into a much larger problem for CloudFlare.

As Hartline states,

This TRO wasn’t about the “world at large,” and it wasn’t about turning the companies that provide internet infrastructure into the “trademark and copyright police.” It was about CloudFlare knowingly helping the enjoined defendants to continue violating the plaintiffs’ intellectual property rights.

Importantly, the issuance of the TRO turned in part on whether the plaintiffs were likely to succeed on the merits — which is to say that the copycats could in fact be liable for copyright infringement. Further, the initial TRO became a preliminary injunction before the final TRO hearing because the copycats failed to show up to defend themselves. Thus, CloudFlare was potentially exposing itself to a claim of contributory infringement, possibly from the time it was notified of the infringing activity by the RIAA. This is so because a claim of contributory liability would require that CloudFlare “knowingly” contributed to the infringement. Here there was actual knowledge upon issuance of the TRO (if not before).

However, had CloudFlare gone along with the proceedings and complied with the court’s order in good faith, § 512 of the Digital Millennium Copyright Act (DMCA) would have provided a safe harbor. Given CloudFlare’s actual behavior, though, the company now has a lot more to fear than a mere TRO.

Although we don’t have the full technical details of how CloudFlare’s service operates, we can make some fair assumptions. Most importantly, in order to optimize the content it serves, a CDN would necessarily have to store that content at some point as part of an optimizing cache scheme. Under the terms of the DMCA, an online service provider (OSP) that engages in caching of online content will be immune from liability, subject to certain conditions. The most important condition relevant here is that, in order to qualify for the safe harbor, the OSP must “expeditiously [] remove, or disable access to, the material that is claimed to be infringing upon notification of claimed infringement[.]”
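
To make the mechanism concrete, here is a deliberately simplified sketch of the caching behavior described above. The class and method names are invented for illustration, and nothing here is meant to describe CloudFlare’s actual architecture; the point is simply that a caching CDN necessarily stores copies of content and needs some way to “disable access” to specific items on notice if it wants to stay inside the DMCA’s caching safe harbor.

```python
# Highly simplified, hypothetical model of a CDN edge cache and the
# "remove or disable access" obligation discussed above. Illustrative only.

class EdgeCache:
    def __init__(self):
        self._store = {}        # url -> cached copy of origin content
        self._disabled = set()  # urls disabled after an infringement notice

    def serve(self, url, fetch_from_origin):
        """Serve from cache, copying from the origin server on a miss."""
        if url in self._disabled:
            raise PermissionError(f"access disabled: {url}")
        if url not in self._store:          # cache miss: a copy is stored here
            self._store[url] = fetch_from_origin(url)
        return self._store[url]

    def disable_access(self, url):
        """Roughly what 'expeditiously remove, or disable access to' implies."""
        self._disabled.add(url)
        self._store.pop(url, None)
```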

Here, not only had CloudFlare been informed by the plaintiffs that it was storing infringing content, but a district court had gone so far as to grant a TRO against CloudFlare’s serving of said content. It seems plausible to view CloudFlare as acting outside the scope of the DMCA safe harbor once it refused to disable access to the infringing content after the plaintiffs contacted it, and certainly once the TRO was deemed to apply to it.

To underscore this point, CloudFlare’s arguments during the TRO proceedings essentially admitted to knowledge that infringing material was flowing through its CDN. CloudFlare focused its defense on the fact that it was not an active participant in the infringing activity, but was merely a passive network through which the copycats’ content was flowing. Moreover, CloudFlare argued that

Even if [it]—and every company in the world that provides similar services—took proactive steps to identify and block the Defendants, the website would remain up and running at its current domain name.

But while this argument may make some logical sense from the perspective of a party resisting an injunction, it amounts to a very big admission in terms of a possible infringement case — particularly given CloudFlare’s obstinance in refusing to help the plaintiffs shut down the infringing sites.

As noted above, CloudFlare had an affirmative duty to at least suspend access to infringing material once it was aware of the infringement (and, of course, even more so once it received the TRO). Instead, CloudFlare relied upon its “impossibility” argument against complying with the TRO, based on the claim that enjoining CloudFlare would be futile in thwarting the infringement of others. CloudFlare does appear to have since complied with the TRO (which is now a preliminary injunction), but that compliance does not change a very crucial fact: knowledge of the infringement on CloudFlare’s part existed before the preliminary injunction took effect, while CloudFlare resisted the initial TRO as well as the RIAA’s efforts to secure compliance.

Phrased another way, CloudFlare became an infringer by virtue of having cached copyrighted content and been given notice of that content. However, in its view, merely removing CloudFlare’s storage of that copyrighted content would have done nothing to prevent other networks from also storing the copyrighted content, and therefore it should not be enjoined from its infringing behavior. This essentially amounts to an admission of knowledge of infringing content being stored in its network.

It would be hard to believe that CloudFlare’s counsel failed to advise it to consider the contributory infringement issues that could arise from its conduct prior to and during the TRO proceedings. Thus CloudFlare’s position is somewhat perplexing, particularly once the case became a TRO proceeding. CloudFlare could perhaps have made technical arguments against the TRO in an attempt to demonstrate to its customers that it didn’t automatically shut down services at the behest of the RIAA. It could have done this in good faith, and without the full-throated “impossibility” argument that could very plausibly draw it into infringement litigation. But whatever CloudFlare thought it was gaining by taking a “moral” stance on behalf of OSPs everywhere with its “impossibility” argument, it may well have ended up costing itself much more.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are The Good, The Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and the Commission this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not be borne by search engines, social networks, and advertisers, but by consumers, who ultimately will have to either find a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced not only with economic efficiency, but also with the right to free expression that most European countries recognize (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem here. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually enshrine a form of protectionism in the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright among the reasons why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to affect these important US companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[when] a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added).
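
For readers less familiar with how that structural presumption is typically operationalized, the Horizontal Merger Guidelines frame it in terms of the Herfindahl-Hirschman Index (HHI): a market with a post-merger HHI above 2,500 is treated as “highly concentrated,” and a merger that raises the HHI by more than 200 points in such a market is presumed likely to enhance market power. A minimal sketch, using purely hypothetical market shares:

```python
# Hedged illustration of the HHI-based structural presumption (2010 Horizontal
# Merger Guidelines, Sec. 5.3). The market shares below are hypothetical.

def hhi(shares):
    """HHI: sum of squared market shares, with shares in percentage points."""
    return sum(s ** 2 for s in shares)

def structural_presumption(pre_shares, merging):
    """Screen: post-merger HHI above 2500 and an increase of more than 200."""
    post = [s for i, s in enumerate(pre_shares) if i not in merging]
    post.append(sum(pre_shares[i] for i in merging))
    delta = hhi(post) - hhi(pre_shares)
    return hhi(post) > 2500 and delta > 200, hhi(post), delta

# Hypothetical four-to-three merger: firms at 35%, 30%, 20%, and 15%,
# with the two smallest combining.
triggered, post_hhi, delta = structural_presumption([35, 30, 20, 15], {2, 3})
print(triggered, post_hhi, delta)   # True 3350 600 -> presumption applies
```

The dispute described below is not over this arithmetic, but over how much weight such a screen should carry absent further evidence of competitive effects.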

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.:

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. But generally (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest greater need for a thorough investigation), but they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is — how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.

Presumably that’s exactly what the ABA’s upcoming program will do, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

Moderator:

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP

Speakers:

  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulating rates for interconnection and extending the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC to not only apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but to also start a rulemaking to require edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. In fact, it is a plausible scenario that overregulating data collection online could lead to greater use of paywalls to access content. This may be a greater threat to Internet openness than anything ISPs have done.

And then yesterday, the first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to really avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication. Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

In sum, while neither of these actions was contemplated by the FCC (they claim), such abstract rules are going to lead to random complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service as Title II has had the (supposedly) unintended consequence of sweeping in far more (both in scope of application and rules) than was supposedly bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.

The TCPA is an Antiquated Law

The Telephone Consumer Protection Act (“TCPA”) is back in the news following a letter sent to PayPal from the Enforcement Bureau of the FCC.  At issue are amendments that PayPal intends to introduce into its end user agreement. Specifically, PayPal is planning on including an automated call and text message system with which it would reach out to its users to inform them of account updates, perform quality assurance checks, and provide promotional offers.

Enter the TCPA, which, as the Enforcement Bureau noted in its letter, has been used for over twenty years by the FCC to “protect consumers from harassing, intrusive, and unwanted calls and text messages.” The FCC has two primary concerns in its warning to PayPal. First, there was no formal agreement between PayPal and its users that would satisfy the FCC’s rules and allow PayPal to use an automated call system. And, perhaps most importantly, PayPal is not entitled to simply attach an “automated calls” clause to its user agreement as a condition of providing the PayPal service (as it clearly intends to do with its amendments).

There are a number of things wrong with the TCPA and the FCC’s decision to enforce its provisions against PayPal in the current instance. The FCC has the power to provide for some limited exemptions to the TCPA’s prohibition on automated dialing systems. Most applicable here, the FCC has the discretion to provide exemptions where calls to cell phone users won’t result in those users being billed for the calls. Although most consumers still buy plans that allot minutes for their monthly use, the practical reality for most cell phone users is that they no longer need to count minutes for every call. Users typically have a large number of minutes on their plans, and certainly many of those minutes can go unused. It seems that the progression of technology and the economics of cellphones over the last twenty-five years should warrant a Congressional revisit to the underlying justifications of at least this prohibition in the TCPA.

However, exceptions aside, there remains a much larger issue with the TCPA, one that is also rooted in the outdated technological assumptions underlying the law. The TCPA was meant to prevent dedicated telemarketing companies from using the latest in “automated dialing” technology circa 1991 from harassing people. It was not intended to stymie legitimate businesses from experimenting with more efficient methods of contacting their own customers.

The text of the law underscores its technological antiquity: according to the TCPA, an “automatic telephone dialing system” means equipment which “has the capacity” to sequentially dial random numbers. This is to say, the equipment contemplated when the law was written was software-enabled phones purpose-built to enable telemarketing firms to make blanket cold calls to every number in a given area code. The language clearly doesn’t contemplate phones connected to general purpose computing resources, as most phone systems are today.

Modern phone systems, connected to intelligent computer backends, are designed to flexibly reach out to hundreds or thousands of existing customers at a time, and in a way that efficiently enhances the customer’s experience with the company. Technically, yes, these systems are capable of auto-dialing a large number of random recipients; however, when a company like PayPal uses this technology, its purpose is clearly different from that of the equivalent of spammers on the phone system. The absence of any required nexus between an intent to random-dial and a particular harm experienced by an end user is a major hole in the TCPA. Particularly in this case, it seems fairly absurd that the TCPA could be used to prevent PayPal from interacting with its own customers.

Further, there is a lot at stake for those accused of violating the TCPA. In the PayPal warning letter, the FCC noted that it is empowered to levy a $16,000 fine per call or text message that it finds violates the terms of the TCPA. That’s bad, but it’s nowhere near as bad as it could get. The TCPA also contains a private right of action that was meant to encourage individual consumers to take telemarketers to small claims court in their local state.  Each individual consumer is entitled to receive provable damages or statutory damages of $500.00, whichever is greater. If willfulness can be proven, the damages are trebled, which in effect means that most individual plaintiffs in the know will plead willfulness, and wait for either a settlement conference or trial to sort the particulars out.

However, over the years a cottage industry has built up around class action lawyers aggregating “harmed” plaintiffs who had received unwanted automatic calls or texts, and forcing settlements in the tens of millions of dollars. The math is pretty simple. A large company with lots of customers may be tempted to use an automatic system to send out account information and offer alerts. If it sends out five hundred thousand auto calls or texts, that could result in “damages” in the amount of $250M in a class action suit. A settlement for five or ten million dollars is a deal by comparison. For instance, in 2013 Bank of America entered into a $32M settlement for texts and calls made between 2007 and 2013 to 7.7 million people.  If they had gone to trial and lost, the damages could have been as much as $3.8B!
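
The arithmetic behind these numbers is simple, which is part of what makes the class action math so attractive to plaintiffs. A minimal sketch (the $500 statutory figure and trebling rule are from the TCPA as described above; the call volumes are the hypothetical and the reported Bank of America class size from the text):

```python
# Back-of-the-envelope TCPA exposure: $500 in statutory damages per call or
# text, trebled to $1,500 if willfulness is proven, as described above.

STATUTORY_DAMAGES = 500
TREBLE = 3

def tcpa_exposure(violations, willful=False):
    """Total statutory exposure for a given number of offending calls/texts."""
    per_violation = STATUTORY_DAMAGES * (TREBLE if willful else 1)
    return violations * per_violation

print(tcpa_exposure(500_000))      # 250000000  -> the hypothetical $250M above
print(tcpa_exposure(7_700_000))    # 3850000000 -> roughly the $3.8B figure
```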

The purpose of the TCPA was to prevent abusive telemarketers from harassing people, not to defeat the use of an entire technology that can be employed to increase efficiency for businesses and lower costs for consumers. The per call penalties associated with violating the TCPA, along with imprecise and antiquated language in the law, provide a major incentive to use the legal system to punish well-meaning companies that are just operating their non-telemarketing businesses in a reasonable manner. It’s time to seriously revise this law in light of the changes in technology over the past twenty-five years.

During the recent debate over whether to grant the Obama Administration “trade promotion authority” (TPA or fast track) to enter into major international trade agreements (such as the Trans-Pacific Partnership, or TPP), little attention has been directed to the problem of remaining anticompetitive governmental regulatory obstacles to liberalized trade and free markets.  Those remaining obstacles, which merit far more public attention, are highlighted in an article coauthored by Shanker Singham and me on competition policy and international trade distortions.

As our article explains, international trade agreements simply do not reach a variety of anticompetitive welfare-reducing government measures that create de facto trade barriers by favoring domestic interests over foreign competitors.  Moreover, many of these restraints are not in place to discriminate against foreign entities, but rather exist to promote certain favored firms. We dub these restrictions “anticompetitive market distortions” or “ACMDs,” in that they involve government actions that empower certain private interests to obtain or retain artificial competitive advantages over their rivals, be they foreign or domestic.  ACMDs are often a manifestation of cronyism, by which politically-connected enterprises successfully pressure government to shield them from effective competition, to the detriment of overall economic growth and welfare.  As we emphasize in our article, existing international trade rules have not been able to reach ACMDs, which include: (1) governmental restraints that distort markets and lessen competition; and (2) anticompetitive private arrangements that are backed by government actions, have substantial effects on trade outside the jurisdiction that imposes the restrictions, and are not readily susceptible to domestic competition law challenge.  Among the most pernicious ACMDs are those that artificially alter the cost-base as between competing firms. Such cost changes will have large and immediate effects on market shares, and therefore on international trade flows.

Likewise, with the growing internationalization of commerce, ACMDs not only diminish domestic consumer welfare – they increasingly may have a harmful effect on foreign enterprises that seek to do business in the country imposing the restraint.  The home nations of the affected foreign enterprises, moreover, may as a practical matter find it not feasible to apply their competition laws extraterritorially to curb the restraint, given issues of jurisdictional reach and comity (particularly if the restraint flies under the colors of domestic law).  Because ACMDs also have not been constrained by international trade liberalization initiatives, they pose a serious challenge to global welfare enhancement by curtailing potential trade and investment opportunities.

Interest group politics and associated rent-seeking by well-organized private actors are endemic to modern economic life, guaranteeing that ACMDs will not easily be dismantled.  What is to be done, then, to curb ACMDs?

As a first step, Shanker Singham and I have proposed the development of a metric to estimate the net welfare costs of ACMDs.  Such a metric could help strengthen the hand of international organizations (including the International Competition Network, the World Bank, and the OECD) – and of reform-minded public officials – in building the case for dismantling these restraints, or (as a last resort) replacing them with less costly means for benefiting favored constituencies.  (Singham, two other coauthors, and I have developed a draft paper that delineates a specific metric, which we hope will be suitable for public release in the near future.)

Furthermore, free market-oriented think tanks can also be helpful by highlighting the harm special interest governmental restraints impose on the economy and on economic freedom.  In that regard, the Heritage Foundation’s excellent work in opposing cronyism deserves special mention.

Working to eliminate ACMDs and thereby promoting economic liberty is an arduous long-term task – one that will only succeed in increments, one battle at a time (the current principled effort to eliminate the Ex-Im Bank, strongly supported by the Heritage Foundation, is one such example).  Nevertheless, it is very much worth the candle.

In my article published today in The Daily Signal, I delve into the difficulties of curbing Internet-related copyright infringement.  The key points are summarized below.

U.S. industries that rely on copyright protection (such as motion pictures, music, television, visual arts, and software) are threatened by the unauthorized Internet downloading of copyrighted writings, designs, artwork, music and films. U.S. policymakers must decide how best to protect the creators of copyrighted works without harming growth and innovation in Internet services or vital protections for free speech.

The Internet allows consumers to alter and immediately transmit perfect digital copies of copyrighted works around the world and has generated services designed to provide these tools. Those tools include, for example, peer-to-peer file-sharing services and mobile apps designed to foster infringement. Many websites that provide pirated content—including, for example, online video-streaming sites—are located outside the United States. Such piracy costs the U.S. economy billions of dollars in losses per year—including reduced income for creators and other participants in copyright-intensive industries.

Curtailing online infringement will require a combination of litigation, technology, enhanced private-sector initiatives, public education, and continuing development of readily accessible and legally available content offerings. As the Internet continues to develop, the best approach to protecting copyright in the online environment is to rely on existing legal tools, enhanced cooperation among Internet stakeholders and business innovations that lessen incentives to infringe.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The question of how to regulate privacy, and what role competition authorities should play in doing so, is only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to give greater consideration to privacy concerns during merger review, and have even been encouraged to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.
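
To see why aggregating across quality dimensions is so slippery, consider a deliberately artificial example in which the same design change is an improvement for one group of consumers and a degradation for another. All of the numbers and weights below are invented for illustration:

```python
# Hypothetical sketch of the multi-dimensional quality problem described above.
# The same product change raises or lowers quality-adjusted value depending on
# how a given consumer weights the dimensions. All figures are invented.

products = {
    "old_watch": {"reliability": 9.0, "aesthetics": 6.0, "price": 100.0},
    "new_watch": {"reliability": 7.0, "aesthetics": 9.0, "price": 100.0},
}

def quality_adjusted_value(product, weights):
    quality = sum(product[dim] * w for dim, w in weights.items())
    return quality / product["price"]

function_lovers = {"reliability": 0.8, "aesthetics": 0.2}
style_lovers = {"reliability": 0.2, "aesthetics": 0.8}

for name, p in products.items():
    print(name,
          round(quality_adjusted_value(p, function_lovers), 3),
          round(quality_adjusted_value(p, style_lovers), 3))
# old_watch 0.084 0.066
# new_watch 0.074 0.086  -> better for style_lovers, worse for function_lovers
```

The change is a quality improvement for style-minded buyers and a quality degradation for function-minded buyers; an antitrust analysis has to weigh one group’s gain against the other’s loss, with no obvious common metric.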

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination to one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on metrics would want to serve only those who can pay more by charging them a lower price, while charging those who cannot afford it a higher one. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.
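
A toy numerical example (all figures invented) helps make the output-expansion point concrete: with a single profit-maximizing uniform price, only high-willingness-to-pay consumers are served; with data-enabled, segment-specific prices, the previously priced-out group is served as well, and measured consumer surplus can rise rather than fall.

```python
# Hypothetical two-segment market with zero marginal cost, illustrating how
# price discrimination can expand output. All numbers are invented.

segments = {
    "high_wtp": {"size": 100, "wtp": 10.0},   # willingness to pay per unit
    "low_wtp":  {"size": 100, "wtp": 4.0},
}

def uniform_price_outcome(price):
    buyers = sum(s["size"] for s in segments.values() if s["wtp"] >= price)
    surplus = sum(s["size"] * (s["wtp"] - price)
                  for s in segments.values() if s["wtp"] >= price)
    return buyers, price * buyers, surplus    # (units sold, revenue, consumer surplus)

def discriminatory_outcome(prices):
    buyers = revenue = surplus = 0
    for name, s in segments.items():
        p = prices[name]                      # segment-specific price, e.g. inferred from data
        if s["wtp"] >= p:
            buyers += s["size"]
            revenue += p * s["size"]
            surplus += s["size"] * (s["wtp"] - p)
    return buyers, revenue, surplus

# With these numbers the profit-maximizing uniform price is $10, so only the
# high-WTP group is served; segment-specific pricing serves both groups.
print(uniform_price_outcome(10.0))                                  # (100, 1000.0, 0.0)
print(discriminatory_outcome({"high_wtp": 9.0, "low_wtp": 3.5}))    # (200, 1250.0, 150.0)
```

Whether real-world personalized pricing looks like this is, of course, an empirical question of magnitudes, which is precisely the point made in the next paragraph.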

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.