The most welfare-inimical restrictions on competition stem from governmental action, and the Organization for Economic Cooperation and Development’s newly promulgated “Competition Assessment Toolkit, Volume 3: Operational Manual” (“Toolkit 3,” approved by the OECD in late June 2015) provides useful additional guidance on how to evaluate and tackle such harmful market distortions. Toolkit 3 is a very helpful supplement to the first and second volumes of the Competition Assessment Toolkit. Commendably, Toolkit 3 presents itself as a tool that can be employed by well-intentioned governments generally, rather than merely as a manual for advocacy by national competition agencies (which may lack the political clout to sell reforms to other government bureaucracies or to legislators). It is a succinct, not highly technical document that can be used by a wide range of governments, and applied flexibly, in light of their resource constraints and institutional capacities. Let’s briefly survey Toolkit 3’s key provisions.

Toolkit 3 begins with a “competition checklist” that states that a competition assessment should be undertaken if a regulatory or legislative proposal has any one of four effects: (1) it limits the number or range of suppliers; (2) it limits the ability of suppliers to compete; (3) it reduces the incentive of suppliers to compete; or (4) it limits the choices and information available to consumers. The Toolkit then sets forth basic guidance on competition assessments in seven relatively short, clearly written chapters.

Chapter one begins by explaining that Toolkit 3 “shows how to assess laws, regulations, and policies for their competition effects, and how to revise regulations or policies to make them more procompetitive.” To that end, the chapter introduces the concept of market studies and sectoral reviews, and outlines a six-part process for carrying out competition assessments: (1) identify policies to assess; (2) apply the competition checklist (see above); (3) identify alternative options for achieving a policy objective; (4) select the best option; (5) implement the best option; and (6) review the impacts of an option once it has been implemented.

Chapter two provides general guidance on the selection of public policies for examination, with particular attention to the identification of sectors of the economy that have the greatest restraints on competition and a major impact on economic output and efficiency.

Chapter three focuses on competition screening through use of threshold questions embodied in the four-part competition checklist. It also provides examples of the sorts of regulations that fall into each category covered by the checklist.

Chapter four sets forth guidance for the examination of potential restrictions that have been flagged for evaluation by the checklist. It provides indicators for deciding whether or not “in-depth analysis” is required, delineates specific considerations that should be brought to bear in conducting an analysis, and provides a detailed example of the steps to be followed in assessing a hypothetical drug patent law (beginning with a preliminary assessment, followed by a detailed analysis, and ending with key findings).

Chapter five centers on identifying the policy options that allow a policymaker to achieve a desired objective with a minimum distortion of competition. It discusses: (1) identifying the purpose of a policy; (2) identifying the competition problems caused by the policy under examination and whether it is necessary to achieve the desired objective; (3) evaluating the technical features of the subject matter being regulated; (4) accounting for features of the broader regulatory environment that have an effect on the market in question, in order to develop alternatives; (5) understanding changes in the business or market environment that have occurred since the last policy implementation; and (6) identifying specific techniques that allow an objective to be achieved with a minimum distortion of competition. The chapter closes by briefly describing various policy approaches for achieving a hypothetical desired reform objective (promotion of generic drug competition).

Chapter six provides guidance on comparing the policy options that have been identified. After summarizing background concepts, it discusses qualitative analysis, quantitative analysis, and the measurement of costs and benefits. The cost-benefit section is particularly thorough, delving into data gathering, techniques of measurement, estimates of values, adjustments to values, and accounting for risk and uncertainty. These tools are then applied to a specific hypothetical involving pharmaceutical regulation, featuring an assessment of the advantages and disadvantages of alternative options.
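
For readers who want a feel for what that exercise looks like in practice, here is a minimal sketch, with entirely hypothetical policy options and made-up numbers, of the sort of discounted cost-benefit comparison the chapter contemplates:

```python
# Illustrative only: comparing two hypothetical policy options by net present
# value, the basic exercise described in Toolkit 3's cost-benefit chapter.

def npv(flows, rate):
    """Discount a list of annual net benefits (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Option A: cheap to implement, modest annual benefits.
option_a = [-1_000_000] + [300_000] * 10   # year-0 cost, then 10 years of benefits
# Option B: costly to implement, larger annual benefits.
option_b = [-2_500_000] + [500_000] * 10

for name, flows in [("A", option_a), ("B", option_b)]:
    print(f"Option {name}: NPV at 5% = {npv(flows, 0.05):,.0f}")
```

The particular figures matter less than the discipline: candidate options are put on a common present-value footing (with appropriate adjustments for risk and uncertainty) before one is recommended.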

Chapter seven outlines the steps that should be taken in submitting a recommendation for government action. Those involve: (1) selecting the best policy option; (2) presenting the recommendation to a decision-maker; (3) drafting a regulation that is needed to effectuate the desired policy option; (4) obtaining final approval; and (5) implementing the regulation. The chapter closes by applying this framework to hypothetical regulations.

Chapter eight discusses the performance of ex post evaluations of competition assessments, in order to determine whether the option chosen following the review process had the anticipated effects and was most appropriate. Four examples of ex post evaluations are summarized.

Toolkit 3 closes with a brief annex that describes mathematically and graphically the consumer benefits that arise when moving from a restrictive market equilibrium to a competitive equilibrium.
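
The annex itself is not reproduced here, but the core result it describes is the standard one: assuming a linear demand curve, the consumer gain from moving from a restricted equilibrium (price P_r, quantity Q_r) to the competitive equilibrium (P_c, Q_c) is the familiar rectangle-plus-triangle:

```latex
% Gain in consumer surplus when the restriction is removed:
% a rectangle (savings on units already purchased) plus a triangle
% (surplus on the additional units; linear demand assumed).
\Delta CS = (P_r - P_c)\,Q_r + \tfrac{1}{2}\,(P_r - P_c)\,(Q_c - Q_r)
```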

In sum, the release of Toolkit 3 is best seen as one more small step forward in the long-term fight against state-managed regulatory capitalism and cronyism, on a par with increased attention to advocacy initiatives within the International Competition Network and growing World Bank efforts to highlight the welfare harm due to governmental regulatory impediments. Although anticompetitive government market distortions will remain a huge problem for the foreseeable future, at least international organizations are starting to acknowledge their severity and to provide conceptual tools for combatting them. Now it is up to free market proponents to work in the trenches to secure the political changes needed to bring such distortions – and their rent-seeking advocates – to heel. This is a long-term fight, but well worth the candle.

Today, in Michigan v. EPA, a five-Justice Supreme Court majority (Justice Antonin Scalia, writing for the Court, joined by Chief Justice John Roberts and Justices Anthony Kennedy, Clarence Thomas, and Samuel Alito, with Justice Thomas also issuing a separate concurrence) held that the Clean Air Act requires the Environmental Protection Agency (EPA) to consider costs, including the cost of compliance, when deciding whether to regulate hazardous air pollutants emitted by power plants.  The Clean Air Act, 42 U.S.C. §7412, authorizes the EPA to regulate emissions of hazardous air pollutants from certain stationary sources, such as refineries and factories.  The EPA may, however, regulate power plants under this program only if it concludes that such regulation is “appropriate and necessary” after studying hazards to public health posed by power-plant emissions, 42 U.S.C. §7412(n)(1)(A).  The EPA determined that it was “appropriate and necessary” to regulate oil- and coal-fired power plants, because the plants’ emissions pose risks to public health and the environment and because controls capable of reducing these emissions were available.  (The EPA contended that its regulations would have ancillary benefits (including cutting power plants’ emissions of particulate matter and sulfur dioxide) not covered by the hazardous air pollutants program, but conceded that its estimate of benefits “played no role” in its finding that regulation was “appropriate and necessary.”)  The EPA refused to consider costs when deciding to regulate, even though it estimated that the cost of its regulations to power plants would be $9.6 billion a year, while the quantifiable benefits from the resulting reduction in hazardous-air-pollutant emissions would be only $4 to $6 million a year.  Twenty-three states challenged the EPA’s refusal to consider cost, but the U.S. Court of Appeals for the D.C. Circuit upheld the agency’s decision not to consider costs at the outset.  In reversing the D.C. Circuit, the Court stressed that the EPA strayed well beyond the bounds of reasonable interpretation in concluding that cost is not a factor relevant to the appropriateness of regulating power plants.  Read naturally against the backdrop of established administrative law, the phrase “appropriate and necessary” plainly encompasses cost, according to the Court.

In a concurring opinion, Justice Thomas opined that this case “raises serious questions about the constitutionality of our broader practice of deferring to agency interpretations of federal statutes.”  Justice Elena Kagan, joined by Justices Ruth Bader Ginsburg, Stephen Breyer, and Sonia Sotomayor, dissented, reasoning that EPA “acted well within its authority in declining to consider costs at the [beginning] . . . of the regulatory process given that it would do so in every round thereafter.”

Although the Supreme Court’s holding merits praise, it is inherently limited in scope, and should not be expected to significantly constrain regulatory overreach, whether by the EPA or by other agencies.  First, in remanding the case, the Court did not opine on the precise manner in which costs and benefits should be evaluated, potentially leaving the EPA broad latitude to try to reach its desired regulatory result with a bit of “cost-benefit” wordsmithing.  Such a result would not be surprising, given that “[t]he U.S. Government has a strong tendency to overregulate.”  More specifically, administrative agencies such as the EPA, whose staffs are dominated by regulation-minded permanent bureaucrats, will have every incentive to skew judicially required “cost assessments” to justify their actions – based on, for example, “false assumptions and linkages, black-box computer models, secretive collusion with activist groups, outright deception, and supposedly ‘scientific’ reports whose shady data and methodologies the agency refuses to share with industries, citizens or even Congress.”  Since, as a practical matter, appellate courts have neither the resources nor the capacity to sort out legitimate from illegitimate agency claims that regulatory programs truly meet cost-benefit standards, it would be naïve to believe that the Court’s majority opinion will be able to do much to rein in the federal regulatory behemoth.

What, then, is the solution?  The concern that federal administrative agencies are being allowed to arrogate to themselves inherently executive and judicial functions, a theme previously stressed by Justice Thomas, has not led other justices to call for wide-scale judicial nullification or limitation of expansive agency regulatory findings.  Absent an unexpected Executive Branch epiphany, then, the best bet for reform lies primarily in congressional action.

What sort of congressional action?  The Heritage Foundation has described actions needed to help stem the tide of overregulation:  (1) require congressional approval of new major regulations promulgated by agencies; (2) establish a sunset date for federal regulations; (3) subject “independent” agencies to executive branch regulatory review; and (4) develop a congressional regulatory analysis capability.  Legislative proposals such as the REINS Act (Regulations from the Executive in Need of Scrutiny Act of 2015) would meet the first objective, while other discrete measures could advance the other three goals.  Public choice considerations suggest that these reforms will not be easily achieved (beneficiaries of the intrusive regulatory status quo may be expected to vigorously oppose reform), but they nevertheless should be pursued posthaste.

I am of two minds when it comes to the announcement today that the NYC taxi commission will permit companies like Uber and Lyft to update the mobile apps that serve as the front end for their ridesharing platforms whenever the companies wish.

My first instinct is to breathe a sigh of relief that even the NYC taxi commission eventually rejected the patently ridiculous notion that an international technology platform should have its update schedule in any way dictated by the parochial interests of a local transportation fiefdom.

My second instinct is to grit my teeth in frustration that, in the face of the overwhelming transformation going on in the world today because of technology platforms offered by the likes of Uber and Lyft, anyone would even think to ask the question “should I ask the NYC taxi commission whether or not I can update the app on my users’ smartphones?”

That said, it’s important to take the world as you find it, not as you wish it to be, and so I want to highlight some items from the decision that deserve approbation.

Meera Joshi, the NYC Taxi Commission chairperson and CEO, had this to say of the proposed rule:

We re-stylized the rules so they’re tech agnostic because our point is not to go after one particular technology – things change quicker than we do – it’s to provide baseline consumer protection and driver safety requirements[.]

I love that the commission gets this. The real power in the technology that drives the sharing economy is that it can change quickly in response to consumer demand. Further, regulators can offer value to these markets only when they understand that the nature of work and services is changing, and that their core justification as consumer protection agencies necessarily requires them to adjust when and how they intervene.

Although there is always more work to be done to make room for these entrepreneurial platforms (for instance, the NYC rules appear to require that all on-demand drivers – including the soccer mom down the street driving for Lyft – be licensed through the commission), this is generally forward-thinking. I hope that more municipalities across the country take notice, and that the relevant regulators follow suit in repositioning themselves as partners with these innovative companies.

Today, in Horne v. Department of Agriculture, the U.S. Supreme Court held that the Fifth Amendment requires that the Government pay just compensation when it takes personal property, just as when it takes real property, and that the Government cannot make raisin growers relinquish their property without just compensation as a condition of selling their raisins in interstate commerce. This decision represents a major victory for economic liberty, but it is at best the first step in the reining in of anticompetitive cartel-like regulation by government. (See my previous discussion of this matter at Truth on the Market here and a more detailed discussion of today’s decision here.) A capsule summary of the Court’s holding follows.

Most American raisins are grown in California. Under a United States Department of Agriculture Raisin Marketing Order, California raisin growers must give a percentage of their crop to a Raisin Administrative Committee (a government entity largely composed of raisin producers appointed by the Secretary of Agriculture) to sell, allocate, or dispose of, and the government sets the compensation price that growers are paid for these “reserved” raisins. After selling the reserved raisins and deducting expenses, the Committee returns any net proceeds to the growers. The Hornes were assessed a fine of $480,000 plus a $200,000 civil penalty for refusing to set aside raisins for the government in 2002. The Hornes sued in court, arguing that the reserve requirement violated the Fifth Amendment Takings Clause. The Ninth Circuit rejected the Hornes’ claim that this was a per se taking, because personal property is entitled to less protection than real property, and concluded rather that this should be treated as a regulatory taking, such as a government condition on the grant of a land use permit. The Supreme Court reversed, holding that neither the text nor the history of the Takings Clause suggests that appropriation of personal property is different from appropriation of real property. The Court also held that the government may not avoid its categorical duty to pay just compensation by reserving to the property owner a contingent interest in the property. The Court further held that in this case, the government mandate to surrender property as a condition to engage in commerce effects a per se taking, noting that selling raisins in interstate commerce is “not a special governmental benefit that the Government may hold hostage, to be ransomed by the waiver of constitutional protection.” The Court majority determined that the case should not be remanded to the Ninth Circuit to calculate the amount of just compensation, because the government already did so when it fined the Hornes $480,000, the fair market value of the raisins.

The Horne decision is a victory for economic freedom and the right of individuals not to participate in government cartel schemes that harm the public interest. Unfortunately, however, it is a limited one. As the dissent by Justice Sotomayor indicates, “the Government . . . can permissibly achieve its market control goals by imposing a quota without offering raisin producers a way of reaping any return whatsoever on the raisins they cannot sell.” In short, today’s holding turns entirely on the conclusion that the raisin marketing order involves a “physical taking” of raisins. A more straightforward regulatory scheme under which the federal government directly limited production by raisin growers (much as the government did to a small wheat farmer in Wickard v. Filburn) likely would pass constitutional muster under modern Commerce Clause jurisprudence.

Thus, if it is truly interested in benefiting the American public and ferreting out special interest favoritism in agriculture, Congress should give serious consideration to prohibiting far more than production limitations in agricultural marketing orders. More generally, it should consider legislation to bar any regulatory restrictions that have the effect of limiting the freedom of individual farmers to grow and sell as much of their crop as they please. Such a rule would promote general free market competition, to the benefit of American consumers and the American economy.

Today, in Kimble v. Marvel Entertainment, a case involving the technology underlying the Spider-Man Web-Blaster, the Supreme Court invoked stare decisis to uphold an old precedent based on bad economics. In so doing, the Court spun a tangled web of formalism that trapped economic common sense within it, forgetting that, as Spider-Man was warned in 1962, “with great power there must also come – great responsibility.”

In 1990, Stephen Kimble obtained a patent on a toy that allows children (and young-at-heart adults) to role-play as “a spider person” by shooting webs—really, pressurized foam string—“from the palm of [the] hand.” Marvel Entertainment made and sold a “Web-Blaster” toy based on Kimble’s invention, without remunerating him. Kimble sued Marvel for patent infringement in 1997, and the parties settled, with Marvel agreeing to buy Kimble’s patent for a lump sum (roughly a half-million dollars) plus a 3% royalty on future sales, with no end date set for the payment of royalties.

Marvel subsequently sought a declaratory judgment in federal district court confirming that it could stop paying Kimble royalties after the patent’s expiration date. The district court granted relief, the Ninth Circuit Court of Appeals affirmed, and the Supreme Court affirmed the Ninth Circuit. In an opinion by Justice Kagan, joined by Justices Scalia, Kennedy, Ginsburg, Breyer, and Sotomayor, the Court held that a patentee cannot continue to receive royalties for sales made after his patent expires. Invoking stare decisis, the Court reaffirmed Brulotte v. Thys (1964), which held that a patent licensing agreement that provided for the payment of royalties accruing after the patent’s expiration was illegal per se, because it extended the patent monopoly beyond its statutory time period. The Kimble Court stressed that stare decisis is “the preferred course,” and noted that though the Brulotte rule may prevent some parties from entering into deals they desire, parties can often find ways to achieve similar outcomes.
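
The majority’s observation that parties can often find ways to achieve similar outcomes is, at bottom, a present-value point: a royalty stream that would run past expiration can in principle be amortized into a higher rate paid only during the patent term. Here is a quick sketch; the sales figure, discount rate, and term lengths are hypothetical, not drawn from the case:

```python
# Illustrative only: amortizing a post-expiration royalty stream into an
# equivalent higher rate confined to the patent term (same present value).

def pv_royalties(annual_sales, rate, years, discount):
    """Present value of a royalty of `rate` on `annual_sales` paid for `years`."""
    return sum(annual_sales * rate / (1 + discount) ** t for t in range(1, years + 1))

sales, discount = 10_000_000, 0.08
pv_long = pv_royalties(sales, 0.03, 40, discount)       # 3% for 40 years, running past expiration
pv_per_point = pv_royalties(sales, 0.01, 15, discount)  # value of each 1% during a 15-year term
equivalent_rate = 0.01 * pv_long / pv_per_point
print(f"3% for 40 years has the same PV as {equivalent_rate:.2%} for 15 years")
```

As the dissent explains below, though, the existence of an equivalent structure on paper does not mean it serves the parties equally well when an invention’s value is uncertain or slow to materialize.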

Justice Alito, joined by Chief Justice Roberts and Justice Thomas, dissented, arguing that Brulotte is a “baseless and damaging precedent” that interferes with the ability of parties to negotiate licensing agreements that reflect the true value of a patent. More specifically:

“There are . . . good reasons why parties sometimes prefer post-expiration royalties over upfront fees, and why such arrangements have pro-competitive effects. Patent holders and licensees are often unsure whether a patented idea will yield significant economic value, and it often takes years to monetize an innovation. In those circumstances, deferred royalty agreements are economically efficient. They encourage innovators, like universities, hospitals, and other institutions, to invest in research that might not yield marketable products until decades down the line. . . . And they allow producers to hedge their bets and develop more products by spreading licensing fees over longer periods. . . . By prohibiting these arrangements, Brulotte erects an obstacle to efficient patent use. In patent law and other areas, we have abandoned per se rules with similarly disruptive effects. . . . [T]he need to avoid Brulotte is an economic inefficiency in itself. . . . And the suggested alternatives do not provide the same benefits as post-expiration royalty agreements. . . . The sort of agreements that Brulotte prohibits would allow licensees to spread their costs, while also allowing patent holders to capitalize on slow-developing inventions.”

Furthermore, the Supreme Court was willing to overturn a nearly century-old antitrust precedent that absolutely barred resale price maintenance in the Leegin case, despite the fact that the precedent was extremely well known (much better known than the Brulotte rule) and had prompted a vast array of contractual workarounds. Given the seemingly greater weight of the precedent overturned in Leegin, why was stare decisis set aside in Leegin, but not in Kimble? The Kimble majority’s argument that stare decisis should weigh more heavily in patent than in antitrust because, unlike the antitrust laws, “the patent laws do not turn over exceptional law-shaping authority to the courts,” is unconvincing. As the dissent explains:

“[T]his distinction is unwarranted. We have been more willing to reexamine antitrust precedents because they have attributes of common-law decisions. I see no reason why the same approach should not apply where the precedent at issue, while purporting to apply a statute, is actually based on policy concerns. Indeed, we should be even more willing to reconsider such a precedent because the role implicitly assigned to the federal courts under the Sherman [Antitrust] Act has no parallel in Patent Act cases.”

Stare decisis undoubtedly promotes predictability and the rule of law and, relatedly, institutional stability and efficiency – considerations that go to the costs of administering the legal system and of formulating private conduct in light of prior judicial precedents. The cost-based efficiency considerations underlying the application of stare decisis to any particular rule must, however, be weighed against the net economic benefits associated with abandonment of that rule. The dissent in Kimble did this, but the majority opinion regrettably did not.

In sum, let us hope that in the future the Court keeps in mind its prior advice, cited in Justice Alito’s dissent, that “stare decisis is not an ‘inexorable command’,” and that “[r]evisiting precedent is particularly appropriate where . . . a departure would not upset expectations, the precedent consists of a judge-made rule . . . , and experience has pointed up the precedent’s shortcomings.”

If you haven’t been following the ongoing developments emerging from the demise of Grooveshark, the story has only gotten more interesting. As the RIAA and major record labels have struggled to shut down infringing content on Grooveshark’s site (and now its copycats), groups like EFF would have us believe that the entire Internet was at stake — even in the face of a fairly marginal victory by the recording industry. In the most recent episode, the issuance of a TRO against CloudFlare — a CDN service provider for the copycat versions of Grooveshark — has sparked much controversy. Ironically for CloudFlare, however, its efforts to evade compliance with the TRO may well have opened it up to far more significant infringement liability.

In response to Grooveshark’s shutdown in April, copycat sites began springing up. Initially, the record labels played a game of whac-a-mole as the copycats hopped from server to server within the United States. Ultimately the copycats settled on grooveshark.li, using a host and registrar outside of the country, as well as anonymized services that made direct action against the actual parties next to impossible. Instead of continuing the futile chase, the plaintiffs decided to address the problem more strategically.

High volume web sites like Grooveshark frequently depend upon third party providers to optimize their media streaming and related needs. In this case, the copycats relied upon the services of CloudFlare to provide DNS hosting and a content delivery network (“CDN”). Failing to thwart Grooveshark through direct action alone, the plaintiffs sought and were granted a TRO against certain third parties, eventually served on CloudFlare, hoping to staunch the flow of infringing content by temporarily enjoining the ancillary activities that enabled the pirates to continue operations.

CloudFlare refused to comply with the TRO, claiming the TRO didn’t apply to it (for reasons discussed below). The court disagreed, however, and found that CloudFlare was, in fact, bound by the TRO.

Unsurprisingly, the copyright scolds came out strongly against the TRO and its application to CloudFlare, claiming that

Copyright holders should not be allowed to blanket infrastructure companies with blocking requests, co-opting them into becoming private trademark and copyright police.

Devlin Hartline wrote an excellent analysis of the court’s decision that the TRO was properly applied to CloudFlare, concluding that it was neither improper nor problematic. In sum, as Hartline discusses, the court found that CloudFlare was indeed engaged in “active concert and participation” and was, therefore, properly subject to a TRO under FRCP 65 that would prevent it from further enabling the copycats to run their service.

Hartline’s analysis is spot-on, but we think it important to clarify and amplify his analysis in a way that, we believe, actually provides insight into a much larger problem for CloudFlare.

As Hartline states,

This TRO wasn’t about the “world at large,” and it wasn’t about turning the companies that provide internet infrastructure into the “trademark and copyright police.” It was about CloudFlare knowingly helping the enjoined defendants to continue violating the plaintiffs’ intellectual property rights.

Importantly, the issuance of the TRO turned in part on whether the plaintiffs were likely to succeed on the merits — which is to say that the copycats could in fact be liable for copyright infringement. Further, the initial TRO became a preliminary injunction before the final TRO hearing because the copycats failed to show up to defend themselves. Thus, CloudFlare was potentially exposing itself to a claim of contributory infringement, possibly from the time it was notified of the infringing activity by the RIAA. This is so because a claim of contributory liability would require that CloudFlare “knowingly” contributed to the infringement. Here there was actual knowledge upon issuance of the TRO (if not before).

However, had CloudFlare gone along with the proceedings and complied with the court’s order in good faith, § 512 of the Digital Millennium Copyright Act (DMCA) would have provided a safe harbor. Given its actual behavior, though, the company now has a lot more to fear than a mere TRO.

Although we don’t have the full technical details of how CloudFlare’s service operates, we can make some fair assumptions. Most importantly, in order to optimize the content it serves, a CDN would necessarily have to store that content at some point as part of an optimizing cache scheme. Under the terms of the DMCA, an online service provider (OSP) that engages in caching of online content will be immune from liability, subject to certain conditions. The most important condition relevant here is that, in order to qualify for the safe harbor, the OSP must “expeditiously [] remove, or disable access to, the material that is claimed to be infringing upon notification of claimed infringement[.]”
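
Although, as noted, we don’t know the internals of CloudFlare’s system, the structure of the caching safe harbor itself is easy to model. Here is a toy sketch (emphatically not CloudFlare’s actual architecture) of the condition just quoted: the provider keeps its immunity only so long as it expeditiously disables access to cached material once properly notified.

```python
# Toy model of the DMCA §512(b) caching condition described above.
# This is a sketch for illustration, not a description of any real CDN.

class CachingProvider:
    def __init__(self):
        self.cache = {}          # url -> cached content
        self.blocked = set()     # urls disabled after infringement notices

    def serve(self, url):
        if url in self.blocked:
            return None          # access disabled: safe-harbor condition honored
        return self.cache.get(url)

    def notify_infringement(self, url):
        # Upon notification of claimed infringement, expeditiously
        # remove or disable access to the cached material.
        self.blocked.add(url)
        self.cache.pop(url, None)

cdn = CachingProvider()
cdn.cache["grooveshark.li/some-track"] = "<cached stream bytes>"
cdn.notify_infringement("grooveshark.li/some-track")
assert cdn.serve("grooveshark.li/some-track") is None
```

The point is simply that the statutory condition maps onto a cheap, well-understood operation; the dispute here was over CloudFlare’s willingness to perform it, not its technical ability.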

Here, not only had CloudFlare been informed by the plaintiffs that it was storing infringing content, but a district court had gone so far as to grant a TRO against CloudFlare’s serving of said content. It certainly seems plausible to view CloudFlare as acting outside the scope of the DMCA safe harbor once it refused to disable access to the infringing content after the plaintiffs contacted it, and certainly once the TRO was deemed to apply to it.

To underscore this point, CloudFlare’s arguments during the TRO proceedings essentially admitted to knowledge that infringing material was flowing through its CDN. CloudFlare focused its defense on the fact that it was not an active participant in the infringing activity, but was merely a passive network through which the copycats’ content was flowing. Moreover, CloudFlare argued that

Even if [it]—and every company in the world that provides similar services—took proactive steps to identify and block the Defendants, the website would remain up and running at its current domain name.

But while this argument may make some logical sense from the perspective of a party resisting an injunction, it amounts to a very big admission in terms of a possible infringement case — particularly given CloudFlare’s obstinacy in refusing to help the plaintiffs shut down the infringing sites.

As noted above, CloudFlare had an affirmative duty to at least suspend access to infringing material once it was aware of the infringement (and, of course, even more so once it received the TRO). Instead, CloudFlare relied upon its “impossibility” argument against complying with the TRO based on the claim that enjoining CloudFlare would be futile in thwarting the infringement of others. CloudFlare does appear to have since complied with the TRO (which is now a preliminary injunction), but the compliance does not change a very crucial fact: knowledge of the infringement on CloudFlare’s part existed before the preliminary injunction took effect, while CloudFlare resisted the initial TRO as well as RIAA’s efforts to secure compliance.

Phrased another way, CloudFlare became an infringer by virtue of having cached copyrighted content and been given notice of that content. However, in its view, merely removing CloudFlare’s storage of that copyrighted content would have done nothing to prevent other networks from also storing the copyrighted content, and therefore it should not be enjoined from its infringing behavior. This essentially amounts to an admission of knowledge of infringing content being stored in its network.

It would be hard to believe that CloudFlare’s counsel failed to advise it to consider the contributory infringement issues that could arise from its conduct prior to and during the TRO proceedings. Thus CloudFlare’s position is somewhat perplexing, particularly once the case became a TRO proceeding. CloudFlare could perhaps have made technical arguments against the TRO in an attempt to demonstrate to its customers that it didn’t automatically shut down services at the behest of the RIAA. It could have done this in good faith, and without the full-throated “impossibility” argument that could very plausibly draw it into infringement litigation. But whatever CloudFlare thought it was gaining in taking a “moral” stance on behalf of OSPs everywhere with its “impossibility” argument, it may well have ended up costing itself much more.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are The Good, The Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Commission this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers who ultimately will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced with not only economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary when operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually be ratifying a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright in explaining why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to disproportionately affect these important US companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have this exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[when] a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added.)

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.:

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. But generally (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest greater need for a thorough investigation), but they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.
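
For concreteness, the structural screen itself is mechanical. Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points triggers the presumption the majority invoked. A sketch, with hypothetical market shares:

```python
# The Guidelines' structural screen, illustrated with made-up shares.

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

pre_merger = [35, 30, 20, 15]        # four firms
post_merger = [35 + 30, 20, 15]      # the two largest merge (4-to-3)

delta = hhi(post_merger) - hhi(pre_merger)
print(f"pre: {hhi(pre_merger)}, post: {hhi(post_merger)}, delta: {delta}")

# Post-merger HHI > 2,500 and delta > 200 => presumed likely to enhance market power.
print("presumption triggered:", hhi(post_merger) > 2500 and delta > 200)
```

The screen is trivially easy to run, which is precisely the concern: it says nothing, by itself, about whether this particular merger in this particular market weakens competitive constraints.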

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is — how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.

Presumably that’s exactly what the ABA’s upcoming program will do, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

Moderator:

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP

Speakers:

  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulating rates for interconnection and extending the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC to not only apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but to also start a rulemaking to require edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. In fact, it is a plausible scenario that overregulating data collection online could lead to greater use of paywalls to access content. This may be a greater threat to Internet Openness than anything ISPs have done.

And then yesterday, the first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to really avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it is clear that it has the ability to impose de facto rate regulation through case-by-case adjudication. Whether it is rate regulation according to Title II (which the FCC ostensibly forswore through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

In sum, while neither of these actions were contemplated by the FCC (they claim), such abstract rules are going to lead to random complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service as Title II has had the (supposedly) unintended consequence of sweeping in far more (both in scope of application and rules) than was bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.

The TCPA is an Antiquated Law

The Telephone Consumer Protection Act (“TCPA”) is back in the news following a letter sent to PayPal from the Enforcement Bureau of the FCC.  At issue are amendments that PayPal intends to introduce into its end user agreement. Specifically, PayPal is planning on including an automated call and text message system with which it would reach out to its users to inform them of account updates, perform quality assurance checks, and provide promotional offers.

Enter the TCPA, which, as the Enforcement Bureau noted in its letter, has been used for over twenty years by the FCC to “protect consumers from harassing, intrusive, and unwanted calls and text messages.” The FCC has two primary concerns in its warning to PayPal. First, there was no formal agreement between PayPal and its users that would satisfy the FCC’s rules and allow PayPal to use an automated call system. And, perhaps most importantly, PayPal is not entitled to simply attach an “automated calls” clause to its user agreement as a condition of providing the PayPal service (as it clearly intends to do with its amendments).

There are a number of things wrong with the TCPA and the FCC’s decision to enforce its provisions against PayPal in the current instance. The FCC has the power to provide for some limited exemptions to the TCPA’s prohibition on automated dialing systems. Most applicable here, the FCC has the discretion to exempt calls to cell phone users that won’t result in those users being billed for the calls. Although most consumers still buy plans that allot minutes for their monthly use, the practical reality for most cell phone users is that they no longer need to count minutes for every call. Users typically have a large number of minutes on their plans, and certainly many of those minutes can go unused. It seems that the progression of technology and the economics of cellphones over the last twenty-five years should warrant Congressional reconsideration of the underlying justifications for at least this prohibition in the TCPA.

However, exceptions aside, there remains a much larger issue with the TCPA, one that is also rooted in the outdated technological assumptions underlying the law. The TCPA was meant to prevent dedicated telemarketing companies from using the latest in “automated dialing” technology circa 1991 to harass people. It was not intended to stymie legitimate businesses from experimenting with more efficient methods of contacting their own customers.

The text of the law underscores its technological antiquity:  according to the TCPA, an “automatic telephone dialing system” means equipment which “has the capacity” to store or produce telephone numbers using a random or sequential number generator, and to dial those numbers. This is to say, the equipment that was contemplated when the law was written was software-enabled phones that were purpose-built to enable telemarketing firms to make blanket cold calls to every number in a given area code. The language clearly doesn’t contemplate phones connected to general purpose computing resources, as most phone systems are today.

Modern phone systems, connected to intelligent computer backends, are designed to flexibly reach out to hundreds or thousands of existing customers at a time, and in a way that efficiently enhances the customer’s experience with the company. Technically, yes, these systems are capable of auto-dialing a large number of random recipients; however, when a company like PayPal uses this technology, its purpose is clearly different from that of the equivalent of spammers on the phone system. The absence of any required nexus between an intent to random-dial and a particular harm experienced by an end user is a major hole in the TCPA. Particularly in this case, it seems fairly absurd that the TCPA could be used to prevent PayPal from interacting with its own customers.

Further, there is a lot at stake for those accused of violating the TCPA. In the PayPal warning letter, the FCC noted that it is empowered to levy a $16,000 fine per call or text message that it finds violates the terms of the TCPA. That’s bad, but it’s nowhere near as bad as it could get. The TCPA also contains a private right of action that was meant to encourage individual consumers to take telemarketers to small claims court in their local state.  Each individual consumer is entitled to receive provable damages or statutory damages of $500.00, whichever is greater. If willfulness can be proven, the damages are trebled, which in effect means that most individual plaintiffs in the know will plead willfulness, and wait for either a settlement conference or trial to sort the particulars out.

However, over the years a cottage industry has built up around class action lawyers aggregating “harmed” plaintiffs who had received unwanted automatic calls or texts, and forcing settlements in the tens of millions of dollars. The math is pretty simple. A large company with lots of customers may be tempted to use an automatic system to send out account information and offer alerts. If it sends out five hundred thousand auto calls or texts, that could result in “damages” in the amount of $250M in a class action suit. A settlement for five or ten million dollars is a deal by comparison. For instance, in 2013 Bank of America entered into a $32M settlement for texts and calls made between 2007 and 2013 to 7.7 million people.  If they had gone to trial and lost, the damages could have been as much as $3.8B!
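
The exposure arithmetic, made explicit using the hypothetical call volume above and the Bank of America figures:

```python
# Statutory-damages arithmetic under the TCPA: $500 per call or text,
# trebled on a showing of willfulness. Call volumes are from the text above.

STATUTORY_DAMAGES = 500

calls = 500_000
base_exposure = calls * STATUTORY_DAMAGES       # $250,000,000
willful_exposure = base_exposure * 3            # $750,000,000 if willfulness is proven
print(f"hypothetical campaign: ${base_exposure:,} base, ${willful_exposure:,} trebled")

boa_recipients = 7_700_000
boa_exposure = boa_recipients * STATUTORY_DAMAGES   # ~$3.85 billion, untrebled
print(f"Bank of America worst case: ${boa_exposure:,} versus a $32M settlement")
```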

The purpose of the TCPA was to prevent abusive telemarketers from harassing people, not to defeat the use of an entire technology that can be employed to increase efficiency for businesses and lower costs for consumers. The per call penalties associated with violating the TCPA, along with imprecise and antiquated language in the law, provide a major incentive to use the legal system to punish well-meaning companies that are just operating their non-telemarketing businesses in a reasonable manner. It’s time to seriously revise this law in light of the changes in technology over the past twenty-five years.