The EC’s Android decision is expected sometime in the next couple of weeks. Current speculation is that the EC may issue a fine exceeding last year’s huge €2.4 billion fine for Google’s alleged antitrust violations related to the display of general search results. Based on the statement of objections (“SO”), I expect the Android decision will be a muddle of legal theory that not only fails to connect with facts and marketplace realities, but will also perversely incentivize platform operators to move toward less open ecosystems.

As has been amply demonstrated (see, e.g., here and here), the Commission has made fundamental errors in its market definition analysis in this case. Chief among its failures is the EC’s incredible decision to treat the relevant market as one for licensable mobile operating systems — a market that notably excludes the largest smartphone player by revenue, Apple.

This move, though perhaps expedient for the EC, leads the Commission to view with disapproval an otherwise competitively justifiable set of licensing requirements that Google imposes on its partners. These include the anti-fragmentation and app-bundling provisions (“Provisions”) in the agreements that partners sign in order to distribute Google Mobile Services (“GMS”) with their devices. Among other things, the Provisions guarantee that a basic set of Google’s apps and services will be non-exclusively featured on partners’ devices.

The Provisions — when viewed in a market in which Apple is a competitor — are clearly procompetitive. The critical mass of GMS-flavored versions of Android (as opposed to vanilla Android Open Source Project (“AOSP”) devices) supplies enough predictability to an otherwise unruly universe of disparate Android devices such that software developers will devote the sometimes considerable resources necessary for launching successful apps on Android.

Open source software like AOSP is great, but anyone with more than a passing familiarity with Linux recognizes that the open source movement often fails to produce consumer-friendly software. By using the Provisions to facilitate a predictable user (and developer) experience, Google assembles the critical mass of users that attracts developers to Android — a significant service to the Android market as a whole.

Generativity on platforms is a complex phenomenon

To some extent, the EC’s complaint is rooted in a bias in favor of Android acting as a more “generative” platform, such that third-party developers are relatively better able to reach users of Android devices. But this effort by the EC to undermine the Provisions will ultimately be self-defeating, as it will likely push mobile platform providers to converge on similar, relatively more closed business models that provide less overall consumer choice.

Even assuming that the Provisions somehow prevent third-party app installs or otherwise create a kind of path dependency among users such that they never seek out new apps (which the data clearly shows is not happening), focusing on third-party developers as the sole or primary source of innovation on Android is a mistake.

The control that platform operators like Apple and Google exert over their respective ecosystems does not per se create more or less generativity on the platforms. As Gus Hurwitz has noted, “literature and experience amply demonstrate that ‘open’ platforms, or general-purpose technologies generally, can promote growth and increase social welfare, but they also demonstrate that open platforms can also limit growth and decrease welfare.” Conversely, tighter vertical integration (the Apple model) can also produce more innovation than open platforms.

What is important is the balance between control and freedom, and the degree to which third-party developers are able to innovate within the context of a platform’s constraints. The constraints themselves — whether Apple’s more tightly controlled terms or Google’s more generous Provisions — facilitate generativity.

In short, it is overly simplistic to view generativity as something that happens at the edges without respect to structural constraints at the core. The interplay between platform and developer is complex and complementary, and needs to be viewed as a dynamic process.

Whither platform diversity?

I love Apple’s devices and I am quite happy living within its walled garden. But I certainly do not believe that Apple’s approach is the only one that makes sense. Yet, in its SO, the EC blesses Apple’s approach as the proper way to manage a mobile ecosystem. It explicitly excluded Apple from its competitive analysis, and attacked Google on the basis that it imposed restrictions in the context of licensing its software. Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it did would apparently have been fine. The upshot is that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

With this SO, the EC is basically asserting that Google is anticompetitively bundling without being able to plausibly assert foreclosure (because, again, third-party app installs are easy to do and are easily shown to number in the billions). I’m sure Google doesn’t want to move in the direction of having a more closed system, but the lesson of this case will loom large for tomorrow’s innovators.

In the face of eager antitrust enforcers like those in the EU, the easiest path for future innovators will be to keep everything tightly controlled so as to prevent both fragmentation and misguided regulatory intervention.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—are distributed illegally. As a result, content creators and owners are deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. Relative to other enforcement options, however, such as issuing millions of takedown notices, it is likely a much simpler and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be. That is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative solution to piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over websites that are “blatantly, overwhelmingly, or structurally engaged in piracy.” The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block the sites in question. The CRTC would retain ultimate authority over whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to disclose the identities of users who upload infringing content. In just one relatively large case, the cost of complying with these requests was estimated at CAD $8.25 million.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

This failure, on the other hand, highlights one of the greatest points commending the FairPlay proposal: the focus of enforcement shifts away from casually infringing users and directly onto the operators of sites that engage in widespread infringement. One of the criticisms of Canada’s current “Notice and Notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is thereby completely bypassed.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP, and property generally, represents a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

So too with the justification — and tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harms may occur — some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect; and (2) when harms do occur, is there a sufficiently open and accessible process available through which affected parties (and interested third parties) can criticize and improve the system?

On both counts the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will be establishing a reasonable and effective remedy to address violations of individuals’ property, the ownership of which is broadly considered legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program that some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, even the critics of the FairPlay proposal acknowledge that site blocking produces a measurable reduction in piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of The Pirate Bay domains had led nearly 25 percent of illegal downloaders to stop or reduce their piracy:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.
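
The arithmetic behind that headline figure is straightforward: combining the two reported ranges, somewhere between 8% + 14.5% = 22.5% and 9.3% + 15.3% = 24.6% of active illegal downloaders stopped or reduced their piracy in response to the blocking of a single site — hence “nearly 25 percent.”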

The proposal stands to reduce the costs of combating piracy as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted that the costs of even run-of-the-mill suits essentially price copyright enforcement out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that is easily deployable against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this — but that surely can’t be an indictment against lawmaking as a whole. If allowing a form of prohibition for category A is appropriate, but the same kind of prohibition is inappropriate for category B, then either we assume lawmakers are capable of differentiating between category A and category B, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to convincingly make that case (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

Some view a host of social ills allegedly related to the large size of firms like Amazon as an occasion to call for the company’s breakup. And, unfortunately, these critics find an unlikely ally in President Trump, whose tweet storms claim that tech platforms are too big and extract unfair rents at the expense of small businesses. But these critics are wrong: Amazon is not a dangerous monopoly, and it certainly should not be broken up.

Of course, no one really spells out what it means for these companies to be “too big.” Even Barry Lynn, a champion of the neo-Brandeisian antitrust movement, has shied away from specifics. The best that emerges when probing his writings is that he favors something like a return to Joe Bain’s “Structure-Conduct-Performance” paradigm (but even here, the details are fuzzy).

The reality of Amazon’s impact on the market is quite different from that asserted by its critics. Amazon has had decades to fulfill a nefarious scheme to suddenly raise prices and reap the benefits of anticompetitive behavior. Yet it keeps putting downward pressure on prices in a way that seems to be commoditizing goods instead of building anticompetitive moats.

Amazon Does Not Anticompetitively Exercise Market Power

Twitter rants aside, more serious attempts to attack Amazon on antitrust grounds argue that it is engaging in pricing that is “predatory.” But “predatory pricing” requires a specific showing of factors — which, to date, has not been made — in order to justify legal action. Absent such a showing, it has long been understood that seemingly “predatory” conduct is unlikely to harm consumers and often actually benefits them.

One important requirement that has gone unsatisfied is that a firm engaging in predatory pricing must have market power. Contrary to common characterizations of Amazon as a retail monopolist, its market power is less than it seems. By no means does it control retail in general. Rather, less than half of all online commerce (44%) takes place on its platform (and that number represents only 4% of total US retail commerce). Of that 44 percent, a significant portion is attributable to the merchants who use Amazon as a platform for their own online retail sales. Rather than abusing a monopoly market position to predatorily harm its retail competitors, at worst Amazon has created a retail business model that puts pressure on other firms to offer more convenience and lower prices to their customers. This is what we want and expect of competitive markets.
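
A quick back-of-the-envelope calculation using those figures underscores the point: if Amazon’s 44 percent share of online commerce amounts to only 4 percent of total US retail, then all of e-commerce accounts for roughly 9 percent of US retail (0.04 ÷ 0.44 ≈ 0.091) — which means Amazon competes for consumers against the more than 90 percent of retail spending that occurs elsewhere.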

The claims leveled at Amazon are the intellectual kin of those made against Walmart during its ascendancy — that it was destroying Main Street throughout the nation. In 1993, it was feared that Walmart’s quest to vertically integrate its offerings through Sam’s Club warehouse operations meant that “[r]etailers could simply bypass their distributors in favor of Sam’s — and Sam’s could take revenues from local merchants on two levels: as a supplier at the wholesale level, and as a competitor at retail.” This is a strikingly similar accusation to those leveled against Amazon’s use of its Seller Marketplace to aggregate smaller retailers on its platform.

But, just as in 1993 with Walmart, and now with Amazon, the basic fact remains that consumer preferences shift. Firms need to alter their behavior to satisfy their customers, not pretend they can change consumer preferences to suit their own needs. Preferring small, local retailers to Amazon or Walmart is a decision for individual consumers interacting in their communities, not for federal officials figuring out how best to pattern the economy.

All of this is not to say that Amazon is not large, or important, or that, as a consequence of its success, it does not exert influence over the markets in which it operates. But having influence through success is not the same as anticompetitively asserting market power.

Other criticisms of Amazon focus on its conduct in specific vertical markets in which it does have more significant market share. For instance, a UK Liberal Democrat leader recently claimed that “[j]ust as Standard Oil once cornered 85% of the refined oil market, today… Amazon accounts for 75% of ebook sales … .”

The problem with this concern is that Amazon’s conduct in the ebook market has had, on net, pro-competitive, not anti-competitive, effects. Amazon’s behavior in the ebook market has actually increased overall demand for books (and expanded output), increased the amount that consumers read, and decreased the price of these books. Amazon is now even opening physical bookstores. Lina Khan made much hay in her widely cited article last year of the claim that this was all part of a grand strategy to predatorily push competitors out of the market:

The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws.

But it’s hard to allege predation when, over the past twenty years, Amazon has consistently expanded output and lowered overall prices in the book market. Courts and lawmakers have sought to craft laws that encourage firms to provide consumers with more choices at lower prices — a feat that Amazon repeatedly accomplishes. To describe this conduct as anticompetitive is to ask for a legal requirement at odds with the goal of benefiting consumers: it is to claim that Amazon has a contradictory duty both to benefit consumers and its shareholders, and to make sure that all of its less successful competitors stay in business.

But far from creating a monopoly, the empirical reality appears to be that Amazon is driving categories of goods, like books, closer to the textbook model of commodities in a perfectly competitive market. Hardly an antitrust violation.

Amazon Should Not Be Broken Up

“Big is bad” may roll off the tongue, but, as a guiding ethic, it makes for terrible public policy. Amazon’s size and success are a direct result of its ability to enter relevant markets and to innovate. To break up Amazon, or any other large firm, is to punish it for serving the needs of its consumers.

None of this is to say that large firms are incapable of causing harm or acting anticompetitively. But we should expect calls for dramatic regulatory intervention — especially from those in a position to influence regulatory or market reactions to such calls — to be supported by substantial factual evidence and legal and economic theory.

This tendency to go after large players is nothing new. As noted above, Walmart triggered many similar concerns decades ago. Thinking about Walmart then, pundits feared that direct competition with it was fruitless:

In the spring of 1992 Ken Stone came to Maine to address merchant groups from towns in the path of the Wal-Mart advance. His advice was simple and direct: don’t compete directly with Wal-Mart; specialize and carry harder-to-get and better-quality products; emphasize customer service; extend your hours; advertise more — not just your products but your business — and perhaps most pertinent of all to this group of Yankee individualists, work together.

And today, some think it would be similarly pointless to compete with Amazon:

Concentration means it is much harder for someone to start a new business that might, for example, try to take advantage of the cheap housing in Minneapolis. Why bother when you know that if you challenge Amazon, they will simply dump your product below cost and drive you out of business?

The interesting thing to note, of course, is that Walmart is now desperately trying to compete with Amazon. But despite being very successful in its own right, and having strong revenues, Walmart doesn’t seem able to keep up.

Some small businesses will close as new business models emerge and consumer preferences shift. This is to be expected in a market driven by creative destruction. Once upon a time Walmart changed retail and improved the lives of many Americans. If our lawmakers can resist the urge to intervene without real evidence of harm, Amazon just might do the same.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set is taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is simply trying to move its docket items along by the numbers in order to avoid the relentless criticism that it is just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss…As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from regulating ISPs as common carriers. The executive action in my own home state of New Jersey is illustrative of this point.

The governor signed an executive order in February that attempts to end-run the FCC’s rules by exercising New Jersey’s power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government that purchases broadband connectivity do so only from “ISPs that adhere to ‘net neutrality’ principles.” It’s probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that affect state agencies of New Jersey directly. But it’s probably impermissible for those contractual requirements to be used as a lever to force ISPs to treat third parties (i.e., New Jersey’s citizens) according to net neutrality principles.

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements…Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more wiggle room in the legal interpretation because the state is acting as a “market participant.” So long as New Jersey’s actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it was not permitted to legislate explicitly, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Everyone thinks that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it’s not leaving gaps in regulation, it does not mean that the FCC is buying time for ISPs. In the end, simple politics explains the states’ actions, and the normal (if often unsatisfying) back and forth of the administrative state explains the FCC’s decisions.

The Internet is a modern miracle: from providing all varieties of entertainment, to facilitating life-saving technologies, to keeping us connected with distant loved ones, the scope of the Internet’s contribution to our daily lives is hard to overstate. Moving forward there is undoubtedly much more that we can and will do with the Internet, and part of that innovation will, naturally, require a reconsideration of existing laws and how new Internet-enabled modalities fit into them.

But when undertaking such a reconsideration, the goal should not be simply to promote Internet-enabled goods above all else; rather, it should be to examine the law’s effect on the promotion of new technology within the context of other, competing social goods. In short, there are always trade-offs entailed in changing the legal order. As such, efforts to reform, clarify, or otherwise change the law that affects Internet platforms must be balanced against other desirable social goods, not automatically prioritized above them.

Unfortunately — and frequently with the best of intentions — efforts to promote one good thing (for instance, more online services) inadequately take account of the balance of the larger legal realities at stake. And one of the most important legal realities too often thrown aside in the rush to protect the Internet is the requirement that policy be established through public, (relatively) democratically accountable channels.

Trade deals and domestic policy

Recently a letter was sent by a coalition of civil society groups and law professors asking the NAFTA delegation to incorporate U.S.-style intermediary liability immunity into the trade deal. Such a request is notable for its timing in light of the ongoing policy struggles over SESTA — a bill currently working its way through Congress that seeks to curb human trafficking through online platforms — and the risk that domestic platform companies face of losing (at least in part) the immunity provided by Section 230 of the Communications Decency Act. But this NAFTA push presents a tradeoff not merely between less trafficking and more online services, but between promoting policies in a way that protects the rule of law and doing so in a way that undermines it.

Indeed, the NAFTA effort appears to be aimed at least as much at sidestepping the ongoing congressional fight over platform regulation as it is aimed at exporting U.S. law to our trading partners. Thus, according to EFF, for example, “[NAFTA renegotiation] comes at a time when Section 230 stands under threat in the United States, currently from the SESTA and FOSTA proposals… baking Section 230 into NAFTA may be the best opportunity we have to protect it domestically.”

It may well be that incorporating Section 230 into NAFTA is the “best opportunity” to protect the law as it currently stands from efforts to reform it to address conflicting priorities. But that doesn’t mean it’s a good idea. In fact, whatever one thinks of the merits of SESTA, it is not obviously a good idea to use a trade agreement as a vehicle to override domestic reforms to Section 230 that Congress might implement. Trade agreements can override domestic law, but that is not the reason we engage in trade negotiations.

In fact, other parts of NAFTA remain controversial precisely for their ability to undermine domestic legal norms, in this case in favor of guaranteeing the expectations of foreign investors. EFF itself is deeply skeptical of this “investor-state” dispute process (“ISDS”), noting that “[t]he latest provisions would enable multinational corporations to undermine public interest rules.” The irony here is that ISDS provides a mechanism for overriding domestic policy that is a close analogy for what EFF advocates for in the Section 230/SESTA context.

ISDS allows foreign investors to sue NAFTA signatories in a tribunal when a signatory’s domestic laws have harmed investment expectations. The end result is that the signatory could be responsible for paying large sums to litigants, which in turn would deter the signatory from continuing to administer its laws in the same fashion.

Stated differently, NAFTA currently contains a mechanism that favors one party (foreign investors) in a way that prevents signatory nations from enacting and enforcing laws approved of by democratically elected representatives. EFF and others disapprove of this.

Yet, at the same time, EFF also promotes the idea that NAFTA should contain a provision that favors one party (Internet platforms) in a way that would prevent signatory nations from enacting and enforcing laws like SESTA that might be approved of by democratically elected representatives.

A more principled stance would be skeptical of the domestic law override in both contexts.

Restating Copyright or creating copyright policy?

Take another example: some have suggested that the American Law Institute (“ALI”) is being used to subvert Congressional will. Since 2013, ALI has taken it upon itself to “restate” the law of copyright. ALI is well known and respected for its common law restatements, but it may be that something more than mere restatement is going on here. As the NY Bar Association recently observed:

The Restatement as currently drafted appears inconsistent with the ALI’s long-standing goal of promoting clarity in the law: indeed, rather than simply clarifying or restating that law, the draft offers commentary and interpretations beyond the current state of the law that appear intended to shape current and future copyright policy.  

It is certainly odd that ALI (or any other group) would seek to restate a body of law that is already stated in the form of an overarching federal statute. The point of a restatement is to gather together the decisions of disparate common law courts interpreting different laws and precedent in order to synthesize a single, coherent framework approximating an overall consensus. If done correctly, a restatement of a federal statute would, theoretically, end up with the exact statute itself along with some commentary about how judicial decisions have filled in the blanks differently — a state of affairs that already exists with the copious academic literature commenting on federal copyright law.

But it seems that merely restating judicial interpretations was not the only objective behind the copyright restatement effort. In a letter to ALI, one of the scholars responsible for the restatement project noted:

While congressional efforts to improve the Copyright Act… may be a welcome and beneficial development, it will almost certainly be a long and contentious process… Register Pallante… [has] not[ed] generally that “Congress has moved slowly in the copyright space.”

Reform of copyright law, in other words, and not merely restatement of it, was an important impetus for the project. As an attorney for the Copyright Office observed, “[a]lthough presented as a “Restatement” of copyright law, the project would appear to be more accurately characterized as a rewriting of the law.” But “rewriting” is a job for the legislature. And even if Congress moves slowly, or the process is frustrating, the democratic processes that produce the law should still be respected.

Pyrrhic Policy Victories

Attempts to change copyright or entrench liability immunity through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.

It’s no surprise that some are frustrated and concerned about intermediary liability and copyright issues: on the margin, it’s definitely harder to operate an Internet platform if it faces sweeping liability for the actions of third parties (whether for human trafficking or for infringing copyrights). Maybe copyright law needs to be reformed, and perhaps intermediary liability protections must be maintained exactly as they are (or expanded). But the right way to arrive at these policy outcomes is not through backdoors — and it is not to begin with the assertion that such outcomes are required.

Congress and the courts can be frustrating vehicles through which to enact public policy, but they have the virtue of being relatively open to public deliberation, and of having procedural constraints that can circumscribe excesses and idiosyncratic follies. We might get bad policy from Congress. We might get bad cases from the courts. But the theory of our system is that, on net, having a frustratingly long, circumscribed, and public process will tend to weed out most of the bad ideas and impulses that would otherwise result from unconstrained decision making, even if well-intentioned.

We should meet efforts like these to end-run Congress and the courts with significant skepticism. Short term policy “victories” are likely not worth the long-run consequences. These are important, complicated issues. If we surreptitiously adopt idiosyncratic solutions to them, we risk undermining the rule of law itself.

At a Heritage antitrust event this week, a panelist offered an interesting tongue-in-cheek observation about the rising populist antitrust movement. To the extent that the new populist antitrust movement is broadly concerned about effects on labor and wage depression, then, in principle, it should also be friendly to cartels. Although counterintuitive, employees have long supported and benefited from cartels, because cartels generally afford greater job security and higher wages than competitive firms do. And, of course, labor itself has long sought the protection of cartels — in the form of unions — to secure the same benefits.

For instance, in the days before widespread foreign competition in domestic auto markets, domestic unionized workers at the Big Three producers enjoyed relatively higher wages for relatively less output. Competition from abroad changed the economic landscape for both producers and workers, with the end result being a reduction in union power and relatively lower overall wages for workers. The union model — a labor cartel — could guarantee higher wages to those workers only so long as it was insulated from that competition.

The same story can be seen in other industries as well, from telecommunications to service workers to public sector employees. Generally, market power on the labor demand side (employers) tends to facilitate market power on the labor supply side: firms with market power — with supracompetitive profits — can afford to pay more for labor, and often are willing to do so in order to secure political support (and also to make it more expensive for potential competitors to hire skilled employees). Labor is a substantial cost for firms in competitive markets, however, so firms without market power are always looking to economize on labor (that is, to pay low wages, hire only as many employees as needed, and substitute capital for labor wherever it is efficient to do so).

Therefore, if broad labor effects should be a prime concern of antitrust, perhaps enforcers should use antitrust laws to encourage cartel formation when it might increase wages, regardless of the effects on productivity, prices, and other efficiencies (or should perhaps hold wage effects as a trump card against traditional efficiencies justifications).

No one will make a serious case for promoting cartels (although former FTC Chairman Pertschuk sounded similar notes in the late 1970s), but the comment makes a deeper point about ongoing efforts to undermine the consumer welfare standard. Fundamental contradictions exist in antitrust rhetoric that is unmoored from economic analysis. Professor Hovenkamp highlighted this in a recent paper as well:

The coherence problem [in antitrust populism] shows up in goals that are unmeasurable and fundamentally inconsistent, although with their contradictions rarely exposed. Among the most problematic contradictions is the one between small business protection and consumer welfare. In a nutshell, consumers benefit from low prices, high output and high quality and variety of products and services. But when a firm or a technology is able to offer these things they invariably injure rivals, typically smaller or dedicated to older technologies, who are unable to match them. Although movement antitrust rhetoric is often opaque about specifics, its general effect is invariably to encourage higher prices or reduced output or innovation, mainly for the protection of small business. Indeed, that has been a predominant feature of movement antitrust ever since the Sherman Act was passed, and it is a prominent feature of movement antitrust today. Indeed, some spokespersons for movement antitrust write as if low prices are the evil that antitrust law should be combatting.

To be fair, even with careful economic analysis, it is not always perfectly clear how to resolve the tensions between antitrust and other policy preferences. For instance, Jonathan Adler described the collision between antitrust and environmental protection in cases where collusion might lead to better environmental outcomes. But even in cases like that, he noted, the issue was essentially a free-rider problem; and, as with intrabrand price agreements — where consumer goodwill was a “commons” that had to be suitably maintained against possible free-riding retailers — what might be an antitrust violation in one context was not necessarily a violation in another.

Moreover, when the purpose of apparently “collusive” conduct is actually to ensure the long-term, sustainable production of a good or service (like fish), the behavior may not be anticompetitive at all. Thus, antitrust remains a plausible means of evaluating economic activity strictly on its own terms (and any alteration to the doctrine itself might simply be to prefer rule of reason analysis over per se analysis when examining these sorts of mitigating circumstances).

And before contorting antitrust into a policy cure-all, it is important to remember that the consumer welfare standard evolved out of sometimes good (price fixing bans) and sometimes questionable (prohibitions on output contracts) doctrines that were subject to legal trial and error. This evolution was triggered by “increasing economic sophistication,” as “the enforcement agencies and courts [began] reaching for new ways in which to weigh competing and conflicting claims.”

The vector of that evolution was toward the use of antitrust as a reliable, testable, and clear set of legal principles that are ultimately subject to economic analysis. When the populists ask us, for instance, to return to a time when judges could “prevent the conversion of concentrated economic power into concentrated political power” via antitrust law, they are asking for much more than just a new gloss on existing doctrine. They are asking us to unlearn all of the lessons of the twentieth century that ultimately led to the maturation of antitrust law.

It’s perfectly reasonable to care about political corruption, worker welfare, and income inequality. It’s not perfectly reasonable to try to shoehorn goals based on these political concerns into a body of legal doctrine that evolved a set of tools wholly inappropriate for achieving those ends.

Canada’s large merchants have called on the government to impose price controls on interchange fees, claiming this would benefit not only merchants but also consumers. But experience elsewhere contradicts this claim.

In a recently released Macdonald-Laurier Institute report, Julian Morris, Geoffrey A. Manne, Ian Lee, and Todd J. Zywicki detail how price controls on credit card interchange fees would result in reduced reward earnings and higher annual fees on credit cards, with adverse effects on consumers, many merchants, and the economy as a whole.

This study draws on the experience with fee caps imposed in other jurisdictions, highlighting in particular the effects in Australia, where interchange fees were capped in 2003. There, the caps resulted in a significant decrease in the rewards earned per dollar spent and an increase in annual card fees. If similar restrictions were imposed in Canada, resulting in a 40 percent reduction in interchange fees, the authors of the report anticipate that:

  1. On average, each adult Canadian would be worse off to the tune of between $89 and $250 per year due to a loss of rewards and an increase in annual card fees:
    a. for an individual or household earning $40,000, the net loss would be $66 to $187; and
    b. for an individual or household earning $90,000, the net loss would be $199 to $562.
  2. Spending at merchants in aggregate would decline by between $1.6 billion and $4.7 billion, resulting in a net loss to merchants of between $1.6 billion and $2.8 billion.
  3. GDP would fall by between 0.12 percent and 0.19 percent per year.
  4. Federal government revenue would fall by between 0.14 percent and 0.40 percent.

Moreover, tighter fee caps would “have a more dramatic negative effect on middle class households and the economy as a whole.”

You can read the full report here.

This week, the International Center for Law & Economics filed an ex parte notice in the FCC’s Restoring Internet Freedom docket. In it, we reviewed two of the major items that were contained in our formal comments. First, we noted that

the process by which [the Commission] enacted the 2015 [Open Internet Order]… demonstrated scant attention to empirical evidence, and even less attention to a large body of empirical and theoretical work by academics. The 2015 OIO, in short, was not supported by reasoned analysis.

Further, on the issue of preemption, we stressed that

[F]ollowing the adoption of an Order in this proceeding, a number of states may enact their own laws or regulations aimed at regulating broadband service… The resulting threat of a patchwork of conflicting state regulations, many of which would be unlikely to further the public interest, is a serious one…

[T]he Commission should explicitly state that… broadband services may not be subject to certain forms of state regulations, including conduct regulations that prescribe how ISPs can use their networks. This position would also be consistent with the FCC’s treatment of interstate information services in the past.

Our full ex parte comments can be viewed here.

R Street’s Sasha Moss recently posted a piece on TechDirt describing the alleged shortcomings of the Register of Copyrights Selection and Accountability Act of 2017 (RCSAA) — proposed legislative adjustments to the Copyright Office, recently passed in the House and introduced in the Senate last month (with identical language).

Many of the article’s points are well taken. Nevertheless, they don’t support the article’s call for the Senate to “jettison [the bill] entirely,” nor the assertion that “[a]s currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.”

R Street’s main complaint with the legislation is that it doesn’t include other proposals in a House Judiciary Committee whitepaper on Copyright Office modernization. But condemning the RCSAA simply for failing to incorporate all conceivable Copyright Office improvements ignores the political realities confronting Congress — in other words, it lets the perfect be the enemy of the good. It also undermines R Street’s own stated preference for Copyright Office modernization effected through “targeted and immediately implementable solutions.”

Everyone — even R Street — acknowledges that we need to modernize the Copyright Office. But none of the arguments in favor of a theoretical, “better” bill is undermined or impeded by passing this bill first. While there is certainly more that Congress can do on this front, the RCSAA is a sensible, targeted piece of legislation that begins to build the new foundation for a twenty-first-century Copyright Office.

Process over politics

The proposed bill is simple: It would make the Register of Copyrights a nominated and confirmed position. For reasons almost forgotten over the last century and a half, the head of the Copyright Office is currently selected at the sole discretion of the Librarian of Congress. The Copyright Office was placed in the Library merely as a way to grow the Library’s collection with copies of copyrighted works.

More than 100 years later, most everyone acknowledges that the Copyright Office has lagged behind the times. And many think the problem lies with the Office’s placement within the Library, which is plagued with information technology and other problems, and has a distinctly different mission than the Copyright Office. The only real question is what to do about it.

Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization. And yet, somewhat inexplicably, R Street claims that the bill

amounts largely to a partisan battle over who will have the power to select the next Register: [Current Librarian of Congress] Hayden, who was appointed by Barack Obama, or President Donald Trump.

But this is a pretty farfetched characterization.

First, the House passed the bill 378-48, with 145 Democrats joining 233 Republicans in support. That’s more than three-quarters of the Democratic caucus.

Moreover, legislation to make the Register a nominated and confirmed position has been under discussion for more than four years — long before either Dr. Hayden was nominated or anyone knew that Donald Trump (or any Republican at all, for that matter) would be president.

R Street also claims that the legislation

will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, [and that] the nomination process could delay modernization efforts [because of Trump’s] confirmation backlog.

But precisely the opposite seems far more likely — as Sasha herself has previously recognized:

Clarifying the office’s lines of authority does have the benefit of making it more politically accountable…. The [House] bill takes a positive step forward in promoting accountability.

As far as I’m aware, no one claims that Dr. Hayden was “politicized” or that Librarians are vulnerable to capture because they are nominated and confirmed. And a Senate confirmation process will be more transparent than unilateral appointment by the Librarian, and will give the electorate a (nominal) voice in the Register’s selection. Surely unilateral selection of the Register by the Librarian is more susceptible to undue influence.

With respect to the modernization process, we should also not forget that the Copyright Office currently has an Acting Register in Karyn Temple Claggett, who is perfectly capable of moving the modernization process forward. And any limits on her ability to do so would arise from the very tenuousness of her acting position, which is precisely the problem the RCSAA is intended to address.

Modernizing the Copyright Office one piece at a time

It’s certainly true, as the article notes, that the legislation doesn’t include a number of other sensible proposals for Copyright Office modernization. In particular, it points to ideas like forming a stakeholder advisory board, creating new chief economist and technologist positions, upgrading the Office’s information technology systems, and creating a small claims court.

To be sure, these could be beneficial reforms, as ICLE (and many others) have noted. But I would take some advice from R Street’s own “pragmatic approach” to promoting efficient government “with the full realization that progress on the ground tends to be made one inch at a time.”

R Street acknowledges that the legislation’s authors have indicated that this is but a beginning step, and that they plan to tackle the other issues in due course. At a time when passage of any legislation on any topic is a challenge, it seems appropriate to defer to those in Congress who affirmatively want more modernization when it comes to deciding how big a bill to start with.

In any event, it seems perfectly sensible to address the Register selection process before tackling the other issues, which may require more detailed discussions of policy and cost. And with the Copyright Office currently lacking a permanent Register and discussions underway about finding a new one, addressing any changes Congress deems necessary to the selection process seems like the most pressing issue, if those changes are to be in place before the next pick is made.

Further, because the Register would presumably be deeply involved in the selection and operation of any new advisory board, chief economist and technologist, IT system, or small claims process, Congress can also be forgiven for wanting to address the Register issue first. Moreover, a Register who can be summarily dismissed by the Librarian likely doesn’t have the needed autonomy to fully and effectively implement the other proposals from the whitepaper. Why build a house on a shaky foundation when you can fix the foundation first?

Process over substance

All of which leaves the question of why R Street opposes a bill that was passed by a bipartisan supermajority in the House; that effects precisely the kind of targeted, incremental reform that R Street promotes; and that implements a specific reform that R Street favors.

The legislation has widespread support beyond Congress, although the TechDirt piece gives this support short shrift. Instead, it notes that “some” in the content industry support the legislation, but lists only the Motion Picture Association of America. There is a subtle undercurrent of the typical substantive copyright debate, in which “enlightened” thinking on copyright is set against the presumptively malicious overreach of the movie studios. But the piece neglects to mention the support of more than 70 large and small content creators, technology companies, labor unions, and free market and civil rights groups, among others.

Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates. But that rancor is difficult to escape. Copyright minimalists are skeptical of an effectual Copyright Office if it is more likely to promote policies that reinforce robust copyright, even if they support sensible process reforms and more-accountable government in the abstract. And, to be fair, copyright proponents are thrilled when their substantive positions might be bolstered by the promotion of sensible process reforms.

But the truth is that no one really knows how an independent and accountable Copyright Office will act with respect to contentious, substantive issues. Perhaps most likely, increased accountability via nomination and confirmation will introduce more variance in its positions. In other words, on substance, the best guess is that greater Copyright Office accountability and modernization will be a wash — leaving only process itself as a sensible basis on which to assess reform. And on that basis, there is really no reason to oppose this widely supported, incremental step toward a modern US Copyright Office.

This week, the International Center for Law & Economics filed comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive, we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license,

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured,” before going on to note that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”