
The EC’s Android decision is expected sometime in the next couple of weeks. Current speculation is that the EC may issue a fine exceeding last year’s huge €2.4 billion fine for Google’s alleged antitrust violations related to the display of general search results. Based on the statement of objections (“SO”), I expect the Android decision will be a muddle of legal theory that not only fails to connect to facts and marketplace realities, but also will perversely incentivize platform operators to move toward less open ecosystems.

As has been amply demonstrated (see, e.g., here and here), the Commission has made fundamental errors with its market definition analysis in this case. Chief among its failures is the EC’s incredible decision to treat the relevant market as licensable mobile operating systems, which notably excludes the largest smartphone player by revenue, Apple.

This move, though perhaps expedient for the EC, leads the Commission to view with disapproval an otherwise competitively justifiable set of licensing requirements that Google imposes on its partners. This includes anti-fragmentation and app-bundling provisions (“Provisions”) in the agreements that partners sign in order to be able to distribute Google Mobile Services (“GMS”) with their devices. Among other things, the Provisions guarantee that a basic set of Google’s apps and services will be non-exclusively featured on partners’ devices.

The Provisions — when viewed in a market in which Apple is a competitor — are clearly procompetitive. The critical mass of GMS-flavored versions of Android (as opposed to vanilla Android Open Source Project (“AOSP”) devices) supplies enough predictability to an otherwise unruly universe of disparate Android devices such that software developers will devote the sometimes considerable resources necessary for launching successful apps on Android.

Open source software like AOSP is great, but anyone with more than a passing familiarity with Linux recognizes that the open source movement often fails to produce consumer-friendly software. By using the Provisions to facilitate a predictable user (and developer) experience, Google supplies the critical mass of users that attracts developers to Android, a significant service to the Android market as a whole.

Generativity on platforms is a complex phenomenon

To some extent, the EC’s complaint is rooted in a preference that Android act as a more “generative” platform, one on which third-party developers are relatively better able to reach users of Android devices. But this effort by the EC to undermine the Provisions will ultimately be self-defeating, as it will likely push mobile platform providers to converge on similar, relatively more closed business models that provide less overall consumer choice.

Even assuming that the Provisions somehow prevent third-party app installs or otherwise develop a kind of path-dependency among users such that they never seek out new apps (which the data clearly shows is not happening), focusing on third-party developers as the sole or primary source of innovation on Android is a mistake.

The control that platform operators like Apple and Google exert over their respective ecosystems does not per se create more or less generativity on the platforms. As Gus Hurwitz has noted, “literature and experience amply demonstrate that ‘open’ platforms, or general-purpose technologies generally, can promote growth and increase social welfare, but they also demonstrate that open platforms can also limit growth and decrease welfare.” Conversely, tighter vertical integration (the Apple model) can also produce more innovation than open platforms.

What is important is the balance between control and freedom, and the degree to which third-party developers are able to innovate within the context of a platform’s constraints. The existence of constraints — whether Apple’s more tightly controlled terms or Google’s more generous Provisions — itself facilitates generativity.

In short, it is overly simplistic to view generativity as something that happens at the edges without respect to structural constraints at the core. The interplay between platform and developer is complex and complementary, and needs to be viewed as a dynamic process.

Whither platform diversity?

I love Apple’s devices and I am quite happy living within its walled garden. But I certainly do not believe that Apple’s approach is the only one that makes sense. Yet, in its SO, the EC blesses Apple’s approach as the proper way to manage a mobile ecosystem. It explicitly excluded Apple from its competitive analysis and attacked Google on the basis that it imposed restrictions in the context of licensing its software. Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, its conduct would apparently have been fine. Google, in other words, is now subject to an antitrust investigation precisely because it attempted to develop a more open platform.

With this SO, the EC is basically asserting that Google is anticompetitively bundling without being able to plausibly assert foreclosure (because, again, third-party app installs are easy to do and are easily shown to number in the billions). I’m sure Google doesn’t want to move in the direction of having a more closed system, but the lesson of this case will loom large for tomorrow’s innovators.

In the face of eager antitrust enforcers like those in the EU, the easiest path for future innovators will be to keep everything tightly controlled so as to prevent both fragmentation and misguided regulatory intervention.

The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke against the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, ranging from reading it as an expression of due process concerns about injury-based enforcement of Section 5, on the one hand, to reading it as a decision about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.”  This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’ authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is “likely to cause” harm.

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something being driven by public policy, and not so much like mere enforcement of existing legal norms as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does not well describe how to make that distinction. The opinion is more than a bit muddled and difficult to clearly interpret. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, this is the guidance that the court has given us.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, those rules can hardly provide guidance to private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the data-security law question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court sidestepping this question, and asking whether the FTC sufficiently defined what it was doing in the first place.  

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

AT&T’s merger with Time Warner has led to one of the most important, but least interesting, antitrust trials in recent history.

The merger itself is somewhat unimportant to consumers. It’s about as close to a “pure” vertical merger as we can get in today’s world and would not lead to a measurable increase in prices paid by consumers. At the same time, Judge Richard J. Leon’s decision to approve the merger may have sent a signal regarding how the anticipated Fox-Disney (or Comcast), CVS-Aetna, and Cigna-Express Scripts mergers might proceed.

Judge Leon of the United States District Court in Washington said the U.S. Department of Justice had not proved that AT&T’s acquisition of Time Warner would lead to fewer choices for consumers and higher prices for television and internet services.

As shown in the figure below, there is virtually no overlap in services provided by Time Warner (content creation and broadcasting) and AT&T (content distribution). We say “virtually” because, through its ownership of DirecTV, AT&T has an ownership stake in several channels such as the Game Show Network, the MLB Network, and Root Sports. So, not a “pure” vertical merger, but pretty close. Besides, no one seems to really care about GSN, MLB Network, or Root Sports.

[Infographic: What’s at Stake in the Proposed AT&T - Time Warner Merger (Statista)]

The merger trial was one of the least interesting because the government’s case opposing the merger was so weak.

The Justice Department’s economic expert, University of California, Berkeley, professor Carl Shapiro, argued the merger would harm consumers and competition in three ways:

  1. AT&T would raise the price of content to other cable companies, driving up their costs, which would be passed on to consumers.
  2. Across more than 1,000 subscription television markets, AT&T could benefit by drawing customers away from rival content distributors in the event of a “blackout,” in which the distributor chooses not to carry Time Warner content over a pricing dispute. In addition, AT&T could also use its control over Time Warner content to retain customers by discouraging consumers from switching to providers that don’t carry the Time Warner content. Those two factors, according to Shapiro, could cause rival cable companies to lose between 9 and 14 percent of their subscribers over the long term.
  3. AT&T and competitor Comcast could coordinate to restrict access to popular Time Warner and NBC content in ways that could stifle competition from online cable alternatives such as Dish Network’s Sling TV or Sony’s PlayStation Vue. Even tacit coordination of this type would impair consumer choices, Shapiro opined.

Price increases and blackouts

Shapiro initially indicated the merger would cause consumers to pay an additional $436 million a year, which amounts to an average of 45 cents a month per customer, or a 0.4 percent increase. At trial, he testified the amount might be closer to 27 cents a month and conceded it could be as low as 13 cents a month.
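
For a rough sense of scale, here is a back-of-the-envelope check (my own arithmetic, not Shapiro’s, and assuming the annual and per-customer figures describe the same subscriber base): the annual estimate divided by the monthly per-customer figure implies roughly 80 million subscription-TV customers, and the 0.4 percent figure implies an average bill of a bit over $110 a month.

$$\frac{\$436{,}000{,}000 \text{ per year}}{12 \times \$0.45 \text{ per customer per month}} \approx 80.7 \text{ million customers}, \qquad \frac{\$0.45}{0.004} \approx \$112.50 \text{ per month}$$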

The government’s “blackout” arguments seemed to get lost in the shifting sands of the survey results. Blackouts mattered, according to Shapiro, because “Even though they don’t happen very much, that’s the key to leverage.” His testimony on the potential for price hikes relied heavily on a study commissioned by Charter Communications Inc., which opposes the merger. Stefan Bewley, a director at consulting firm Altman Vilandrie & Co., which produced the study, testified the report predicted Charter would lose 9 percent of its subscribers if it lost access to Turner programming.

Under cross-examination by AT&T’s lawyer, Bewley acknowledged what was described as a “final” version of the study presented to Charter in April last year put the subscriber loss estimate at 5 percent. When confronted with his own emails about the change to 9 percent, Bewley said he agreed to the update after meeting with Charter. At the time of the change from 5 percent to 9 percent, Charter was discussing its opposition to the merger with the Justice Department.

Bewley noted that the change occurred because he saw that some of the figures his team had gathered about Turner networks were outliers, with a range of subscriber losses of 5 percent on the low end and 14 percent on the high end. He indicated his team came up with a “weighted average” of 9 percent.

This 5/9/14 percent distinction seems to be critical to the government’s claim the merger would raise consumer prices. Referring to Shapiro’s analysis, AT&T-Time Warner’s lead counsel, Daniel Petrocelli, asked Bewley: “Are you aware that if he’d used 5 percent there would have been a price increase of zero?” Bewley said he was not aware.

At trial, AT&T and Turner executives testified that they couldn’t credibly threaten to withhold Turner programming from rivals because the networks’ profitability depends on wide distribution. In addition, one of AT&T’s expert witnesses, University of California, Berkeley business and economics professor Michael Katz, testified about what he said were the benefits of AT&T’s offer to use “baseball style” arbitration with rival pay TV distributors if the two sides couldn’t agree on what fees to pay for Time Warner’s Turner networks. With baseball style arbitration, both sides submit their final offer to an arbitrator, who determines which of the two offers is most appropriate.

Under the terms of the arbitration offer, AT&T has agreed not to black out its networks for the duration of negotiations with distributors. Dennis Carlton, an economics professor at the University of Chicago, said Shapiro’s model was unreliable because he didn’t account for that. Shapiro conceded he did not factor that into his study, saying that he would need to use an entirely different model to study how the arbitration agreement would affect the merger.

Coordination with Comcast/NBCUniversal

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

At trial, the Justice Department seemed to abandon any claim that the merged company would unilaterally restrict access to online “virtual MVPDs.” The government’s case, made by its expert Shapiro, ended up being there would be a “risk” and “danger” that AT&T and Comcast would “coordinate” to withhold programming in a way to harm emerging online multichannel distributors. However, under cross examination, he conceded that his opinions were not based on a “quantifiable model.” Shapiro testified that he had no opinion whether the odds of such coordination would be greater than 1 percent.

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content. Emerging online multichannel distributors pitch their offerings as “skinny bundles” with a limited selection of the more popular channels. By forcing these providers to take more channels, the government argued, the skinny-bundle business model would be undermined, in a version of raising rivals’ costs. This theory did not get much play at trial, but it seems to suggest the government was trying to have its cake and eat it, too.

Except in this case, as with much of the government’s case in this matter, the cake was not completely baked.

 

A few weeks ago I posted a preliminary assessment of the relative antitrust risk of a Comcast vs. Disney purchase of 21st Century Fox assets. (Also available in PDF as an ICLE Issue Brief, here.) On the eve of Judge Leon’s decision in the AT&T/Time Warner merger case, it seems worthwhile to supplement that assessment by calling attention to Assistant Attorney General Makan Delrahim’s remarks at The Deal’s Corporate Governance Conference last week. Somehow these remarks seem to have passed with little notice, but, given their timing, they deserve quite a bit more attention.

In brief, Delrahim spent virtually the entirety of his short remarks making and remaking the fundamental point at the center of my own assessment of the antitrust risk of a possible Comcast/Fox deal: The DOJ’s challenge of the AT&T/Time Warner merger tells you nothing about the likelihood that the agency would challenge a Comcast/Fox merger.

To begin, in my earlier assessment I pointed out that most vertical mergers are approved by antitrust enforcers, and I quoted Bruce Hoffman, Director of the FTC’s Bureau of Competition, who noted that:

[V]ertical merger enforcement is still a small part of our merger workload….

* * *

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

I may not have made it very clear in that post, but, of course, most horizontal mergers are approved by enforcers, as well.

Well, now we have the head of the DOJ Antitrust Division making the same point:

I’d say 95 or 96 percent of mergers — horizontal or vertical — are cleared — routinely…. Most mergers — horizontal or vertical — are procompetitive, or have no adverse effect.

Delrahim reinforced the point in an interview with The Street in advance of his remarks. Asked by a reporter, “what are your concerns with vertical mergers?,” Delrahim quickly corrected the questioner: “Well, I don’t have any concerns with most vertical mergers….”

But Delrahim went even further, noting that nothing about the Division’s approach to vertical mergers has changed since the AT&T/Time Warner case was brought — despite the efforts of some reporters to push a different narrative:

I understand that some journalists and observers have recently expressed concern that the Antitrust Division no longer believes that vertical mergers can be efficient and beneficial to competition and consumers. Some point to our recent decision to challenge some aspects of the AT&T/Time Warner merger as a supposed bellwether for a new vertical approach. Rest assured: These concerns are misplaced…. We have long recognized that vertical integration can and does generate efficiencies that benefit consumers. Indeed, most vertical mergers are procompetitive or competitively neutral. The same is of course true in horizontal transactions. To the extent that any recent action points to a closer review of vertical mergers, it’s not new…. [But,] to reiterate, our approach to vertical mergers has not changed, and our recent enforcement efforts are consistent with the Division’s long-standing, bipartisan approach to analyzing such mergers. We’ll continue to recognize that vertical mergers, in general, can yield significant economic efficiencies and benefit to competition.

Delrahim concluded his remarks by criticizing those who assume that the agency’s future enforcement decisions can be inferred from past cases with different facts, stressing that the agency employs an evidence-based, case-by-case approach to merger review:

Lumping all vertical transactions under the same umbrella, by comparison, obscures the reality that we conduct a vigorous investigation, aided by over 50 PhD economists in these markets, to make sure that we as lawyers don’t steer too far without the benefits of their views in each of these instances.

Arguably this was a rebuke directed at those, like Disney and Fox’s board, who are quick to ascribe increased regulatory risk to a Comcast/Fox tie-up because the DOJ challenged the AT&T/Time Warner merger. Recall that, in its proxy statement, the Fox board explained that it rejected Comcast’s earlier bid in favor of Disney’s in part because of “the regulatory risks presented by the DOJ’s unanticipated opposition to the proposed vertical integration of the AT&T / Time Warner transaction.”

I’ll likely have more to add once the AT&T/Time Warner decision is out. But in the meantime (and with apologies to Mark Twain), the takeaway is clear: Reports of the death of vertical mergers have been greatly exaggerated.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which an enormous amount of content—from books and software to movies and music—is being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be abused or misapplied. It is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative solution to piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over web sites that are “blatantly, overwhelmingly, or structurally engaged in piracy.”  The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block sites in question. The CRTC would retain ultimate authority regarding whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users that upload infringing content. In just one relatively large case, the cost of complying with these requests was estimated at CAD 8.25 million.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

This failure, on the other hand, highlights one of the greatest points commending the FairPlay proposal: the focus of enforcement shifts away from casually infringing users and directly onto the operators of sites that engage in widespread infringement. As a result, one of the criticisms of Canada’s current “notice and notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is completely bypassed.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP and property generally represent a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

The same reasoning supplies the justification — and the tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harms may occur, because some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect, and (2) when harms do occur, is there a sufficiently open and accessible process available whereby affected parties (and interested third parties) can rightly criticize and improve the system?

On both counts, the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will be establishing a reasonable and effective remedy to address violations of individuals’ property, the ownership of which is considered broadly legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program that some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, according to even the critics of the FairPlay proposal, site blocking produces a measurable reduction in piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of The Pirate Bay domains had reduced piracy by nearly 25%:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.
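
For what it’s worth, the coalition’s “nearly 25%” figure appears to come from adding the two reported ranges (my arithmetic, not the coalition’s; note that it measures the share of illegal downloaders who stopped or cut back, not the change in the volume of infringing downloads):

$$8.0\% + 14.5\% = 22.5\% \qquad\text{to}\qquad 9.3\% + 15.3\% = 24.6\%$$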

The proposal stands to reduce the costs of combating piracy, as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted the costs for even run-of-the-mill suits essentially price enforcement of copyrights out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that is easily deployable against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this  — but that surely can’t be an indictment against lawmaking as a whole. If allowing a form of prohibition for category A is appropriate, but the same kind of prohibition is inappropriate for category B, then either we assume lawmakers are capable of differentiating between category A and category B, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to convincingly  make that case (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

Weekend Reads

Eric Fruits — 8 June 2018

Innovation dies in darkness. Well, actually, it thrives in the light, according to this new research:

We find that after a patent library opens, local patenting increases by 17% relative to control regions that have Federal Depository Libraries. … [T]he library boost ceases to be present after the introduction of the Internet. We find that library opening is also associated with an increase in local business formation and job creation [especially for small business -ed.], which suggests that the impact of libraries is not limited to patenting outcomes.


Don’t drink the Kool-Aid of bad data. Have a SPRITE. From the article published by the self-described “data thugs”:

Scientific publications have not traditionally been accompanied by data, either during the peer review process or when published. Concern has arisen that the literature in many fields may contain inaccuracies or errors that cannot be detected without inspecting the original data. Here, we introduce SPRITE (Sample Parameter Reconstruction via Iterative TEchniques), a heuristic method for reconstructing plausible samples from descriptive statistics of granular data, allowing reviewers, editors, readers, and future researchers to gain insights into the possible distributions of item values in the original data set.

Gig economy, it’s a good thing: 6.9% of all workers are independent contractors; 79% of them prefer their arrangement over a traditional job.

Gig economy, it’s a bad thing. Maybe.

[C]ensus divisions with relatively weak wage inflation also tend to have more “low-wage” informal FTE—that is, more hours of informal work performed at a wage that is less than formal pay.

Broetry. It’s a LinkedIn thing. I don’t get it.

 

 

A recent exchange between Chris Walker and Philip Hamburger about Walker’s ongoing empirical work on the Chevron doctrine (the idea that judges must defer to reasonable agency interpretations of ambiguous statutes) gives me a long-sought opportunity to discuss what I view as the greatest practical problem with the Chevron doctrine: it increases both politicization and polarization of law and policy. In the interest of being provocative, I will frame the discussion below by saying that both Walker and Hamburger are wrong (though actually I believe both are quite correct in their respective critiques). In particular, I argue that Walker is wrong that Chevron decreases politicization (it actually increases it, his empirics notwithstanding); and I argue Hamburger is wrong that judicial independence is, on its own, a virtue that demands preservation. Rather, I argue, Chevron increases overall politicization across the government; and judicial independence can and should play an important role in checking legislative abdication of its role as a politically-accountable legislature in a way that would moderate that overall politicization.

Walker, along with co-authors Kent Barnett and Christina Boyd, has done some of the most important and interesting work on Chevron in recent years, empirically studying how the Chevron doctrine has affected judicial behavior (see here and here) as well as that of agencies (and, I would argue, through them the Executive) (see here). But the more important question, in my mind, is how it affects the behavior of Congress. (Walker has explored this somewhat in his own work, albeit focusing less on Chevron than on how the role agencies play in the legislative process implicitly transfers Congress’s legislative functions to the Executive).

My intuition is that Chevron dramatically exacerbates Congress’s worst tendencies, encouraging Congress to push its legislative functions to the executive and to do so in a way that increases the politicization and polarization of American law and policy. I fear that Chevron effectively allows, and indeed encourages, Congress to abdicate its role as the most politically-accountable branch by deferring politically difficult questions to agencies in ambiguous terms.

One of, and possibly the, best ways to remedy this situation is to reestablish the role of judge as independent decisionmaker, as Hamburger argues. But the virtue of judicial independence is not endogenous to the judiciary. Rather, judicial independence has an instrumental virtue, at least in the context of Chevron. Where Congress has problematically abdicated its role as a politically-accountable decisionmaker by deferring important political decisions to the executive, judicial refusal to defer to executive and agency interpretations of ambiguous statutes can force Congress to remedy problematic ambiguities. This, in turn, can return the responsibility for making politically-important decisions to the most politically-accountable branch, as envisioned by the Constitution’s framers.

A refresher on the Chevron debate

Chevron is one of the defining doctrines of administrative law, both as a central concept and focal debate. It stands generally for the proposition that when Congress gives agencies ambiguous statutory instructions, it falls to the agencies, not the courts, to resolve those ambiguities. Thus, if a statute is ambiguous (the question at “step one” of the standard Chevron analysis) and the agency offers a reasonable interpretation of that ambiguity (“step two”), courts are to defer to the agency’s interpretation of the statute instead of supplying their own.

This judicially-crafted doctrine of deference is typically justified on several grounds. For instance, agencies generally have greater subject-matter expertise than courts so are more likely to offer substantively better constructions of ambiguous statutes. They have more resources that they can dedicate to evaluating alternative constructions. They generally have a longer history of implementing relevant Congressional instructions so are more likely attuned to Congressional intent – both of the statute’s enacting and present Congresses. And they are subject to more direct Congressional oversight in their day-to-day operations and exercise of statutory authority than the courts so are more likely concerned with and responsive to Congressional direction.

Chief among the justifications for Chevron deference is, as Walker says, “the need to reserve political (or policy) judgments for the more politically accountable agencies.” This is at core a separation-of-powers justification: the legislative process is fundamentally a political process, so the Constitution assigns responsibility for it to the most politically-accountable branch (the legislature) instead of the least politically-accountable branch (the judiciary). In turn, the act of interpreting statutory ambiguity is an inherently legislative process – the underlying theory being that Congress intended to leave such ambiguity in the statute in order to empower the agency to interpret it in a quasi-legislative manner. Thus, under this view, courts should defer both to the Congressional intent that the agency be empowered to interpret its statute (and, should this prove problematic, it is up to Congress to change the statute or to face political ramifications) and to the agency’s interpretation of that statute, because agencies, like Congress, are more politically accountable than the courts.

Chevron has always been an intensively studied and debated doctrine. This debate has grown more heated in recent years, to the point that there is regularly scholarly discussion about whether Chevron should be repealed or narrowed and what would replace it if it were somehow curtailed – and discussion of the ongoing vitality of Chevron has entered into Supreme Court opinions and the appointments process with increasing frequency. These debates generally focus on a few issues. A first issue is that Chevron amounts to a transfer of the legislature’s Constitutional powers and responsibilities over creating the law to the executive, where the law ordinarily is only meant to be carried out. The underlying concern is that this has contributed to the growth of the executive’s power relative to the legislature’s. A second, related, issue is that Chevron contributes to the (over)empowerment of independent agencies – agencies that are already out of favor with many of Chevron’s critics as Constitutionally-infirm entities whose already-specious power is dramatically increased when Chevron limits the judiciary’s ability to check their use of already-broad Congressionally-delegated authority.

A third concern about Chevron, following on these first two, is that it strips the judiciary of its role as independent arbiter of judicial questions. That is, it has historically been the purview of judges to answer statutory ambiguities and fill in legislative interstices.

Chevron is also a focal point for more generalized concerns about the power of the modern administrative state. In this context, Chevron stands as a representative of a broader class of cases – State Farm, Auer, Seminole Rock, Fox v. FCC, and the like – that have been criticized as centralizing legislative, executive, and judicial powers in agencies: allowing Congress to abdicate its role as politically-accountable legislator, displacing the judiciary’s role in interpreting the law, and raising due process concerns for those subject to rules promulgated by federal agencies.

Walker and his co-authors have empirically explored the effects of Chevron in recent years, using robust surveys of federal agencies and judicial decisions to understand how the doctrine has affected the work of agencies and the courts. His most recent work (with Kent Barnett and Christina Boyd) has explored how Chevron affects judicial decisionmaking. Framing the question by explaining that “Chevron deference strives to remove politics from judicial decisionmaking,” they ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?” They find that, empirically speaking, “the Chevron Court’s objective to reduce partisan judicial decision-making has been quite effective.” By instructing judges to defer to the political judgments (or just statutory interpretations) of agencies, judges are less political in their own decisionmaking.

Hamburger responds to this finding somewhat dismissively – and, indeed, the finding is almost tautological: “of course, judges disagree less when the Supreme Court bars them from exercising their independent judgment about what the law is.” (While a fair critique, I would temper it by arguing that it is nonetheless an important empirical finding – empirics that confirm important theory are as important as empirics that refute it, and are too often dismissed.)

Rather than focus on concerns about politicized decisionmaking by judges, Hamburger focuses instead on the importance of judicial independence – on it being “emphatically the duty of the Judicial Department to say what the law is” (quoting Marbury v. Madison). He reframes Walker’s results, arguing that “deference” to agencies is really “bias” in favor of the executive. “Rather than reveal diminished politicization, Walker’s numbers provide strong evidence of diminished judicial independence and even of institutionalized judicial bias.”

So which is it? Does Chevron reduce bias by de-politicizing judicial decisionmaking? Or does it introduce new bias in favor of the (inherently political) executive? The answer is probably that it does both. The more important answer, however, is that neither is the right question to ask.

What’s the correct measure of politicization? (or, You get what you measure)

Walker frames his study of the effects of Chevron on judicial decisionmaking by explaining that “Chevron deference strives to remove politics from judicial decisionmaking. Such deference to the political branches has long been a bedrock principle for at least some judicial conservatives.” Based on this understanding, his project is to ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?”

This framing, that one of Chevron’s goals is to remove politics from judicial decisionmaking, is not wrong. But this goal may be more accurately stated as being to prevent the judiciary from encroaching upon the political purposes assigned to the executive and legislative branches. This restatement offers an important change in focus. It emphasizes the concern about politicizing judicial decisionmaking as a separation of powers issue. This stands in contrast to the concern that, on consequentialist grounds, judges should not make politicized decisions – that is, that judges should avoid political decisions because doing so leads to substantively worse outcomes.

It is of course true that, as unelected officials with lifetime appointments, judges are the least politically accountable to the polity of any government officials. Judges’ decisions, therefore, can reasonably be expected to be less representative of, or responsive to, the concerns of the voting public than decisions of other government officials. But not all political decisions need to be directly politically accountable in order to be effectively politically accountable. A judicial interpretation of an ambiguous law, for instance, can be interpreted as a request, or even a demand, that Congress be held to political account. And where Congress is failing to perform its constitutionally-defined role as a politically-accountable decisionmaker, it may do less harm to the separation of powers for the judiciary to make political decisions that force politically-accountable responses by Congress than for the judiciary to respect its constitutional role while the Congress ignores its role.

Before going too far down this road, I should pause to label the reframing of the debate that I have impliedly proposed. To my mind, the question isn’t whether Chevron reduces political decisionmaking by judges; the question is how Chevron affects the politicization of, and ultimately accountability to the people for, the law. Critically, there is no “conservation of politicization” principle. Institutional design matters. One could imagine a model of government where Congress exercises very direct oversight over what the law is and how it is implemented, with frequent elections and a Constitutional prohibition on all but the most express and limited forms of delegation. One can also imagine a more complicated form of government in which responsibilities for making law, executing law, and interpreting law, are spread across multiple branches (possibly including myriad agencies governed by rules that even many members of those agencies do not understand). And one can reasonably expect greater politicization of decisions in the latter compared to the former – because there are more opportunities for saying that the responsibility for any decision lies with someone else (and therefore for politicization) in the latter than in the “the buck stops here” model of the former.

In the common-law tradition, judges exercised an important degree of independence because their job was, necessarily and largely, to “say what the law is.” For better or worse, we no longer live in a world where judges are expected to routinely exercise that level of discretion, and therefore to have that level of independence. Nor do I believe that “independence” is necessarily or inherently a criterion for the judiciary, at least in principle. I therefore somewhat disagree with Hamburger’s assertion that Chevron necessarily amounts to a problematic diminution in judicial independence.

Again, I return to a consequentialist understanding of the purposes of judicial independence. In my mind, we should consider the need for judicial independence in terms of whether “independent” judicial decisionmaking tends to lead to better or worse social outcomes. And here I do find myself sympathetic to Hamburger’s concerns about judicial independence. The judiciary is intended to serve as a check on the other branches. Hamburger’s concern about judicial independence is, in my mind, driven by an overwhelmingly correct intuition that the structure envisioned by the Constitution is one in which the independence of judges is an important check on the other branches. With respect to the Congress, this means, in part, ensuring that Congress is held to political account when it does legislative tasks poorly or fails to do them at all.

The courts abdicate this role when they allow agencies to save poorly drafted statutes through interpretation of ambiguity.

Judicial independence moderates politicization

Hamburger tells us that “Judges (and academics) need to wrestle with the realities of how Chevron bias and other administrative power is rapidly delegitimizing our government and creating a profound alienation.” Huzzah. Amen. I couldn’t agree more. Preach! Hear-hear!

Allow me to present my personal theory of how Chevron affects our political discourse. In the vernacular, I call this Chevron Step Three. At Step Three, Congress corrects any mistakes made by the executive or independent agencies in implementing the law or made by the courts in interpreting it. The subtle thing about Step Three is that it doesn’t exist – and, knowing this, Congress never bothers with the politically costly and practically difficult process of clarifying legislation.

To the contrary, Chevron encourages the legislature expressly not to legislate. The more expedient approach for a legislator who disagrees with a Chevron-backed agency action is to campaign on the disagreement – that is, to politicize it. If the EPA interprets the Clean Air Act too broadly, we need to retake the White House to get a new administrator in there to straighten out the EPA’s interpretation of the law. If the FCC interprets the Communications Act too narrowly, we need to retake the White House to change the chair so that we can straighten out that mess! And on the other side, we need to keep the White House so that we can protect these right-thinking agency interpretations from reversal by the loons on the other side that want to throw out all of our accomplishments. The campaign slogans write themselves.

So long as most agencies’ governing statutes are broad enough that those agencies can keep the ship of state afloat, even if drifting rudderless, legislators have little incentive to turn inward to engage in the business of government with their legislative peers. Rather, they are freed to turn outward towards their next campaign, vilifying or deifying the administrative decisions of the current government as best suits their electoral prospects.

The sharp-eyed observer will note that I’ve added a piece to the Chevron puzzle: the process described above assumes that a new administration can come in after an election and simply rewrite all of the rules adopted by the previous administration. Not to put too fine a point on the matter, but this is exactly what administrative law allows (see Fox v. FCC and State Farm). The underlying logic, which is really nothing more than an expansion of Chevron, is that statutory ambiguity delegates to agencies a “policy space” within which they are free to operate. So long as agency action stays within that space – which often allows for diametrically-opposed substantive interpretations – the courts say that it is up to Congress, not the Judiciary, to provide course corrections. Anything else would amount to politically unaccountable judges substituting their policy judgments (that is, acting independently) for those of politically-accountable legislators and administrators.

In other words, the politicization of law seen in our current political moment is largely a function of deference combined with a lack of stare decisis. A virtue of stare decisis is that it forces Congress to act to directly address politically undesirable opinions. Because agencies are not bound by stare decisis, an alternative, and politically preferable, way for Congress to remedy problematic agency decisions is to politicize the issue – instead of addressing the substantive policy issue through legislation, individual members of Congress can campaign on it. (Regular readers of this blog will be familiar with one contemporary example of this: the recent net neutrality CRA vote, which is widely recognized as having very little chance of ultimate success but is being championed by its proponents as a way to influence the 2018 elections.) This is more directly aligned with the individual member of Congress’s own incentives: by keeping and placing more members of her party in Congress, her party will be able to control the leadership of the agency, which in turn will control the shape of that agency’s policy. In other words, instead of channeling the attention of individual Congressional actors inwards to work together to develop law and policy, it channels that attention outwards towards campaigning on the ills and evils of the opposing administration and party rather than on the virtues of their own party.

The virtue of judicial independence, of judges saying what they think the law is – or even what they think the law should be – is that it forces a politically-accountable decision. Congress can either agree, or disagree; but Congress must do something. Merely waiting for the next administration to come along will not be sufficient to alter the course set by the judicial interpretation of the law. Where Congress has abdicated its responsibility to make politically-accountable decisions by deferring those decisions to the executive or agencies, the political-accountability justification for Chevron deference fails. In such cases, the better course for the courts may well be to enforce Congress’s role under the separation of powers by refusing deference and returning the question to Congress.

 

Weekend reads

Eric Fruits — 1 June 2018

Good government dies in the darkness. This article is getting a lot of attention on Wonk Twitter and what’s left of the blogosphere. From the abstract:

We examine the effect of local newspaper closures on public finance for local governments. Following a newspaper closure, we find municipal borrowing costs increase by 5 to 11 basis points in the long run …. [T]hese results are not being driven by deteriorating local economic conditions. The loss of monitoring that results from newspaper closures is associated with increased government inefficiencies, including higher likelihoods of costly advance refundings and negotiated issues, and higher government wages, employees, and tax revenues.

What the hell happened at GE? This guy blames Jeff Immelt’s buy-high/sell-low strategy. I blame Jack Welch.

Academic writing is terrible. Science journalist Anna Clemens wants to change that. (Plus she quotes one of my grad school professors, Paul Zak.) Here’s what Clemens says about turning your research into a story:

But – just as with any Hollywood success in the box office – your paper will not become a page-turner, if you don’t introduce an element of tension now. Your readers want to know what problem you are solving here. So, tell them what gap in the literature needs to be filled, why method X isn’t good enough to solve Y, or what still isn’t known about mechanism Z. To introduce the tension, words such as “however”, “despite”, “nevertheless”, “but”, “although” are your best friends. But don’t fool your readers with general statements, phrase the problem precisely.

Write for the busy reader. While you’re writing your next book, paper, or op-ed, check out what the readability robots think of your writing.
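
For the curious, here is a rough sense of what those robots do under the hood. The snippet below is a minimal, illustrative Python sketch of one common metric, the Flesch Reading Ease score, using a deliberately naive syllable counter; the actual tools are more sophisticated, and none of the specifics here come from Clemens or the linked piece.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease(
    "Write for the busy reader. Short sentences and plain words score well."
), 1))
```

Roughly speaking, scores in the 60s and above read as plain English, while dense academic prose often lands below 30.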

They tell me I’ll get more hits if I mention Bitcoin and blockchain. Um, OK. Here goes. The Seattle Times reports on the mind-blowing amount of power cryptocurrency miners are trying to buy in the electricity-rich Pacific Northwest:

In one case this winter, miners from China landed their private jet at the local airport, drove a rental car to the visitor center at the Rocky Reach Dam, just north of Wenatchee, and, according to Chelan County PUD officials, politely asked to see the “dam master because we want to buy some electricity.”

You will never find a more wretched hive of scum and villainy. The Wild West of regulating cryptocurrencies:

The government must show that the trader intended to artificially affect the price. The Federal District Court in Manhattan once explained that “entering into a legitimate transaction knowing that it will distort the market is not manipulation — only intent, not knowledge, can transform a legitimate transaction into manipulation.”

Tyler Cowen on what’s wrong with the Internet. Hint: It’s you.

And if you hate Twitter, it is your fault for following the wrong people (try hating yourself instead!).  Follow experts and people of substance, not people who seek to lower the status of others.

If that fails, “mute words” is your friend. Muting a few terms made my Twitter experience significantly more enjoyable and informative.

 


At this point, only the most masochistic and cynical among DC’s policy elite actually desire for the net neutrality conflict to continue. And yet, despite claims that net neutrality principles are critical to protecting consumers, passage of the current Congressional Review Act (“CRA”) disapproval resolution in Congress would undermine consumer protection and promise only to drag out the fight even longer.

The CRA resolution is primarily intended to roll back the FCC’s re-re-classification of broadband as a Title I service under the Communications Act in the Restoring Internet Freedom Order (“RIFO”). The CRA allows Congress to vote to repeal rules recently adopted by federal agencies; upon a successful CRA vote, the rules are rescinded and the agency is prohibited from adopting substantially similar rules in the future.

But, as TechFreedom has noted, it’s not completely clear that a CRA resolution aimed at a regulatory classification decision will work quite the way Congress intends it to; it could instead just trigger more litigation cycles, largely because it is unclear what parts of the RIFO are actually “rules” subject to the CRA. Harold Feld has written a critique of TechFreedom’s position, arguing, in effect, that of course the RIFO is a rule; TechFreedom responded with a pretty devastating rejoinder.

But this exchange really demonstrates TechFreedom’s central argument: It is sufficiently unclear how or whether the CRA will apply to the various provisions of the RIFO that the only things the CRA is guaranteed to do are 1) to strip consumers of certain important protections — taking away the FCC’s transparency requirements for ISPs and imperiling privacy protections currently ensured by the FTC — and 2) to prolong the already interminable litigation and political back-and-forth over net neutrality.

The CRA is political theater

The CRA resolution effort is not about good Internet regulatory policy; rather, it’s pure political opportunism ahead of the midterms. Democrats have recognized net neutrality as a good wedge issue because of its low political opportunity cost. The highest-impact costs of over-regulating broadband through classification decisions are hard to see: Rather than bad things happening, the costs arrive in the form of good things not happening. Eventually those costs work their way to customers through higher access prices or less service — especially in rural areas most in need of it — but even these effects take time to show up and, when they do, are difficult to pin on any particular net neutrality decision, including the CRA resolution. Thus, measured in electoral time scales, prolonging net neutrality as a painful political issue — even though actual resolution of the process by legislation would be the sensible course — offers tremendous upside for political challengers and little cost.  

The truth is, there is widespread agreement that net neutrality issues need to be addressed by Congress: A constant back and forth between the FCC (and across its own administrations) and the courts runs counter to the interests of consumers, broadband companies, and edge providers alike. Whatever that legislative solution ends up looking like, it would almost certainly be an improvement over the unstable status quo.

There have been various proposals from Republicans and Democrats — many of which contain provisions that are likely bad ideas — but in the end, a bill passed with bipartisan input should have the virtue of capturing an open public debate on the issue. Legislation won’t be perfect, but it will be tremendously better than the advocacy playground that net neutrality has become.

What would the CRA accomplish?

Regardless of what one thinks of the substantive merits of TechFreedom’s arguments on the CRA and the arcana of legislative language distinguishing between agency “rules” and “orders,” if the CRA resolution is successful (a prospect that is a bit more likely following the Senate vote to pass it), what follows is pretty clear.

The only certain result of the CRA resolution becoming law would be to void the transparency provisions that the FCC introduced in the RIFO — the one part of the Order that is pretty clearly a “rule” subject to CRA review — and it would disable the FCC from offering another transparency rule in its place. Everything else is going to end up — surprise! — before the courts, which would serve only to keep the issues surrounding net neutrality unsettled for another several years. (A cynic might suggest that this is, in fact, the goal of net neutrality proponents, for whom net neutrality has had and continues to have important political valence.)

And if the CRA resolution withstands the inevitable legal challenge to its rescission of the rest of the RIFO, it would also (once again) remove broadband privacy from the FTC’s purview, placing it back in the lap of the FCC — which is already prohibited from adopting privacy rules following last year’s successful CRA resolution undoing the Wheeler FCC’s broadband privacy regulations. The result is that we could be left without any broadband privacy regulator at all — presumably not the outcome strong net neutrality proponents want — but they persevere nonetheless.

Moreover, TechFreedom’s argument that the CRA may not apply to all parts of the RIFO could have a major effect on whether or not Congress is even accomplishing anything at all (other than scoring political points) with this vote. It could be the case that the CRA applies only to “rules” and not “orders,” or it could be the case that even if the CRA does apply to the RIFO, its passage would not force the FCC to revive the abrogated 2015 Open Internet Order, as proponents of the CRA vote hope.

Whatever one thinks of these arguments, however, they are based on a sound reading of the law and present substantial enough questions to sustain lengthy court challenges. Thus, far from a CRA vote actually putting to rest the net neutrality issue, it is likely to spawn litigation that will drag out the classification uncertainty question for at least another year (and probably more, with appeals).

Stop playing net neutrality games — they aren’t fun

Congress needs to stop trying to score easy political points on this issue while avoiding the hard and divisive work of reaching a compromise on actual net neutrality legislation. Despite how the CRA is presented in the popular media, a CRA vote is the furthest thing from a simple vote for net neutrality: It’s a political calculation to avoid accountability.

I had the pleasure last month of hosting the first of a new annual roundtable discussion series on closing the rural digital divide through the University of Nebraska’s Space, Cyber, and Telecom Law Program. The purpose of the roundtable was to convene a diverse group of stakeholders — from farmers to federal regulators; from small municipal ISPs to billion dollar app developers — for a discussion of the on-the-ground reality of closing the rural digital divide.

The impetus behind the roundtable was, quite simply, that in my five years living in Nebraska I have consistently found that the discussions that we have here about the digital divide in rural America are wholly unlike those that the federally-focused policy crowd has back in DC. Every conversation I have with rural stakeholders further reinforces my belief that those of us who approach the rural digital divide from the “DC perspective” fail to appreciate the challenges that rural America faces or the drive, innovation, and resourcefulness that rural stakeholders bring to the issue when DC isn’t looking. So I wanted to bring these disparate groups together to see what was driving this disconnect, and what to do about it.

The unfortunate reality of the rural digital divide is that it is an existential concern for much of America. At the same time, the positive news is that closing this divide has become an all-hands-on-deck effort for stakeholders in rural America, one that defies caricatured political, technological, and industry divides. I have never seen as much agreement and goodwill among stakeholders in any telecom community as when I speak to rural stakeholders about digital divides. I am far from an expert in rural broadband issues — and I don’t mean to hold myself out as one — but as I have engaged with those who are, I am increasingly convinced that there are far more and far better ideas about closing the rural digital divide to be found outside the beltway than within.

The practical reality is that most policy discussions about the rural digital divide over the past decade have been largely irrelevant to the realities on the ground: The legal and policy frameworks focus on the wrong things, and participants in these discussions at the federal level rarely understand the challenges that define the rural divide. As a result, stakeholders almost always fall back on advocating stale, entrenched viewpoints that have little relevance to the on-the-ground needs. (To their credit, both Chairman Pai and Commissioner Carr have demonstrated a longstanding interest in understanding the rural digital divide — an interest that is recognized and appreciated by almost every rural stakeholder I speak to.)

Framing Things Wrong

It is important to begin by recognizing that contemporary discussion about the digital divide is framed in terms of, and addressed alongside, longstanding federal Universal Service policy. This policy, which has its roots in the 20th century project of ensuring that all Americans had access to basic telephone service, is enshrined in the first words of the Communications Act of 1934. It has not significantly evolved from its origins in the analog telephone system — and that’s a problem.

A brief history of Universal Service

The Communications Act established the FCC

for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States … a rapid, efficient, Nation-wide, and world-wide wire and radio communication service ….

The historic goal of “universal service” has been to ensure that anyone in the country is able to connect to the public switched telephone network. In the telephone age, that network provided only one primary last-mile service: transmitting basic voice communications from the customer’s telephone to the carrier’s switch. Once at the switch various other services could be offered — but providing them didn’t require more than a basic analog voice circuit to the customer’s home.

For most of the 20th century, this form of universal service was ensured by fiat and cost recovery. Regulated telephone carriers (that is, primarily, the Bell operating companies under the umbrella of AT&T) were required by the FCC to provide service to all comers, at published rates, no matter the cost of providing that service. In exchange, the carriers were allowed to recover the cost of providing service to high-cost areas through the regulated rates charged to all customers. That is, the cost of ensuring universal service was spread across and subsidized by the entire rate base.

This system fell apart following the break-up of AT&T in the 1980s. The separation of long distance from local exchange service meant that the main form of cross subsidy — from long distance to local callers — could no longer be handled implicitly. Moreover, as competitive exchange services began entering the market, they tended to compete first, and most, over the high-revenue customers who had supported the rate base. To accommodate these changes, the FCC transitioned from a model of implicit cross-subsidies to one of explicit cross-subsidies, introducing long distance access charges and termination fees that were regulated to ensure money continued to flow to support local exchange carriers’ costs of providing services to high-cost users.

The 1996 Telecom Act forced even more dramatic change. The goal of the 1996 Telecom Act was to introduce competition throughout the telecom ecosystem — but the traditional cross-subsidy model doesn’t work in a competitive market. So the 1996 Telecom Act further evolved the FCC’s universal service mechanism, establishing the Universal Service Fund (USF), funded by fees charged to all telecommunications carriers, which would be apportioned to cover the costs incurred by eligible telecommunications carriers in providing high-cost (and other “universal”) services.

The problematic framing of Universal Service

For present purposes, we need not delve into these mechanisms. Rather, the very point of this post is that the interminable debates about these mechanisms — who pays into the USF and how much; who gets paid out of the fund and how much; and what services and technologies the fund covers — simply don’t match the policy challenges of closing the digital divide.

What the 1996 Telecom Act does offer is a statement of the purposes of Universal Service. In 47 USC 254(b)(3), the Act states the purpose of ensuring “Access in rural and high cost areas”:

Consumers in all regions of the Nation, including low-income consumers and those in rural, insular, and high cost areas, should have access to telecommunications and information services … that are reasonably comparable to those services provided in urban areas ….

This is a problematic framing. (I would actually call it patently offensive…). It is a framing that made sense in the telephone era, when ensuring last-mile service meant providing only basic voice telephone service. In that era, having any service meant having all service, and the primary obstacles to overcome were the high cost of service to remote areas and the lower revenues expected from lower-income areas. But its implicit suggestion is that the goal of federal policy should be to make rural America look like urban America.

Today universal service, at least from the perspective of closing the digital divide, means something different, however. The technological needs of rural America are different than those of urban America; the technological needs of poor and lower-income America are different than those of rich America. Framing the goal in terms of making sure rural and lower-income America have access to the same services as urban and wealthy America is, by definition, not responsive to (or respectful of) the needs of those who are on the wrong side of one of this country’s many digital divides. Indeed, that goal almost certainly distracts from and misallocates resources that could be better leveraged towards closing these divides.

The Demands of Rural Broadband

Rural broadband needs are simultaneously both more and less demanding than the services we typically focus on when discussing universal service. The services that we fund, and the way that we approach closing digital divides, need to be based in the first instance on the actual needs of the community that connectivity is meant to serve. Take just two of the prototypical examples: precision and automated farming, and telemedicine.

Assessing rural broadband needs

Precision agriculture requires different networks than does watching Netflix, web surfing, or playing video games. Farms with hundreds or thousands of sensors and other devices per acre can put significant load on networks — but not in terms of bandwidth. The load is instead measured in terms of packets and connections per second. Provisioning networks to handle lots of small packets is very different from provisioning them to handle other, more typical (to the DC crowd) use cases.
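
To make that concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an assumption of mine for illustration (the device density, packet size, and reporting interval are not drawn from the roundtable or from any particular deployment), but it shows why a sensor-dense operation can hammer a network with packets while barely registering in bandwidth terms.

```python
# Every figure here is an illustrative assumption, not measured data.
ACRES = 100
SENSORS_PER_ACRE = 500        # assumed device density
PACKET_BYTES = 100            # assumed size of one telemetry report
REPORT_INTERVAL_SECONDS = 10  # assumed reporting interval per device

devices = ACRES * SENSORS_PER_ACRE
packets_per_second = devices / REPORT_INTERVAL_SECONDS
throughput_mbps = packets_per_second * PACKET_BYTES * 8 / 1e6

print(f"{devices:,} devices -> {packets_per_second:,.0f} packets/s, "
      f"but only ~{throughput_mbps:.1f} Mbps of aggregate bandwidth")
```

Four megabits per second is roughly one HD video stream, but sustaining thousands of packets (and, depending on the protocol, connections) per second is a very different provisioning problem for last-mile gear.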

On the other end of the agricultural spectrum, many farms don’t own their own combines. Combines cost upwards of a million dollars. One modern combine is sufficient to tend to several hundred acres in a given farming season. It is common for farmers to hire someone who owns a combine to service their fields, and a single combine service may operate on a dozen farms during harvest season. Prior to operation, modern precision systems need to download a great deal of GIS, mapping, weather, crop, and other data. High-speed Internet can literally mean the difference between letting a combine sit idle for many days of a harvest season while it downloads data and servicing enough fields to cover the debt payments on a million-dollar piece of equipment.
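
To put rough numbers on this (and these are purely illustrative assumptions, not measurements from any actual operation), a simple calculation shows how the same pre-season download can eat days on a slow rural connection and barely an hour on a fast one:

```python
# Purely illustrative assumptions: a 50 GB pre-season data set (GIS, mapping,
# weather, crop data) downloaded over links of various speeds.
DATA_GB = 50

def download_time_hours(data_gb: float, link_mbps: float) -> float:
    bits = data_gb * 8e9                  # decimal gigabytes -> bits
    return bits / (link_mbps * 1e6) / 3600

for mbps in (0.5, 1.5, 10, 25, 100):
    hours = download_time_hours(DATA_GB, mbps)
    print(f"{mbps:>6.1f} Mbps: {hours:7.1f} hours (~{hours / 24:.1f} days)")
```

Under these assumptions, a half-megabit link turns the download into more than a week of idle time, while 25 Mbps makes it an afternoon chore.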

Going to the other extreme, rural health care relies upon Internet connectivity — but not in the ways it is usually discussed. The stories one hears on the ground aren’t about the need for particularly high-speed connections or specialized low-latency connections to allow remote doctors to control surgical robots. While tele-surgery and access to highly specialized doctors are important applications of telemedicine, the urgent needs today are far more modest: simple video consultations with primary care physicians for routine care, requiring only a moderate-speed Internet connection capable of basic video conferencing. In reality, literally a few megabits per second (not even 10 Mbps) can mean the difference between a remote primary care physician being able to provide basic health services to a rural community and that community going entirely unserved by a doctor.

Efforts to run gigabit connections and dedicated fiber to rural health care facilities may be a great long-term vision — but the on-the-ground need could be served by a reliable 4G wireless connection or DSL line. (Again, to their credit, this is a point that Chairman Pai and Commissioner Carr have been highlighting in their recent travels through rural parts of the country.)

Of course, rural America faces many of the same digital divides faced elsewhere. Even in the wealthiest cities in Nebraska, for instance, significant numbers of students are eligible for free or reduced price school lunches — a metric that corresponds with income — and rely on anchor institutions for Internet access. The problem is worse in much of rural Nebraska, where there may simply be no Internet access at all.

Addressing rural broadband needs

Two things in particular have struck me as I have spoken to rural stakeholders about the digital divide. The first is that this is an “all hands on deck” problem. Everyone I speak to understands the importance of the issue. Everyone is willing to work with and learn from others. Everyone is willing to commit resources and capital to improve upon the status quo, including by undertaking experiments and incurring risks.

The discussions I have in DC, however, including with and among key participants in the DC policy firmament, are fundamentally different. These discussions focus on tweaking contribution factors and cost models to protect or secure revenues; they are, in short, missing the forest for the trees. Meanwhile, the discussion on the ground focuses on how to actually deploy service and overcome obstacles. No amount of cost-model tweaking will do much at all to accomplish either of these.

The second striking, and rather counterintuitive, thing that I have often heard is that closing the rural digital divide isn’t (just) about money. I’ve heard several times the lament that we need to stop throwing more money at the problem and start thinking about where the money we already have needs to go. Another version of this is that it isn’t about the money, it’s about the business case. Money can influence a decision whether to execute upon a project for which there is a business case — but it rarely creates a business case where there isn’t one. And where it has created a business case, that case was often for building out relatively unimportant networks while increasing the opportunity costs of building out more important networks. The networks we need to build are different from those envisioned by the 1996 Telecom Act or FCC efforts to contort that Act to fund Internet build-out.

Rural Broadband Investment

There is, in fact, a third particularly striking thing I have gleaned from speaking with rural stakeholders, and rural providers in particular: They don’t really care about net neutrality, and don’t see it as helpful to closing the digital divide.  

Rural providers, it must be noted, are generally “pro net neutrality,” in the sense that they don’t think that ISPs should interfere with traffic going over their networks; in the sense that they don’t have any plans themselves to engage in “non-neutral” conduct; and also in the sense that they don’t see a business case for such conduct.

But they are also wary of Title II regulation, or of other rules that are potentially burdensome or that introduce uncertainty into their business. They are particularly concerned that Title II regulation opens the door to — and thus creates significant uncertainty about the possibility of — other forms of significant federal regulation of their businesses.

More than anything else, they want to stop thinking, talking, and worrying about net neutrality regulations. Ultimately, the past decade of fights about net neutrality has meant little other than regulatory cost and uncertainty for them, which makes planning and investment difficult — hardly a boon to closing the digital divide.

The basic theory of the Wheeler-era FCC’s net neutrality regulations was the virtuous cycle — that net neutrality rules gave edge providers the certainty they needed in order to invest in developing new applications that, in turn, would drive demand for, and thus buildout of, new networks. But carriers need certainty, too, if they are going to invest capital in building these networks. Rural ISPs are looking for the business case to justify new builds. Increasing uncertainty has only negative effects on the business case for closing the rural digital divide.

Most crucially, the logic of the virtuous cycle is virtually irrelevant to driving demand for closing the digital divide. Edge innovation isn’t going to create so much more value that users will suddenly demand that networks be built; rather, the applications justifying this demand already exist, and most have existed for many years. What stands in the way of the build-out required to service under- or un-served rural areas is the business case for building these (expensive) networks. And the uncertainty and cost associated with net neutrality only exacerbate this problem.

Indeed, rural markets are an area where the virtuous cycle very likely turns in the other direction. Rural communities are actually hotbeds of innovation. And they know their needs far better than Silicon Valley edge companies, so they are likely to build apps and services that better cater to the unique needs of rural America. But these apps and services aren’t going to be built unless their developers have access to the broadband connections needed to build and maintain them, and, most important of all, unless users have access to the broadband connections needed to actually make use of them. The upshot is that, in rural markets, connectivity precedes and drives the supply of edge services, not (as the Wheeler-era virtuous cycle would have it) the other way around.

The effect of Washington’s obsession with net neutrality these past many years has been to increase uncertainty and reduce the business case for building new networks. And its detrimental effects continue today with politicized and showboating efforts to invoke the Congressional Review Act in order to make a political display of the 2017 Restoring Internet Freedom Order. Back in the real world, however, none of this helps to provide rural communities with the type of broadband services they actually need, and the effect is only to worsen the rural digital divide, both politically and technologically.

The Road Ahead …?

The story told above is not a happy one. Closing digital divides, and especially closing the rural digital divide, is one of the most important legal, social, and policy challenges this country faces. Yet the discussion about these issues in DC reflects little of the on-the-ground reality. Rather, advocates in DC attack a strawman of the rural digital divide, using it as a foil to protect and advocate for their pet agendas. If anything, the discussion in DC distracts attention and diverts resources from productive ideas.

To end on a more positive note, some are beginning to recognize the importance and direness of the situation. I have noted several times the work of Chairman Pai and Commissioner Carr. Indeed, the first time I met Chairman Pai was when I had the opportunity to accompany him, back when he was Commissioner Pai, on a visit through Diller, Nebraska (pop. 287). More recently, there has been bipartisan recognition of the need for new thinking about the rural digital divide. In February, for instance, a group of Democratic senators asked President Trump to prioritize rural broadband in his infrastructure plans. And the following month Congress enacted, and the President signed, legislation that among other things funded a $600 million pilot program to award grants and loans for rural broadband built out through the Department of Agriculture’s Rural Utilities Service. But both of these efforts rely too heavily on throwing money at the rural divide (speaking of the recent legislation, the head of one Nebraska-based carrier building out service in rural areas lamented that it’s just another effort to give carriers cheap money, which doesn’t do much to help close the divide!). It is, nonetheless, good to see urgent calls for, and an interest in, experimenting with new ways to deliver assistance in closing the rural digital divide. We need more of this sort of bipartisan thinking and willingness to experiment with new modes of meeting this challenge — and less advocacy for stale, entrenched viewpoints that have little relevance to the on-the-ground reality of rural America.