
This has been a big year for business in the courts. A U.S. district court approved the AT&T-Time Warner merger, the Supreme Court upheld Amex’s agreements with merchants, and a circuit court pushed back on the Federal Trade Commission’s vague and heavy-handed policing of companies’ consumer data safeguards.

These three decisions mark a new era in the intersection of law and economics.

AT&T-Time Warner

AT&T-Time Warner is a vertical merger, a combination of firms with a buyer-seller relationship. Time Warner creates and broadcasts content via outlets such as HBO, CNN, and TNT. AT&T distributes content via services such as DirecTV.

Economists see little risk to competition from vertical mergers, although there are some idiosyncratic circumstances in which competition could be harmed. Nevertheless, the U.S. Department of Justice went to court to block the merger.

The last time the government sued to block a vertical merger was more than 40 years ago, and the government lost. Since then, the government has relied on the threat of litigation to extract settlements from merging parties. For example, in the 1996 merger between Time Warner and Turner, the FTC required limits on how the new company could bundle HBO with less desirable channels and eliminated agreements that allowed TCI (a cable company that partially owned Turner) to carry Turner channels at preferential rates.

With AT&T-Time Warner, the government took a big risk, and lost. It was a big risk because (1) it’s a vertical merger, and (2) the case against the merger was weak. The government’s expert argued consumers would face an extra 45 cents a month on their cable bills if the merger went through, but under cross-examination conceded it might be as little as 13 cents a month. That’s a big difference, and it raised serious questions about the reliability of the expert’s model.

Judge Richard J. Leon’s 170+ page ruling agreed that the government’s case was weak and its expert was not credible. While it’s easy to cheer a victory of big business over big government, the real victory was the judge’s heavy reliance on facts, data, and analysis rather than speculation over the potential for consumer harm. That’s a big deal and may pave the way for more vertical mergers.

Ohio v. American Express

The Supreme Court’s ruling in Amex may seem obscure. The court backed American Express Co.’s policy of preventing retailers from offering customers incentives to pay with cheaper cards.

Amex charges higher fees to merchants than do other cards, such as Visa, MasterCard, and Discover. Amex cardholders also have higher incomes and tend to spend more at stores than cardholders on other networks. And Amex offers its cardholders better benefits, services, and rewards than the other cards. Merchants don’t like Amex because of the higher fees; customers prefer Amex because of the card’s perks.

Amex, and other card companies, operate in what is known as a two-sided market. Put simply, they have two sets of customers: merchants who pay swipe fees, and consumers who pay fees and interest.

Part of Amex’s agreement with merchants is an “anti-steering” provision that bars merchants from offering discounts for using non-Amex cards. The U.S. Justice Department and a group of states sued the company, alleging the Amex rules limited merchants’ ability to reduce their costs from accepting credit cards, which meant higher retail prices. Amex argued that the higher prices charged to merchants were kicked back to its cardholders in the form of more and better perks.

The Supreme Court found that the Justice Department and states focused exclusively on one side (merchant fees) of the two-sided market. The Court said the government can’t meet its burden by showing some effect on some part of the market. Instead, it must demonstrate an “increased cost of credit card transactions … reduced number of credit card transactions, or otherwise stifled competition.” The government could not prove any of those things.

We live in a world of two-sided markets. Amazon may be the biggest two-sided market in the history of the world, linking buyers and sellers. Smartphones such as iPhones and Android devices are two-sided markets, linking consumers with app developers. The Supreme Court’s ruling in Amex sets a standard for how antitrust law should treat the economics of two-sided markets.

LabMD

LabMD is another matter that seems obscure, but could have big impacts on the administrative state.

Since the early 2000s, the FTC has brought charges against more than 150 companies alleging they had bad security or privacy practices. LabMD was one of them: its computer system was compromised by professional hackers in 2008. The FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.

Challenging the FTC can get very expensive, and the agency used the threat of litigation to secure settlements from dozens of companies. It then used those settlements to convince everyone else that they constituted binding law and enforceable security standards.

Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.

The 11th Circuit Court of Appeals ruled the FTC’s approach to developing security standards violates basic principles of due process. The court said the FTC’s basic approach—in which the FTC tries to improve general security practices by suing companies that experience security breaches—violates the basic legal principle that the government can’t punish someone for conduct that the government hasn’t previously explained is problematic.

My colleague at ICLE observes that the lesson to learn from LabMD isn’t about the illegitimacy of the FTC’s approach to internet privacy and security. Instead, it’s that the legality of the administrative state is premised on courts placing a check on abusive regulators.

The lessons learned from these three recent cases reflect a profound shift in thinking about the laws governing economic activity:

  • AT&T-Time Warner indicates that facts matter. Mere speculation of potential harms will not satisfy the court.
  • Amex highlights the growing role two-sided markets play in our economy and provides a framework for evaluating competition in these markets.
  • LabMD is a small step in reining in the administrative state. Regulations must be scrutinized before they are imposed and enforced.

In some ways, none of these decisions is revolutionary. Instead, they reflect an evolution toward greater transparency in how the law is to be applied and greater scrutiny over how regulations are imposed.

 

Big is bad, part 1: Kafka, Coase, and Brandeis walk into a bar … There’s a quip in a well-known textbook that Nobel laureate Ronald Coase said he’d grown weary of antitrust because when prices went up, the judges said it was monopoly; when the prices went down, they said it was predatory pricing; and when they stayed the same, they said it was tacit collusion. ICLE’s Geoffrey Manne and Gus Hurwitz worry that with the rise of the neo-Brandeisians, not much has changed since Coase’s time:

[C]ompetition, on its face, is virtually indistinguishable from anticompetitive behavior. Every firm strives to undercut its rivals, to put its rivals out of business, to increase its rivals’ costs, or to steal its rivals’ customers. The consumer welfare standard provides courts with a concrete mechanism for distinguishing between good and bad conduct, based not on the effect on rival firms but on the effect on consumers. Absent such a standard, any firm could potentially be deemed to violate the antitrust laws for any act it undertakes that could impede its competitors.

Big is bad, part 2. A working paper by researchers from Denmark and the University of California at Berkeley suggests that companies such as Google, Apple, Facebook, and Nike are taking advantage of so-called “tax havens” to make billions of dollars of income go “missing.” There’s a lot of mumbo jumbo in this one, but it’s getting lots of attention.

We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places—in effect stealing revenue from each other.

Big is bad, part 3: Can any country survive with debt-to-GDP of more than 100 percent? Apparently, the answer is “yes.” The U.K. went 80 years, from 1779 to 1858. Then, it went 47 years from 1916 to 1962. Tim Harford has a fascinating story about an effort to clear the country’s debt in that second run.

In 1928, an anonymous donor resolved to clear the UK’s national debt and gave £500,000 with that end in mind. It was a tidy sum — almost £30m at today’s prices — but not nearly enough to pay off the debt. So it sat in trust, accumulating interest, for nearly a century.

How do you make a small fortune? Begin with a big one. A lesson from Johnny Depp.

Will we ever stop debating the Trolley Problem? Apparently the answer is “no.” Also, TIL there’s a field of research that relies on “notions.”

For so long, moral psychology has relied on the notion that you can extrapolate from people’s decisions in hypothetical thought experiments to infer something meaningful about how they would behave morally in the real world. These new findings challenge that core assumption of the field.

 

The week that was on Truth on the Market

LabMD.

[T]argets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

Google Android.

Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

AT&T-Time Warner. First this:

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

Then this:

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content.

 

 

The EC’s Android decision is expected sometime in the next couple of weeks. Current speculation is that the EC may issue a fine exceeding last year’s huge €2.4 billion fine for Google’s alleged antitrust violations related to the display of general search results. Based on the statement of objections (“SO”), I expect the Android decision will be a muddle of legal theory that not only fails to connect to facts and marketplace realities, but also will perversely incentivize platform operators to move toward less open ecosystems.

As has been amply demonstrated (see, e.g., here and here), the Commission has made fundamental errors with its market definition analysis in this case. Chief among its failures is the EC’s incredible decision to treat the relevant market as licensable mobile operating systems, which notably excludes the largest smartphone player by revenue, Apple.

This move, though perhaps expedient for the EC, leads the Commission to view with disapproval an otherwise competitively justifiable set of licensing requirements that Google imposes on its partners. This includes anti-fragmentation and app-bundling provisions (“Provisions”) in the agreements that partners sign in order to be able to distribute Google Mobile Services (“GMS”) with their devices. Among other things, the Provisions guarantee that a basic set of Google’s apps and services will be non-exclusively featured on partners’ devices.

The Provisions — when viewed in a market in which Apple is a competitor — are clearly procompetitive. The critical mass of GMS-flavored versions of Android (as opposed to vanilla Android Open Source Project (“AOSP”) devices) supplies enough predictability to an otherwise unruly universe of disparate Android devices such that software developers will devote the sometimes considerable resources necessary for launching successful apps on Android.

Open source software like AOSP is great, but anyone with more than a passing familiarity with Linux recognizes that the open source movement often fails to produce consumer-friendly software. In order to provide a critical mass of users that attract developers to Android, Google provides a significant service to the Android market as a whole by using the Provisions to facilitate a predictable user (and developer) experience.

Generativity on platforms is a complex phenomenon

To some extent, the EC’s complaint is rooted in a bias toward having Android act as a more “generative” platform, one on which third-party developers are relatively better able to reach users of Android devices. But this effort by the EC to undermine the Provisions will be ultimately self-defeating, as it will likely push mobile platform providers to converge on similar, relatively more closed business models that provide less overall consumer choice.

Even assuming that the Provisions somehow prevent third-party app installs or otherwise develop a kind of path-dependency among users such that they never seek out new apps (which the data clearly shows is not happening), focusing on third-party developers as the sole or primary source of innovation on Android is a mistake.

The control that platform operators like Apple and Google exert over their respective ecosystems does not per se create more or less generativity on the platforms. As Gus Hurwitz has noted, “literature and experience amply demonstrate that ‘open’ platforms, or general-purpose technologies generally, can promote growth and increase social welfare, but they also demonstrate that open platforms can also limit growth and decrease welfare.” Conversely, tighter vertical integration (the Apple model) can also produce more innovation than open platforms.

What is important is the balance between control and freedom, and the degree to which third-party developers are able to innovate within the context of a platform’s constraints. The constraints themselves — whether Apple’s more tightly controlled terms or Google’s more generous Provisions — facilitate generativity.

In short, it is overly simplistic to view generativity as something that happens at the edges without respect to structural constraints at the core. The interplay between platform and developer is complex and complementary, and needs to be viewed as a dynamic process.

Whither platform diversity?

I love Apple’s devices and I am quite happy living within its walled garden. But I certainly do not believe that Apple’s approach is the only one that makes sense. Yet, in its SO, the EC blesses Apple’s approach as the proper way to manage a mobile ecosystem. It explicitly excluded Apple from a competitive analysis, and attacked Google on the basis that it imposed restrictions in the context of licensing its software. Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

With this SO, the EC is basically asserting that Google is anticompetitively bundling without being able to plausibly assert foreclosure (because, again, third-party app installs are easy to do and are easily shown to number in the billions). I’m sure Google doesn’t want to move in the direction of having a more closed system, but the lesson of this case will loom large for tomorrow’s innovators.

In the face of eager antitrust enforcers like those in the EU, the easiest path for future innovators will be to keep everything tightly controlled so as to prevent both fragmentation and misguided regulatory intervention.

The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke against the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from it being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.”  This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseum), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’ authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is “likely to cause” harm.

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something being driven by public policy, and not so much like merely enforcing existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does not well describe how to make that distinction. The opinion is more than a bit muddled and difficult to clearly interpret. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, it is the guidance that we have been given by the court.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, they can hardly provide guidance for private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the data-security law question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court sidestepping this question, and asking whether the FTC sufficiently defined what it was doing in the first place.  

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected, and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

AT&T’s merger with Time Warner has led to one of the most important, but least interesting, antitrust trials in recent history.

The merger itself is somewhat unimportant to consumers. It’s about as close to a “pure” vertical merger as we can get in today’s world and would not lead to a measurable increase in prices paid by consumers. At the same time, Richard J. Leon’s decision to approve the merger may have sent a signal regarding how the anticipated Fox-Disney (or Comcast), CVS-Aetna, and Cigna-Express Scripts mergers might proceed.

Judge Leon of the United States District Court in Washington said the U.S. Department of Justice had not proved that AT&T’s acquisition of Time Warner would lead to fewer choices for consumers and higher prices for television and internet services.

As shown in the figure below, there is virtually no overlap in services provided by Time Warner (content creation and broadcasting) and AT&T (content distribution). We say “virtually” because, through its ownership of DirecTV, AT&T has an ownership stake in several channels such as the Game Show Network, the MLB Network, and Root Sports. So, not a “pure” vertical merger, but pretty close. Besides, no one seems to really care about GSN, MLB, or Root.

[Statista infographic: What’s at Stake in the Proposed AT&T – Time Warner Merger]

The merger trial was one of the least interesting because the government’s case opposing the merger was so weak.

The Justice Department’s economic expert, University of California, Berkeley, professor Carl Shapiro, argued the merger would harm consumers and competition in three ways:

  1. AT&T would raise the price of content to other cable companies, driving up their costs, which would be passed on to consumers.
  2. Across more than 1,000 subscription television markets, AT&T could benefit by drawing customers away from rival content distributors in the event of a “blackout,” in which the distributor chooses not to carry Time Warner content over a pricing dispute. In addition, AT&T could also use its control over Time Warner content to retain customers by discouraging consumers from switching to providers that don’t carry the Time Warner content. Those two factors, according to Shapiro, could cause rival cable companies to lose between 9 and 14 percent of their subscribers over the long term.
  3. AT&T and competitor Comcast could coordinate to restrict access to popular Time Warner and NBC content in ways that could stifle competition from online cable alternatives such as Dish Network’s Sling TV or Sony’s PlayStation Vue. Even tacit coordination of this type would impair consumer choices, Shapiro opined.

Price increases and blackouts

Shapiro initially indicated the merger would cause consumers to pay an additional $436 million a year, which amounts to an average of 45 cents a month per customer, or a 0.4 percent increase. At trial, he testified the amount might be closer to 27 cents a month and conceded it could be as low as 13 cents a month.
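To make the scale of those numbers concrete, here is a minimal back-of-the-envelope sketch in Python. The subscriber count is an assumed round number chosen only so the arithmetic lands near the reported 45 cents; it is not a figure from the trial record or from Shapiro’s model.

```python
# Back-of-the-envelope only: convert an economy-wide annual harm estimate
# into a per-subscriber monthly figure. The subscriber count below is an
# assumption for illustration, not a number from the trial record.

annual_harm_usd = 436_000_000        # Shapiro's initial estimate, per year
assumed_subscribers = 80_000_000     # hypothetical count of U.S. pay-TV subscribers

monthly_per_subscriber = annual_harm_usd / assumed_subscribers / 12
print(f"{monthly_per_subscriber:.2f} dollars per subscriber per month")  # ~0.45
```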

The government’s “blackout” arguments seemed to get lost in the shifting sands of shifting survey results. Blackouts mattered, according to Shapiro, because “Even though they don’t happen very much, that’s the key to leverage.” His testimony on the potential for price hikes relied heavily on a study commissioned by Charter Communications Inc., which opposes the merger. Stefan Bewley, a director at consulting firm Altman Vilandrie & Co., which produced the study, testified the report predicted Charter would lose 9 percent of its subscribers if it lost access to Turner programming.

Under cross-examination by AT&T’s lawyer, Bewley acknowledged what was described as a “final” version of the study presented to Charter in April last year put the subscriber loss estimate at 5 percent. When confronted with his own emails about the change to 9 percent, Bewley said he agreed to the update after meeting with Charter. At the time of the change from 5 percent to 9 percent, Charter was discussing its opposition to the merger with the Justice Department.

Bewley noted that the change occurred because he saw that some of the figures his team had gathered about Turner networks were outliers, with a range of subscriber losses of 5 percent on the low end and 14 percent on the high end. He indicated his team came up with a “weighted average” of 9 percent.
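For illustration, here is a tiny sketch of how a weighted average of the 5 percent and 14 percent endpoints can land at roughly 9 percent. The weights are hypothetical; nothing in the testimony quoted here describes the study’s actual weighting scheme.

```python
# Illustrative only: one hypothetical weighting of the 5% and 14% endpoint
# figures that yields a weighted average of roughly 9%. The actual weights
# used in the study are not given in the text above.

estimates = {0.05: 0.55, 0.14: 0.45}   # subscriber-loss estimate -> assumed weight

weighted_avg = sum(loss * weight for loss, weight in estimates.items())
print(f"weighted average subscriber loss: {weighted_avg:.1%}")  # ~9.1%
```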

This 5/9/14 percent distinction seems to be critical to the government’s claim the merger would raise consumer prices. Referring to Shapiro’s analysis, AT&T-Time Warner’s lead counsel, Daniel Petrocelli, asked Bewley: “Are you aware that if he’d used 5 percent there would have been a price increase of zero?” Bewley said he was not aware.

At trial, AT&T and Turner executives testified that they couldn’t credibly threaten to withhold Turner programming from rivals because the networks’ profitability depends on wide distribution. In addition, one of AT&T’s expert witnesses, University of California, Berkeley business and economics professor Michael Katz, testified about what he said were the benefits of AT&T’s offer to use “baseball style” arbitration with rival pay TV distributors if the two sides couldn’t agree on what fees to pay for Time Warner’s Turner networks. With baseball style arbitration, both sides submit their final offer to an arbitrator, who determines which of the two offers is most appropriate.

Under the terms of the arbitration offer, AT&T has agreed not to black out its networks for the duration of negotiations with distributors. Dennis Carlton, an economics professor at the University of Chicago, said Shapiro’s model was unreliable because he didn’t account for that. Shapiro conceded he did not factor that into his study, saying that he would need to use an entirely different model to study how the arbitration agreement would affect the merger.

Coordination with Comcast/NBCUniversal

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

At trial, the Justice Department seemed to abandon any claim that the merged company would unilaterally restrict access to online “virtual MVPDs.” The government’s case, made by its expert Shapiro, ended up being that there would be a “risk” and “danger” that AT&T and Comcast would “coordinate” to withhold programming in a way that would harm emerging online multichannel distributors. However, under cross examination, he conceded that his opinions were not based on a “quantifiable model.” Shapiro testified that he had no opinion whether the odds of such coordination would be greater than 1 percent.

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content. Emerging online multichannel distributors pitch their offerings as “skinny bundles” with a limited selection of the more popular channels. By forcing these providers to take more channels, the government argued, the skinny bundle business model is undermined in a version of raising rivals’ costs. This theory did not get much play at trial, but seems to suggest the government was trying to have its cake and eat it, too.

Except in this case, as with much of the government’s case in this matter, the cake was not completely baked.

 

A few weeks ago I posted a preliminary assessment of the relative antitrust risk of a Comcast vs Disney purchase of 21st Century Fox assets. (Also available in pdf as an ICLE Issue brief, here). On the eve of Judge Leon’s decision in the AT&T/Time Warner merger case, it seems worthwhile to supplement that assessment by calling attention to Assistant Attorney General Makan Delrahim’s remarks at The Deal’s Corporate Governance Conference last week. Somehow these remarks seem to have passed with little notice, but, given their timing, they deserve quite a bit more attention.

In brief, Delrahim spent virtually the entirety of his short remarks making and remaking the fundamental point at the center of my own assessment of the antitrust risk of a possible Comcast/Fox deal: The DOJ’s challenge of the AT&T/Time Warner merger tells you nothing about the likelihood that the agency would challenge a Comcast/Fox merger.

To begin, in my earlier assessment I pointed out that most vertical mergers are approved by antitrust enforcers, and I quoted Bruce Hoffman, Director of the FTC’s Bureau of Competition, who noted that:

[V]ertical merger enforcement is still a small part of our merger workload….

* * *

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

I may not have made it very clear in that post, but, of course, most horizontal mergers are approved by enforcers, as well.

Well, now we have the head of the DOJ Antitrust Division making the same point:

I’d say 95 or 96 percent of mergers — horizontal or vertical — are cleared — routinely…. Most mergers — horizontal or vertical — are procompetitive, or have no adverse effect.

Delrahim reinforced the point in an interview with The Street in advance of his remarks. Asked by a reporter, “what are your concerns with vertical mergers?,” Delrahim quickly corrected the questioner: “Well, I don’t have any concerns with most vertical mergers….”

But Delrahim went even further, noting that nothing about the Division’s approach to vertical mergers has changed since the AT&T/Time Warner case was brought — despite the efforts of some reporters to push a different narrative:

I understand that some journalists and observers have recently expressed concern that the Antitrust Division no longer believes that vertical mergers can be efficient and beneficial to competition and consumers. Some point to our recent decision to challenge some aspects of the AT&T/Time Warner merger as a supposed bellwether for a new vertical approach. Rest assured: These concerns are misplaced…. We have long recognized that vertical integration can and does generate efficiencies that benefit consumers. Indeed, most vertical mergers are procompetitive or competitively neutral. The same is of course true in horizontal transactions. To the extent that any recent action points to a closer review of vertical mergers, it’s not new…. [But,] to reiterate, our approach to vertical mergers has not changed, and our recent enforcement efforts are consistent with the Division’s long-standing, bipartisan approach to analyzing such mergers. We’ll continue to recognize that vertical mergers, in general, can yield significant economic efficiencies and benefit to competition.

Delrahim concluded his remarks by criticizing those who assume that the agency’s future enforcement decisions can be inferred from past cases with different facts, stressing that the agency employs an evidence-based, case-by-case approach to merger review:

Lumping all vertical transactions under the same umbrella, by comparison, obscures the reality that we conduct a vigorous investigation, aided by over 50 PhD economists in these markets, to make sure that we as lawyers don’t steer too far without the benefits of their views in each of these instances.

Arguably this was a rebuke directed at those, like Disney and Fox’s board, who are quick to ascribe increased regulatory risk to a Comcast/Fox tie-up because the DOJ challenged the AT&T/Time Warner merger. Recall that, in its proxy statement, the Fox board explained that it rejected Comcast’s earlier bid in favor of Disney’s in part because of “the regulatory risks presented by the DOJ’s unanticipated opposition to the proposed vertical integration of the AT&T / Time Warner transaction.”

I’ll likely have more to add once the AT&T/Time Warner decision is out. But in the meantime (and with apologies to Mark Twain), the takeaway is clear: Reports of the death of vertical mergers have been greatly exaggerated.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—is being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be. That is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative solution to piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over web sites that are “blatantly, overwhelmingly, or structurally engaged in piracy.”  The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block sites in question. The CRTC would retain ultimate authority regarding whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users that upload infringing content. In just one relatively large case, it was estimated that the cost of complying with these requests was 8.25M CAD.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

This, on the other hand, demonstrates one of the strongest points in favor of the FairPlay proposal. The focus of enforcement shifts away from casually infringing users and directly onto the operators of sites that engage in widespread infringement. Therefore, one of the criticisms of Canada’s current “notice and notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is completely bypassed.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP and property generally represent a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

So too is the justification — and tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harms may occur, because some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect, and (2) when harms do occur, is there a sufficiently open and accessible process available whereby affected parties (and interested third parties) can rightly criticize and improve the system?

On both counts, the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will be establishing a reasonable and effective remedy to address violations of individuals’ property, the ownership of which is considered broadly legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program that some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, according to even the critics of the FairPlay proposal, site blocking provides a measurably positive reduction on piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of the PirateBay domains had reduced piracy by nearly 25%:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.
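As a quick arithmetic check on the “nearly 25%” characterization, summing the “stopped entirely” and “reduced” shares quoted above gives the combined range of downloaders affected; the sketch below simply restates the quoted figures.

```python
# Quick arithmetic on the Poort-study figures quoted above: the share of
# illegal downloaders who stopped entirely plus the share who cut back.
# This only restates the numbers in the quote; it adds no new data.

stopped = (0.080, 0.093)    # stopped illegal downloading entirely
reduced = (0.145, 0.153)    # reduced their illegal downloading

low = stopped[0] + reduced[0]
high = stopped[1] + reduced[1]
print(f"combined share affected: {low:.1%} to {high:.1%}")  # 22.5% to 24.6%
```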

The proposal stands to reduce the costs of combating piracy, as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted the costs for even run-of-the-mill suits essentially price enforcement of copyrights out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the lumen database. Even defenders of the status quo  are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that is easily deployable against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this  — but that surely can’t be an indictment against lawmaking as a whole. If allowing a form of prohibition for category A is appropriate, but the same kind of prohibition is inappropriate for category B, then either we assume lawmakers are capable of differentiating between category A and category B, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to convincingly  make that case (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

Weekend Reads

Eric Fruits —  8 June 2018

Innovation dies in darkness. Well, actually, it thrives in the light, according to this new research:

We find that after a patent library opens, local patenting increases by 17% relative to control regions that have Federal Depository Libraries. … [T]he library boost ceases to be present after the introduction of the Internet. We find that library opening is also associated with an increase in local business formation and job creation [especially for small business -ed.], which suggests that the impact of libraries is not limited to patenting outcomes.


Don’t drink the Kool-Aid of bad data. Have a SPRITE. From an article by the self-described “data thugs”:

Scientific publications have not traditionally been accompanied by data, either during the peer review process or when published. Concern has arisen that the literature in many fields may contain inaccuracies or errors that cannot be detected without inspecting the original data. Here, we introduce SPRITE (Sample Parameter Reconstruction via Iterative TEchniques), a heuristic method for reconstructing plausible samples from descriptive statistics of granular data, allowing reviewers, editors, readers, and future researchers to gain insights into the possible distributions of item values in the original data set.
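
To make the idea concrete, here is a minimal, hypothetical sketch of the SPRITE approach in Python (not the authors’ code): given only a reported mean, standard deviation, and sample size for integer-scale data such as 1-to-5 survey responses, it hill-climbs toward a plausible sample that reproduces those summary statistics. The function name, scale bounds, tolerance, and iteration cap are all illustrative assumptions.

import random

def sprite_sketch(target_mean, target_sd, n, min_val=1, max_val=5,
                  max_iter=20000, tol=0.02):
    """Search for an integer-valued sample (e.g., 1-5 survey responses)
    whose mean and standard deviation match reported summary statistics.

    A toy illustration of the SPRITE idea, not the authors' code: start
    from a sample with the right mean, then repeatedly nudge one value up
    and another down (which preserves the mean) whenever doing so moves
    the sample SD closer to the reported SD.
    """
    # Initial sample: integers packed as tightly as possible while
    # summing to round(target_mean * n).
    target_sum = round(target_mean * n)
    base, extra = divmod(target_sum, n)
    sample = [base + 1] * extra + [base] * (n - extra)
    if min(sample) < min_val or max(sample) > max_val:
        raise ValueError("reported mean is impossible on this scale")

    def sd(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    for _ in range(max_iter):
        gap = abs(sd(sample) - target_sd)
        if gap < tol:
            return sample  # a plausible sample has been reconstructed
        i, j = random.sample(range(n), 2)
        if sample[i] + 1 > max_val or sample[j] - 1 < min_val:
            continue
        sample[i] += 1
        sample[j] -= 1
        if abs(sd(sample) - target_sd) >= gap:
            # The nudge moved the SD away from the target; undo it.
            sample[i] -= 1
            sample[j] += 1
    return None  # nothing found: the reported stats may be implausible

# Example: could a reported mean of 3.2 and SD of 1.1 have come from
# 25 responses on a 1-to-5 scale?
result = sprite_sketch(3.2, 1.1, 25)
print(sorted(result) if result else "no plausible sample found")

On this hypothetical input the search typically finds a spread of 1s through 5s consistent with the reported figures; if no such sample exists, that failure is itself a signal that the reported statistics deserve scrutiny.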

Gig economy, it’s a good thing: 6.9% of all workers are independent contractors; 79% of them prefer their arrangement over a traditional job.

Gig economy, it’s a bad thing. Maybe.

[C]ensus divisions with relatively weak wage inflation also tend to have more “low-wage” informal FTE—that is, more hours of informal work performed at a wage that is less than formal pay.

Broetry. It’s a LinkedIn thing. I don’t get it.


A recent exchange between Chris Walker and Philip Hamburger about Walker’s ongoing empirical work on the Chevron doctrine (the idea that judges must defer to reasonable agency interpretations of ambiguous statutes) gives me a long-sought opportunity to discuss what I view as the greatest practical problem with the Chevron doctrine: it increases both the politicization and the polarization of law and policy. In the interest of being provocative, I will frame the discussion below by saying that both Walker and Hamburger are wrong (though actually I believe both are quite correct in their respective critiques). In particular, I argue that Walker is wrong that Chevron decreases politicization (it actually increases it, his empirics notwithstanding), and I argue that Hamburger is wrong that judicial independence is, on its own, a virtue that demands preservation. Rather, I argue, Chevron increases overall politicization across the government; and judicial independence can and should play an important role in checking legislative abdication of Congress’s role as a politically-accountable legislature in a way that would moderate that overall politicization.

Walker, along with co-authors Kent Barnett and Christina Boyd, has done some of the most important and interesting work on Chevron in recent years, empirically studying how the Chevron doctrine has affected judicial behavior (see here and here) as well as that of agencies (and, I would argue, through them the Executive) (see here). But the more important question, in my mind, is how it affects the behavior of Congress. (Walker has explored this somewhat in his own work, albeit focusing less on Chevron than on how the role agencies play in the legislative process implicitly transfers Congress’s legislative functions to the Executive).

My intuition is that Chevron dramatically exacerbates Congress’s worst tendencies, encouraging Congress to push its legislative functions to the executive and to do so in a way that increases the politicization and polarization of American law and policy. I fear that Chevron effectively allows, and indeed encourages, Congress to abdicate its role as the most politically-accountable branch by deferring politically difficult questions to agencies in ambiguous terms.

One of, and possibly the, best ways to remedy this situation is to reestablish the role of judge as independent decisionmaker, as Hamburger argues. But the virtue of judicial independence is not endogenous to the judiciary. Rather, judicial independence has an instrumental virtue, at least in the context of Chevron. Where Congress has problematically abdicated its role as a politically-accountable decisionmaker by deferring important political decisions to the executive, judicial refusal to defer to executive and agency interpretations of ambiguous statutes can force Congress to remedy problematic ambiguities. This, in turn, can return the responsibility for making politically-important decisions to the most politically-accountable branch, as envisioned by the Constitution’s framers.

A refresher on the Chevron debate

Chevron is one of the defining doctrines of administrative law, both as a central concept and focal debate. It stands generally for the proposition that when Congress gives agencies ambiguous statutory instructions, it falls to the agencies, not the courts, to resolve those ambiguities. Thus, if a statute is ambiguous (the question at “step one” of the standard Chevron analysis) and the agency offers a reasonable interpretation of that ambiguity (“step two”), courts are to defer to the agency’s interpretation of the statute instead of supplying their own.

This judicially-crafted doctrine of deference is typically justified on several grounds. For instance, agencies generally have greater subject-matter expertise than courts, so they are more likely to offer substantively better constructions of ambiguous statutes. They have more resources that they can dedicate to evaluating alternative constructions. They generally have a longer history of implementing relevant Congressional instructions, so they are more likely attuned to Congressional intent, both of the statute’s enacting Congress and of the present one. And they are subject to more direct Congressional oversight in their day-to-day operations and exercise of statutory authority than the courts, so they are more likely to be concerned with, and responsive to, Congressional direction.

Chief among the justifications for Chevron deference is, as Walker says, “the need to reserve political (or policy) judgments for the more politically accountable agencies.” This is at core a separation-of-powers justification: the legislative process is fundamentally a political process, so the Constitution assigns responsibility for it to the most politically-accountable branch (the legislature) instead of the least politically-accountable branch (the judiciary). In turn, the act of interpreting statutory ambiguity is an inherently legislative process, the underlying theory being that Congress intended to leave such ambiguity in the statute in order to empower the agency to interpret it in a quasi-legislative manner. Thus, under this view, courts should defer both to this Congressional intent that the agency be empowered to interpret its statute (and, should this prove problematic, it is up to Congress to change the statute or face the political ramifications) and to the agency’s interpretation of that statute, because agencies, like Congress, are more politically accountable than the courts.

Chevron has always been an intensively studied and debated doctrine. This debate has grown more heated in recent years, to the point that there is now regular scholarly discussion about whether Chevron should be repealed or narrowed and what would replace it if it were somehow curtailed; discussion of the ongoing vitality of Chevron has also entered into Supreme Court opinions and the appointments process with increasing frequency. These debates generally focus on a few issues. The first is that Chevron amounts to a transfer of the legislature’s Constitutional powers and responsibilities over creating the law to the executive, where the law ordinarily is only meant to be carried out; the underlying concern is that this has contributed to the growth of executive power relative to the legislature. A second, related, issue is that Chevron contributes to the (over)empowerment of independent agencies, agencies that are already out of favor with many of Chevron’s critics as Constitutionally-infirm entities whose already-suspect power is dramatically increased when Chevron limits the judiciary’s ability to check their use of already-broad Congressionally-delegated authority.

A third concern about Chevron, following on the first two, is that it strips the judiciary of its role as independent arbiter of judicial questions. That is, it has historically been the purview of judges to resolve statutory ambiguities and fill in legislative interstices.

Chevron is also a focal point for more generalized concerns about the power of the modern administrative state. In this context, Chevron stands as a representative of a broader class of cases – State Farm, Auer, Seminole Rock, Fox v. FCC, and the like – that have been criticized as centralizing legislative, executive, and judicial powers in agencies, allowing Congress to abdicate its role as politically-accountable legislator, diminishing the judiciary’s role in interpreting the law, and raising due process concerns for those subject to rules promulgated by federal agencies.

Walker and his co-authors have empirically explored the effects of Chevron in recent years, using robust surveys of federal agencies and judicial decisions to understand how the doctrine has affected the work of agencies and the courts. His most recent work (with Kent Barnett and Christina Boyd) explores how Chevron affects judicial decisionmaking. Framing the question by explaining that “Chevron deference strives to remove politics from judicial decisionmaking,” they ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?” They find that, empirically speaking, “the Chevron Court’s objective to reduce partisan judicial decision-making has been quite effective.” Because the doctrine instructs judges to defer to the political judgments (or simply the statutory interpretations) of agencies, judges are less political in their own decisionmaking.

Hamburger responds to this finding somewhat dismissively – and, indeed, the finding is almost tautological: “of course, judges disagree less when the Supreme Court bars them from exercising their independent judgment about what the law is.” (While a fair critique, I would temper it by arguing that it is nonetheless an important empirical finding – empirics that confirm important theory are as important as empirics that refute it, and are too often dismissed.)

Rather than focus on concerns about politicized decisionmaking by judges, Hamburger focuses instead on the importance of judicial independence – on it being “emphatically the duty of the Judicial Department to say what the law is” (quoting Marbury v. Madison). He reframes Walker’s results, arguing that “deference” to agencies is really “bias” in favor of the executive. “Rather than reveal diminished politicization, Walker’s numbers provide strong evidence of diminished judicial independence and even of institutionalized judicial bias.”

So which is it? Does Chevron reduce bias by de-politicizing judicial decisionmaking? Or does it introduce new bias in favor of the (inherently political) executive? The answer is probably that it does both. The more important point, however, is that neither is the right question to ask.

What’s the correct measure of politicization? (or, You get what you measure)

Walker frames his study of the effects of Chevron on judicial decisionmaking by explaining that “Chevron deference strives to remove politics from judicial decisionmaking. Such deference to the political branches has long been a bedrock principle for at least some judicial conservatives.” Based on this understanding, his project is to ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?”

This framing, that one of Chevron’s goals is to remove politics from judicial decisionmaking, is not wrong. But the goal may be more accurately stated as preventing the judiciary from encroaching upon the political purposes assigned to the executive and legislative branches. This restatement offers an important change in focus: it treats the concern about politicizing judicial decisionmaking as a separation-of-powers issue. This stands in contrast to the consequentialist concern that judges should not make politicized decisions, that is, that judges should avoid political decisions because such decisions lead to substantively worse outcomes.

It is of course true that, as unelected officials with lifetime appointments, judges are the least politically accountable to the polity of any government officials. Judges’ decisions, therefore, can reasonably be expected to be less representative of, or responsive to, the concerns of the voting public than decisions of other government officials. But not all political decisions need to be directly politically accountable in order to be effectively politically accountable. A judicial interpretation of an ambiguous law, for instance, can be interpreted as a request, or even a demand, that Congress be held to political account. And where Congress is failing to perform its constitutionally-defined role as a politically-accountable decisionmaker, it may do less harm to the separation of powers for the judiciary to make political decisions that force politically-accountable responses by Congress than for the judiciary to respect its constitutional role while the Congress ignores its role.

Before going too far down this road, I should pause to label the reframing of the debate that I have implicitly proposed. To my mind, the question isn’t whether Chevron reduces political decisionmaking by judges; the question is how Chevron affects the politicization of, and ultimately accountability to the people for, the law. Critically, there is no “conservation of politicization” principle. Institutional design matters. One could imagine a model of government where Congress exercises very direct oversight over what the law is and how it is implemented, with frequent elections and a Constitutional prohibition on all but the most express and limited forms of delegation. One can also imagine a more complicated form of government in which responsibilities for making law, executing law, and interpreting law are spread across multiple branches (possibly including myriad agencies governed by rules that even many members of those agencies do not understand). And one can reasonably expect greater politicization of decisions in the latter than in the former, because the latter offers more opportunities to say that responsibility for any decision lies with someone else (and therefore more opportunities for politicization) than does the “buck stops here” model of the former.

In the common-law tradition, judges exercised an important degree of independence because their job was, necessarily and largely, to “say what the law is.” For better or worse, we no longer live in a world where judges are expected to routinely exercise that level of discretion, and therefore to have that level of independence. Nor do I believe that “independence” is necessarily or inherently a criterion for the judiciary, at least in principle. I therefore somewhat disagree with Hamburger’s assertion that Chevron necessarily amounts to a problematic diminution in judicial independence.

Again, I return to a consequentialist understanding of the purposes of judicial independence. In my mind, we should consider the need for judicial independence in terms of whether “independent” judicial decisionmaking tends to lead to better or worse social outcomes. And here I do find myself sympathetic to Hamburger’s concerns about judicial independence. The judiciary is intended to serve as a check on the other branches. Hamburger’s concern about judicial independence is, in my mind, driven by an overwhelmingly correct intuition that the structure envisioned by the Constitution is one in which the independence of judges is an important check on the other branches. With respect to the Congress, this means, in part, ensuring that Congress is held to political account when it does legislative tasks poorly or fails to do them at all.

The courts abdicate this role when they allow agencies to save poorly drafted statutes through interpretation of ambiguity.

Judicial independence moderates politicization

Hamburger tells us that “Judges (and academics) need to wrestle with the realities of how Chevron bias and other administrative power is rapidly delegitimizing our government and creating a profound alienation.” Huzzah. Amen. I couldn’t agree more. Preach! Hear-hear!

Allow me to present my personal theory of how Chevron affects our political discourse. In the vernacular, I call this Chevron Step Three. At Step Three, Congress corrects any mistakes made by the executive or independent agencies in implementing the law or made by the courts in interpreting it. The subtle thing about Step Three is that it doesn’t exist – and, knowing this, Congress never bothers with the politically costly and practically difficult process of clarifying legislation.

To the contrary, Chevron encourages the legislature expressly not to legislate. The more expedient approach for a legislator who disagrees with a Chevron-backed agency action is to campaign on the disagreement – that is, to politicize it. If the EPA interprets the Clean Air Act too broadly, we need to retake the White House to get a new administrator in there to straighten out the EPA’s interpretation of the law. If the FCC interprets the Communications Act too narrowly, we need to retake the White House to change the chair so that we can straighten out that mess! And on the other side, we need to keep the White House so that we can protect these right-thinking agency interpretations from reversal by the loons on the other side that want to throw out all of our accomplishments. The campaign slogans write themselves.

So long as most agencies’ governing statutes are broad enough that those agencies can keep the ship of state afloat, even if drifting rudderless, legislators have little incentive to turn inward to engage in the business of government with their legislative peers. Rather, they are freed to turn outward towards their next campaign, vilifying or deifying the administrative decisions of the current government as best suits their electoral prospects.

The sharp-eyed observer will note that I’ve added a piece to the Chevron puzzle: the process described above assumes that a new administration can come in after an election and simply rewrite all of the rules adopted by the previous administration. Not to put too fine a point on the matter, but this is exactly what administrative law allows (see Fox v. FCC and State Farm). The underlying logic, which is really nothing more than an expansion of Chevron, is that statutory ambiguity delegates to agencies a “policy space” within which they are free to operate. So long as agency action stays within that space – which often allows for diametrically opposed substantive interpretations – the courts say that it is up to Congress, not the Judiciary, to provide course corrections. Anything else would amount to politically unaccountable judges substituting their policy judgments (that is, acting independently) for those of politically-accountable legislators and administrators.

In other words, the politicization of law seen in our current political moment is largely a function of deference combined with a lack of stare decisis. A virtue of stare decisis is that it forces Congress to act to directly address politically undesirable opinions. Because agencies are not bound by stare decisis, an alternative, and politically preferable, way for Congress to remedy problematic agency decisions is to politicize the issue: instead of addressing the substantive policy issue through legislation, individual members of Congress can campaign on it. (Regular readers of this blog will be familiar with one contemporary example of this: the recent net neutrality CRA vote, which is widely recognized as having very little chance of ultimate success but is being championed by its proponents as a way to influence the 2018 elections.) This is more directly aligned with the individual member of Congress’s own incentives, because, by keeping and placing more members of her party in Congress, her party will be able to control the leadership of the agency, which in turn controls the shape of that agency’s policy. Put differently, instead of channeling the attention of individual Congressional actors inward to work together on developing law and policy, it channels that attention outward toward campaigning on the ills and evils of the opposing administration and party rather than the virtues of their own.

The virtue of judicial independence, of judges saying what they think the law is – or even what they think the law should be – is that it forces a politically-accountable decision. Congress can either agree or disagree, but Congress must do something. Merely waiting for the next administration to come along will not be sufficient to alter the course set by the judicial interpretation of the law. Where Congress has abdicated its responsibility to make politically-accountable decisions by deferring those decisions to the executive or agencies, the political-accountability justification for Chevron deference fails. In such cases, the better course for the courts may well be to enforce Congress’s role under the separation of powers by refusing deference and returning the question to Congress.


Weekend reads

Eric Fruits —  1 June 2018

Good government dies in the darkness. This article is getting a lot of attention on Wonk Twitter and what’s left of the blogosphere. From the abstract:

We examine the effect of local newspaper closures on public finance for local governments. Following a newspaper closure, we find municipal borrowing costs increase by 5 to 11 basis points in the long run …. [T]hese results are not being driven by deteriorating local economic conditions. The loss of monitoring that results from newspaper closures is associated with increased government inefficiencies, including higher likelihoods of costly advance refundings and negotiated issues, and higher government wages, employees, and tax revenues.

What the hell happened at GE? This guy blames Jeff Immelt’s buy-high/sell-low strategy. I blame Jack Welch.

Academic writing is terrible. Science journalist Anna Clemens wants to change that. (Plus, she quotes one of my grad school professors, Paul Zak.) Here’s what Clemens says about turning your research into a story:

But – just as with any Hollywood success in the box office – your paper will not become a page-turner, if you don’t introduce an element of tension now. Your readers want to know what problem you are solving here. So, tell them what gap in the literature needs to be filled, why method X isn’t good enough to solve Y, or what still isn’t known about mechanism Z. To introduce the tension, words such as “however”, “despite”, “nevertheless”, “but”, “although” are your best friends. But don’t fool your readers with general statements, phrase the problem precisely.

Write for the busy reader. While you’re writing your next book, paper, or op-ed, check out what the readability robots think of your writing.
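
For the curious, the score those readability robots report is usually something like the Flesch Reading Ease formula: 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), with higher scores meaning easier reading. Here is a rough, self-contained Python sketch; the vowel-group syllable counter is a crude stand-in for the dictionaries real tools use.

import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels; real tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Approximate Flesch Reading Ease: higher is easier to read
    (roughly 60-70 corresponds to plain English)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return None
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease(
    "Write for the busy reader. Short words and short sentences help."), 1))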

They tell me I’ll get more hits if I mention Bitcoin and blockchain. Um, OK. Here goes. The Seattle Times reports on the mind-blowing amount of power cryptocurrency miners are trying to buy in the electricity-rich Pacific Northwest:

In one case this winter, miners from China landed their private jet at the local airport, drove a rental car to the visitor center at the Rocky Reach Dam, just north of Wenatchee, and, according to Chelan County PUD officials, politely asked to see the “dam master because we want to buy some electricity.”

You will never find a more wretched hive of scum and villainy. The Wild West of regulating cryptocurrencies:

The government must show that the trader intended to artificially affect the price. The Federal District Court in Manhattan once explained that “entering into a legitimate transaction knowing that it will distort the market is not manipulation — only intent, not knowledge, can transform a legitimate transaction into manipulation.”

Tyler Cowen on what’s wrong with the Internet. Hint: It’s you.

And if you hate Twitter, it is your fault for following the wrong people (try hating yourself instead!).  Follow experts and people of substance, not people who seek to lower the status of others.

If that fails, “mute words” is your friend. Muting a few terms made my Twitter experience significantly more enjoyable and informative.
