
As the initial shock of the COVID quarantine wanes, the Techlash waxes again bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.  

Supporters of the Act believe that the best practices are necessary to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible. 

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.

In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform. 

Insofar as the commission created by the Act works to develop best practices that align the incentives of firms with the removal of CSAM, the Act has a lot to offer. Yet a better solution would be for Congress to establish policy that works with the private processes already in development.

Short of that more ideal solution, however, it is critical that the Act clearly establish the boundaries of the commission's remit and keep it from venturing into technical areas outside of its expertise.

The complicated problem of encryption (and technology)

The Act has a major problem insofar as it gives the commission a fairly open-ended remit to recommend best practices, and this liberality could ultimately produce dangerous unintended consequences.

The Act only calls for two out of nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise. 

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.
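To give a sense of what such proposals entail, below is a minimal, purely illustrative Python sketch of the n-of-n secret-splitting idea that underlies "multi-key escrow" schemes, in which no single party can unlock anything alone. It is not the design of any actual proposal, and the parties and function names are hypothetical.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; all n shares are required to reconstruct it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    return reduce(xor_bytes, shares)

# Hypothetical example: a device key escrowed across three independent parties
# (say, the user, the vendor, and a court-supervised custodian).
device_key = os.urandom(32)
shares = split_key(device_key, 3)

assert recombine(shares) == device_key      # all three shares together recover the key
assert recombine(shares[:2]) != device_key  # any strict subset reveals nothing useful
```

The point of the sketch is only to show why such schemes raise hard design questions (who holds shares, under what process they are combined), which is precisely the kind of technical detail a policy commission is ill-suited to dictate.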

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM—including possible conditional access to Section 230's liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn't mean that the new commission is suited to vetting, adopting, and updating technical standards; it clearly isn't. Conversely, to the extent that encryption and similarly complex technologies could be subject to broad policy change, it should be through an explicit and considered democratic process, and not as a by-product of the Act.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]

The public policy community's infatuation with digital privacy has grown by leaps and bounds since the enactment of the GDPR and the CCPA, but COVID-19 may leave the most enduring mark on the direction that privacy policy actually takes. As the pandemic and associated lockdowns began, interesting discussions cropped up about the inevitable conflict between strong privacy fundamentalism and the pragmatic steps necessary to adequately trace the spread of infection.

Emblematic of this controversy is the Apple/Google contact tracing system, software developed for smartphones to assist with the identification of individuals and populations that have likely been exposed to the virus. The debate sparked by the Apple/Google proposal highlights what we miss when we treat "privacy" (however defined) as an end in itself, an end that must necessarily trump other concerns.

The Apple/Google contact tracing efforts

Apple/Google are doing yeoman's work attempting to produce a useful contact tracing API given the headwinds of privacy advocacy they face. Apple's webpage describing its new contact tracing system is a testament to the extent to which strong privacy protections are central to its efforts. Indeed, those protections are in the very name of the service: the "Privacy-Preserving Contact Tracing" program. But, vitally, the utility of the Apple/Google API is ultimately a function of its efficacy as a tracing tool, not of how well it protects privacy.

Apple/Google — despite the complaints of some states — are rolling out their COVID-19-tracking services with notable limitations. Most prominently, the APIs will not allow collection of location data, and will only function when users explicitly opt in. This last point is important because there is evidence that opt-in requirements, by their nature, tend to reduce the flow of information in a system, and when we are considering tracing solutions to an ongoing pandemic, surely less information is not optimal. Further, all of the data collected through the API will be anonymized, preventing even healthcare authorities from identifying particular infected individuals.
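For a sense of how anonymized, rotating identifiers can support exposure matching without identity or location data, here is a minimal Python sketch of the general approach. It is an illustration only, not the actual Apple/Google Exposure Notification cryptography (which uses its own key-derivation scheme); every function name and parameter below is a simplifying assumption.

```python
import os
import hashlib

# Illustrative only; NOT the real Exposure Notification key schedule.

def new_daily_key() -> bytes:
    """A random key generated on-device each day, never tied to identity or location."""
    return os.urandom(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    """A short-lived identifier broadcast over Bluetooth for one ~15-minute interval."""
    return hashlib.sha256(day_key + interval.to_bytes(2, "big")).digest()[:16]

# Phone A broadcasts rolling IDs through the day; phone B records the IDs it hears nearby.
key_a = new_daily_key()
broadcast_a = [rolling_id(key_a, i) for i in range(96)]   # 96 fifteen-minute intervals
heard_by_b = set(broadcast_a[40:43])                      # B was near A for ~45 minutes

# If A later tests positive and opts in, only A's daily keys are uploaded.
# B re-derives the rolling IDs locally and checks for matches, learning only
# "I was near an infected device," never who that was or where it happened.
reported_keys = [key_a]
exposed = any(rolling_id(k, i) in heard_by_b
              for k in reported_keys
              for i in range(96))
print("possible exposure:", exposed)   # True
```

The sketch also illustrates the policy tradeoff in the text: because matching happens on-device against anonymized keys, health authorities get exposure signals but cannot identify particular infected individuals.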

These restrictions prevent the tool from being as effective as it could be, but it's not clear how Apple/Google could do any better given the political climate. For years, the Big Tech firms have been villainized by privacy advocates who accuse them of spying on kids and cavalierly disregarding consumer privacy as they treat individuals' data as just another business input. The problem with this approach is that, in the midst of a generational crisis, our best tools are being excluded from the fight. Which raises the question: perhaps we have privacy all wrong?

Privacy is one value among many

The U.S. constitutional order explicitly protects our privacy as against state intrusion in order to guarantee, among other things, fair process and equal access to justice. But this strong presumption against state intrusion—far from establishing a fundamental or absolute right to privacy—only accounts for part of the privacy story. 

The Constitution's limit is a recognition of the fact that we humans are highly social creatures and that privacy is one value among many. Properly conceived, privacy protections are themselves valuable only insofar as they protect other things we value. Jane Bambauer explored some of this in an earlier post where she characterized privacy as, at best, an "instrumental right" — that is, a tool used to promote other desirable social goals such as "fairness, safety, and autonomy."

Following from Jane’s insight, privacy — as an instrumental good — is something that can have both positive and negative externalities, and needs to be enlarged or attenuated as its ability to serve instrumental ends changes in different contexts. 

According to Jane:

There is a moral imperative to ignore even express lack of consent when withholding important information that puts others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their client’s and ward’s privacy …  this same logic applies at scale to the collection and analysis of data during a pandemic.

Indeed, dealing with externalities is one of the most common and powerful justifications for regulation, and an extreme form of “privacy libertarianism” —in the context of a pandemic — is likely to be, on net, harmful to society.

Which brings us back to the efforts of Apple/Google. Even if those firms wanted to risk the ire of privacy absolutists, it's not clear that they could do so without incurring tremendous regulatory risk, uncertainty, and a popular backlash. As a statutory matter, the CCPA and the GDPR chill experimentation in the face of potentially crippling fines, while the FTC Act's Section 5 prohibition on "unfair or deceptive" practices is open to interpretations that could result in existentially damaging outcomes. Further, some polling suggests that the public appetite for contact tracing is not particularly high, though, as is often the case, such pro-privacy poll results rarely give appropriate weight to the tradeoffs involved.

As a general matter, it’s important to think about the value of individual privacy, and how best to optimally protect it. But privacy does not stand above all other values in all contexts. It is entirely reasonable to conclude that, in a time of emergency, if private firms can devise more effective solutions for mitigating the crisis, they should have more latitude to experiment. Knee-jerk preferences for an amorphous “right of privacy” should not be used to block those experiments.

Much as with the Cosmic Turtle, it's tradeoffs all the way down. Most of the U.S. is in lockdown, and while we vigorously protect our privacy, we risk frustrating the creation of tools that could put a light at the end of the tunnel. We are, in effect, trading liberty and economic self-determination for privacy.

Once the worst of the Covid-19 crisis has passed — hastened possibly by the use of contact tracing programs — we can debate the proper use of private data in exigent circumstances. For the immediate future, we should instead be encouraging firms like Apple/Google to experiment with better ways to control the pandemic. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]


The ongoing pandemic has been an opportunity to explore different aspects of the human condition. For myself, I have learned that, despite a deep commitment to philosophical (neo- or classical-) liberalism, at heart I am pragmatic. I would prefer a society that optimizes for more individual liberty, but I am emphatically not someone who would entertain the idea of using crises to advance my agenda when doing so is not clearly in service of ameliorating immediate problems.

Sadly, I have also learned that there are those who are not similarly pragmatic, and who are willing to advance their ideological agenda come hell or high water. In this regard, I was disappointed yesterday to see the Gurry IP/COVID Letter being passed around Twitter, calling for widespread, worldwide interference with the property rights of IPR holders.

The letter calls for a scattershot set of “remedies” to the crisis that would open access to copyright- and patent-protected inventions and content, including (among other things): 

  • voluntary licensing and non-enforcement of IP;
  • abrogation of IPR by WIPO members using the "flexibility" in the international IP regime; 
  • the removal of geographical restrictions on IP licenses;
  • forcing patents into COVID-19 patent pools; and 
  • the implementation of compulsory licensing. 

And, unlike many prior efforts to push the envelope on weakening IP protections, the Gurry Letter also calls for measures that would weaken trade secrets and expose confidential business information in order to “achieve universal and equitable access to COVID-19 medicines and medical technologies as soon as reasonably possible.”

Notably, nothing in the letter suggests that any of these measures should be regarded as temporary.

We all want treatments for infection, vaccines for prevention, and ample supply of personal protective equipment as soon as possible, but if all the demands in this letter were met, it would do little to increase the supply of any of these things in the short term, while undermining incentives to develop new treatments, vaccines and better preventative tools in the long run. 

Fundamentally, the letter reflects a willingness to use the COVID-19 pandemic to pursue an agenda that lacks merit and would be dismissed in the normal course of affairs. 

What is most certainly the case is that we need more innovation now, and we need it faster. There is no reason to believe that mandating open source status or forcing compulsory licensing on the firms doing that work will encourage that work to proceed with all due haste—and every indication that the opposite is the case. 

Where there are short term shortages of certain products that might be produced in much larger quantities by relaxing IP, companies are responding by doing just that—voluntarily. But this is fundamentally different from the imposition of unlimited compulsory licenses.

Further, private actors have displayed an impressive willingness to provide free or low cost access to technologies and content—without government coercion. The following is a short list of some of the content and inventions that have been opened up:

Culture, Fitness & Entertainment

  • "HBO Will Stream 500 Hours of Free Programming, Including Full Seasons of 'Veep,' 'The Sopranos,' 'Silicon Valley'"
  • Dozens (or more) of artists, both famous and lesser known, are releasing free back catalog performances or are taking part in free live streaming sessions on social media platforms. Notably, viewers are often welcome to donate or "pay what they want" to help support these artists (more on this below).
  • The NBA, NFL, and NHL are offering free access to their back catalogue of games.
  • A large array of music production software can now be used free on extended trials for 3 months (or completely free and unlimited in some cases). 
  • CBS All Access expanded its free trial period.
  • Neil Gaiman and Harper Collins granted permission to Levar Burton to livestream readings from their catalogs.
  • Disney is releasing movies early onto its (paid) Disney+ services.
  • Gold’s Gym is providing free access to its app-based workouts.
  • The Met is streaming free recordings of its Live in HD series.
  • The Seattle Symphony is offering free access to some of its recorded performances.
  • The UK National Theater is streaming some of its most popular plays for free.
  • Andrew Lloyd Webber is streaming his shows online for free.

Science, News & Education

  • Scholastica released free content intended to help educate students stuck at home while sheltering-in-place. 
  • Nearly 100 academic journals, societies, institutes, and companies signed a commitment to make research and data on COVID-19 freely available, at least for the duration of the outbreak.
  • The Atlantic lifted paywall restrictions on access to its COVID-19-related content.
  • The New England Journal of Medicine is allowing free access to COVID-19-related resources.
  • The Lancet allows free access to research it publishes on COVID-19.
  • All material published by The BMJ on the coronavirus outbreak is freely available.
  • The AAAS-published Science allows free access to its coronavirus research and commentary.
  • Elsevier gave full access to its content on its COVID-19 Information Center for PubMed Central and other public health databases.
  • The American Economic Association announced open access to all of its journals until the end of June.
  • JSTOR expanded free access to some of its scholarship.

Medicine & Technology

  • The Global Center for Medical Design is developing license-free PPE designs that can be quickly implemented by manufacturers.
  • Medtronic published “design specifications for the Puritan Bennett 560 (PB560) to allow innovators, inventors, start-ups, and academic institutions to leverage their own expertise and resources to evaluate options for rapid ventilator manufacturing.” It additionally provided software licenses for this technology.
  • AbbVie announced it won’t enforce its patent rights for Kaletra—a drug that may provide treatment for COVID-19 infections. Israel had earlier indicated it would impose compulsory licenses for the drug, but AbbVie is allowing use worldwide. The company, moreover, had donated supplies of the drug to China earlier in the year when the outbreak first became apparent.
  • Google is working with health researchers to provide anonymized and aggregated user location data. 
  • Cisco has extended free licenses and expanded usage counts at no extra charge for three of its security technologies to help strained IT teams and partners ready themselves and their clients for remote work.
  • Microsoft is offering free subscriptions to its Teams product for six months.
  • Zoom expanded its free access and other limitations for educational institutions around the world.

Incentivize innovation, now more than ever

In addition to undermining the short-term incentives to draw more research resources into the fight against COVID-19, using this crisis to weaken the IP regime will cause long-term damage to the economies of the world. We still will need creators making new cultural products and researchers developing new medicines and technologies; weakening the IP regime will undermine the delicate set of incentives that cultural and scientific production depends upon. 

Any clear-eyed assessment of the broader course of the pandemic and the response to it gives the lie to the notion that IP rights are oppressive or counterproductive. It is the pharmaceutical industry—hated as it may be in some quarters—that will be able to marshal the resources and expertise to develop treatments and vaccines. And it is artists and educators producing cultural content who (theoretically) depend on the licensing revenues of their creations for survival. 

In fact, one of the things that the pandemic has exposed is the fragility of artists’ livelihoods and the callousness with which they are often treated. Shortly after the lockdowns began in the US, the well-established rock musician David Crosby said in an interview that, if he could not tour this year, he would face tremendous financial hardship. 

As unfortunate as that may be for Crosby, a world-famous musician, imagine how much harder it is for struggling musicians who can hardly hope to achieve a fraction of Crosby’s success for their own tours, let alone for licensing. If David Crosby cannot manage well for a few months on the revenue from his popular catalog, what hope do small artists have?

Indeed, the flood of unable-to-tour artists currently offering "donate what you can" streaming performances is a symptom of the destructive assault on IPR exemplified in the letter. For decades, these artists have been told that they can only legitimately make money through touring. Although the potential to actually make a living while touring is possibly out of reach for many or most artists, those who had been scraping by have now been brought to the brink of ruin as the ability to tour is taken away. 

There are certainly ways the various IP regimes can be improved (like, for instance, figuring out how to help creators make a living from their creations), but now is not the time to implement wishlist changes to an otherwise broadly successful rights regime. 

And, critically, there is a massive difference between achieving wider distribution of intellectual property voluntarily as opposed to through government fiat. When done voluntarily the IP owner determines the contours and extent of “open sourcing” so she can tailor increased access to her own needs (including the need to eat and pay rent). In some cases this may mean providing unlimited, completely free access, but in other cases—where the particular inventor or creator has a different set of needs and priorities—it may be something less than completely open access. When a rightsholder opts to “open source” her property voluntarily, she still retains the right to govern future use (i.e. once the pandemic is over) and is able to plan for reductions in revenue and how to manage future return on investment. 

Our lawmakers can consider targeted action should a situation arise in which a particular piece of property is genuinely required for the public good. Otherwise, as responsible individuals, we should restrain ourselves from trying to capitalize on the current crisis to ram through our policy preferences. 

The following is the first in a new blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available at https://truthonthemarket.com/symposia/the-law-economics-of-the-covid-19-pandemic/.


Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Nuechterlein on the lack of guidance in the draft vertical merger guidelines

Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Relatedly, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Nuechterlein is that the draft guidelines blur the distinction between "harm to competition" and "harm to competitors" by, for example, focusing on changes to rivals' access to inputs and lost sales.

Nuechterlein also takes issue with what he characterizes as the "arbitrarily low" 20 percent thresholds. In particular, he finds that linking the two separate 20 percent thresholds (relevant market and related product) leaves too small a set of situations in which firms might qualify for the safe harbor. As drafted, he believes, the provision does more to preserve the agencies' discretion than to provide clarity to firms and consumers.

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity and, in some cases, the 1984 guidelines were better aligned with the 2008 EU guidelines. Losing that specificity in the new draft guidelines sets back the standards. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but believes that the 20 percent thresholds don't reveal enough information. She believes that it would be helpful "to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa."

Slade also takes issue with the way the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases in which it should not be.

For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. They are much more complex than horizontal simulations, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how the agencies view the critical differences between horizontal and vertical mergers. 

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan’s post focused on what he referred to as “pure” vertical mergers that do not include concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of speculative theories of vertical harms that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers, 

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak's post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm, such as the threat of regulatory evasion, and then points out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines' attempt to clarify how the Agencies think about mergers and competition actually demonstrates how complex markets, related products, and dynamic competition are.

Fruits goes on to describe how the assumptions necessary to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even "obvious" markets and products a fraught exercise that devolves into a battle of experts. 

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is "arbitrarily low" given the generally procompetitive nature of vertical combinations. 

Instead, DOJ and the FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent threshold for challenges along with a post-merger HHI measure below 2000.

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical guidelines miss a valuable opportunity to clarify speculative theories of harm based on "potential competition." 

In particular, the draft guidelines should address the literature that demonstrates that vertical acquisition of small tech firms by large tech firms is largely complementary and procompetitive. Large tech firms are good at process innovation and the smaller firms are good at product innovation leading to specialization and the realization of efficiencies through acquisition. 

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. Particularly, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold "safe harbor" will likely create a problematic "sufficient condition" for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines insofar as they lack an adequate market definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method of market definition. 

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need either to repudiate this theory or else to more fully explain the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency and non-efficiency related goals are involved in a choice of integration methods. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while Agencies have the opportunity to enforce in either the case of merger or contract, defendants can frequently only realize efficiencies in the case of merger. Therefore, calling for a contract/merger equivalency amounts to a preference for more enforcement per se, and is less solicitous of concerns about loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for justifying the weighting of enforcement so heavily against vertical mergers. 

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines' focus on market share thresholds leads to an overly narrow view of competition. Instead of looking at static market analyses, the Agencies should include a richer set of observations, including those that involve "organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets."

Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.

[TOTM: The following is the sixth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Kristian Stout, Associate Director at the International Center for Law & Economics.

There is a push underway to punish big tech firms, both for alleged wrongdoing and in an effort to prevent future harm. But the movement to use antitrust law to punish big tech firms is far more about political expediency than it is about sound competition policy. 

For a variety of reasons, there is a current of dissatisfaction in society with respect to the big tech companies, some of it earned, and some of it unearned. Between March 2019 and September 2019, polls suggested that Americans were increasingly willing to entertain breaking up or otherwise increasing regulation on the big tech firms. No doubt, some significant share of this movement in popular opinion is inspired by increasingly negative reporting from major news outlets (see, for a small example, 1, 2, 3, 4, 5, 6, 7, 8). But, the fact that these companies make missteps does not require that any means at hand should be used to punish them. 

Further, not every tool is equally suited to dealing with the harms these companies could cause, and we must be mindful that, even when some harm occurs, these companies generate a huge amount of social welfare. Our policy approaches to dealing with misconduct, therefore, must be appropriately measured. 

To listen to the media, politicians, and activists, however, one wouldn't know that anything except extreme action — often using antitrust law — is required. Presidential hopefuls want to smash the big tech companies, while activists and academics see evidence of anticompetitive conduct in every facet of these companies' behavior. Indeed, some claim that the firms themselves are per se a harm to democracy.

The confluence of consumer dissatisfaction and activist zeal leads to a toxic result: not wanting to let a good crisis go to waste, activists and politicians push the envelope on the antitrust theories they want to apply to the detriment of the rule of law. 

Consumer concerns

Missteps by the big tech companies, both perceived and real, have led to some degree of consumer dissatisfaction. In terms of real harms, data breaches and privacy scandals have gained more attention in recent years and are undoubtedly valid concerns of consumers. 

In terms of perceived harms, it has, for example, become increasingly popular to blame big tech companies for tilting the communications landscape in favor of one or another political preference. Ironically, the accusations leveled against big tech are frequently at odds. Some progressives blame big tech for helping Donald Trump to be elected president, while some conservatives believe a pervasive bias in Silicon Valley in favor of progressive policies harms conservative voices. 

But, at the same time, consumers are well familiar with the benefits that search engines, the smartphone revolution, and e-commerce have provided to society. The daily life of the average consumer is considerably better today than it was in past decades thanks to the digital services and low-cost technology that are within reach of even the poorest among us. 

So why do consumers appear to be listening to the heated rhetoric of the antitrust populists?

Paul Seabright pointed to one of the big things that I think is motivating consumer willingness to listen to populist attacks on otherwise well-regarded digital services. In his keynote speech at ICLE’s “Dynamic Competition and Online Platforms” conference earlier this month, he discussed the role of trust in the platform ecosystem. According to Seabright, 

Large digital firms create anxiety in proportion to how much they meet our needs… They are strong complements to many of our talents and activities – but they also threaten to provide lots of easy substitutes for us and our talents and activities… The more we trust them the more we (rightly) fear the abuse of their trust.

Extending this insight, we imbue these platforms with a great deal of trust because they are so important to our daily lives. And we have a tendency to respond dramatically to (perceived or actual) violations of trust by these platforms because they do such a great job in nearly every respect. When a breach of that trust happens — even if its relative impact on our lives is small, and the platform continues to provide a large amount of value — we respond not in terms of its proportionate effect on our lives, but in the emotional terms of one who has been wronged by a confidant.

It is that emotional lever that populist activists and politicians are able to press. The populists can frame the failure of the firms as the sum total of their existence, and push for extreme measures that otherwise (and even a few short years ago) would have been unimaginable. 

Political opportunism

The populist crusade is fueled by the underlying sentiment of consumers, but has its own separate ends. Some critics of the state of antitrust law are seeking merely a realignment of priorities within existing doctrine. The pernicious crusade of antitrust populists, however, seeks much more. These activists (and some presidential hopefuls) want nothing short of breaking up big tech and of returning the country to some ideal of “democracy” imagined as having existed in the hazy past when antitrust laws were unevenly enforced.

It is a laudable goal to ensure that the antitrust laws are being properly administered on their own terms, but it is an entirely different project to crusade to make antitrust great again based on the flawed understandings from a century ago. 

In few areas of life would most of us actually yearn to reestablish the political and social order of times gone by — notwithstanding presidential rhetoric. The sepia-toned crusade to smash tech companies into pieces inherits its fervor from Louis Brandeis and his fellow travelers who took on the mustache-twisting villains of their time: Carnegie, Morgan, Mellon and the rest of the allegedly dirty crew of scoundrels. 

Matt Stoller’s recent book Goliath captures this populist dynamic well. He describes the history of antitrust passionately, as a morality play between the forces of light and those of darkness. On one side are heroes like Wright Patman, a scrappy poor kid from Texas who went to a no-name law school and rose to prominence in Washington as an anti-big-business crusader. On the other side are shadowy characters like Andrew Mellon, who cynically manipulated his way into government after growing bored with administering his vast, immorally acquired economic empire. 

A hundred years ago the populist antitrust quest was a response to the success of industrial titans, and their concentration of wealth in the hands of relatively few. Today, a similar set of arguments are directed at the so-called big tech companies. Stoller sees the presence of large tech firms as inimical to democracy itself — “If we don’t do something about big tech, something meaningful, we’ll just become a fascist society. It’s fairly simple.” Tim Wu has made similar claims (which my colleague Alec Stapp has ably rebutted).

In the imagination of the populists, there are good guys and bad guys and the optimal economy would approach atomistic competition. In such a world, the “little guy” can have his due and nefarious large corporations cannot become too economically powerful relative to the state.

Politicians enter this mix of consumer sentiment and populist activism with their own unique priors. On the one hand, consumer dissatisfaction makes big tech a ripe target to score easy political points. It’s a hot topic that fits easily into fundraising pitches. After all, who really cares if the billionaires lose a couple of million dollars through state intervention?

In truth, I suspect that politicians are ambivalent about what exactly to do to make good on their anti-big tech rhetoric. They will be forced to admit that these companies provide an enormous amount of social value, and if they destroy that value, fickle voters will punish them. The threat at hand is if politicians allow themselves to be seduced by the simplistic policy recommendations of the populists.

Applying the right tool to the job

Antitrust is a seductive tool to use against politically disfavored companies. It is an arcane area of law which, to the average observer, will be just so much legalese. It is, therefore, relatively easy to covertly import broader social preferences through antitrust action and pretend that the ends of the law aren’t being corrupted. 

But this would be a mistake. 

The complicated problem with the big tech companies is that they indeed could be connected to the broader set of social ills mentioned above. It's complicated because it's highly unlikely that these platforms cause the problems in society, or that any convenient legal tool like antitrust will do much to actually remedy the problems we struggle with. 

Antitrust is a goal-focused body of law, and the goal it seeks—optimizing consumer welfare—is distinctly outside of the populist agenda. The real danger in the populist campaign is not just the social losses we will incur if they successfully smash productive firms, but the long term harm to the rule of law. 

The American system of law is fundamentally predicated on an idea of promulgating rules of general applicability, and resorting to sector- or issue-specific regulations when those general bodies of law are found to be inapplicable or ineffective. 

Banking regulation is a prime example. Banks are subject to general regulation from entities like the FDIC and the Federal Reserve, but, for particular issues, are subject to other agencies and laws. Requirements for deterring money laundering, customer privacy obligations, and rules mandating the separation of commercial banking from investment activities all were enacted through specific legislation aimed to tailor the regulatory regime that banks faced. 

Under many of the same theories being propounded by the populists, antitrust should have been used for at least some of these ends. Couldn’t you frame the “problem” of mixing commercial banking and investment as one of impermissible integration that harms the competitive process? Wouldn’t concerns for the privacy of bank consumers sound in exactly the same manner as that proposed by advocates who claim that concentrated industries lack the incentive to properly include privacy as a dimension of product quality?

But if we hew to a rigorous interpretation of competition policy, the problem for critics is that their claims that actually sound in antitrust – that Amazon predatorily prices, or Google engages in anticompetitive tying, for example – are highly speculative and not at all an easy play if pressed in litigation. So they try "new" theories of antitrust as a way to achieve preferred policy ends. Changing well-accepted doctrine, such as removing the recoupment requirement from predatory pricing in order to favor small firms, or introducing broad privacy or data sharing obligations as part of competition "remedies," is a terrible path for the stability of society and the health of the rule of law. 

Concerns about privacy, hate speech, and, more broadly, the integrity of the democratic process are critical issues to wrestle with. But these aren't antitrust problems. If we lived in a different sort of society, where the rule of law meant far less than it does here, it's conceivable that we could use whatever legal tool was at hand to right the wrongs of society. But this isn't a good answer if you take constitutional design seriously; allowing antitrust law to "solve" broader social problems is to suborn Congress into giving away its power to a relatively opaque enforcement process. 

We should not have our constitution redesigned by antitrust lawyers.

Congress needs help understanding the fast-moving world of technology. That help is not going to arise by reviving the Office of Technology Assessment ("OTA"), however. The OTA is an idea for another age, while the tweaks necessary to shore up the existing technology resources available to Congress are relatively modest. 

Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the revolving-door implications that entails. 

The real problem with reviving the OTA is that it distracts Congress from considering that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st century technology and the economy. A new OTA might help with the former problem, but may in fact only exacerbate the latter. 

The OTA is a poor fit for the modern world

The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive the OTA. 

To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technologically and scientifically based assumptions contained in proposed legislation. For example, in the 90s the OTA provided useful technical information to Congress about how encryption technologies worked as it was considering legislation such as CALEA. 

Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for performing these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process. 

A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”

The upshot?

A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges

The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already contained most of the resources that Congress needed. The report recommended enhancing those existing resources and creating a science and technology coordinator position in Congress to facilitate the hiring of appropriate personnel for committees, among other duties. 

The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging trends over the long term. This was an original function of the OTA.

According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive[] to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.

The NAPA report, however, has its own recommendation that does not involve the OTA: allow the newly created science and technology coordinator to create annual horizon-scanning reports. 

A new OTA unnecessarily increases the surface area for regulatory capture

Apart from the likelihood that a new OTA would be a mere redundancy, it presents yet another vector for regulatory capture (or at least endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page calling for a revival of the OTA:

This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.

Yang’s wishcasting aside, there is just no way that you are going to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multibillion-dollar firms working on creating cutting-edge technologies. What you will do is create an interesting, temporary post-graduate school or mid-career stop-over point where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making stuff, not writing research reports for Congress.

The same experts who are high-level enough to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable. 

The real problem to solve is a lack of modern governance

Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents. 

Assuming for the moment that there is some kind of horizon-scanning capability that a new OTA could provide, it necessarily fails, even on these terms. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically relevant feature) of technology, the industry or product under examination has most likely already progressed far enough in its development that it’s far too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability will help, in a dynamic economy, truly predicting the technology that will impact society seems a bit like trying to predict the weather on a particular day a year hence.

Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to the psychological literature indicating that, in many cases involving technical subjects, more information given to legislators only makes them overconfident. That is to say, they can cite more facts, but put fewer of them to good use when writing laws. 

The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or segment. The world is simply moving too fast.  

It would be far more useful for Congress to explore legislation that encourages the firms involved in highly dynamic industries to develop and enforce voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that

[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.

Winn is thinking about what a “governance” response to privacy and to crises like the Cambridge Analytica scandal should be, and posits those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on the GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of removing all of the risk-management features that the private sector is good at developing. 

Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood: 

Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.

The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was best suited, if it was ever well suited, to a world that moved at an industrial-era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.

The engineers working at firms developing next-generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information these engineers have to surface and become an ongoing part of the governing standards.

*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.

On March 19-20, 2020, the University of Nebraska College of Law will be hosting its third annual roundtable on closing the digital divide. UNL is expanding its program this year to include a one-day roundtable that focuses on the work of academics and researchers who are conducting empirical studies of the rural digital divide. 

Academics and researchers interested in having their work featured in this event are now invited to submit pieces for consideration. Submissions are due by November 18, 2019, and should be made using this form. The authors of papers and projects selected for inclusion will be notified by December 9, 2019. Research honoraria of up to $5,000 may be awarded for selected projects.

Example topics include cost studies of rural wireless deployments, comparative studies of the effects of ACAM funding, event studies of legislative interventions such as allowing customers unserved by carriers in their home exchange to request service from carriers in adjoining exchanges, comparative studies of the effectiveness of various federal and state funding mechanisms, and cost studies of different sorts of municipal deployments. This list is far from exhaustive.

Any questions about this event or the request for projects can be directed to Gus Hurwitz at ghurwitz@unl.edu or Elsbeth Magilton at elsbeth@unl.edu.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are bolstered by sympathetic personal anecdotes and emotional appeals. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued midwest farmers from shortages that would have decimated their businesses, it invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market: discovering slack or scarce resources in the system and managing them so that they are available when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service, which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning”  per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs  — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, navigating such a regulatory environment can be just as difficult as managing the actual provision of service. 

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of those obligations are equivalent to the marginal cost the cable operator incurs in providing the “free” services and facilities, plus the opportunity cost (i.e., the revenue foregone by dedicating its fixed assets to those obligations rather than to other uses). Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).
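To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is invented purely for illustration (none comes from the statute, the proposed Order, or our letter); the point is only that once the marginal cost of “free” services and the foregone revenue on dedicated assets are counted, an operator’s effective burden can exceed the nominal five percent ceiling even when the cash fee sits exactly at the cap.

```python
# Hypothetical illustration of the pass-through logic above. All figures are
# invented for this sketch; they do not come from the statute, the Order, or
# the ICLE letter.

gross_cable_revenue = 10_000_000   # hypothetical annual gross cable revenue ($)
statutory_cap_rate = 0.05          # the Cable Act's five percent ceiling on franchise fees

# Cash franchise fee set exactly at the cap.
cash_franchise_fee = statutory_cap_rate * gross_cable_revenue          # $500,000

# Hypothetical in-kind obligations demanded on top of the cash fee.
marginal_cost_of_free_services = 150_000   # cost of providing "free" services and facilities
foregone_asset_revenue = 100_000           # opportunity cost of capacity dedicated to the LFA

effective_burden = (cash_franchise_fee
                    + marginal_cost_of_free_services
                    + foregone_asset_revenue)
effective_rate = effective_burden / gross_cable_revenue

print(f"Nominal fee rate:   {cash_franchise_fee / gross_cable_revenue:.1%}")  # 5.0%
print(f"Effective fee rate: {effective_rate:.1%}")                            # 7.5%
```

However the particular numbers shake out, this is why the statute’s broad definition of a franchise fee as any “tax, fee, or assessment of any kind” matters: a cap that counted only cash payments would invite evasion through non-cash exactions.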

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed use networks (i.e. imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs or other governmental entities cannot regulate non-cable services provided via franchised cable systems.

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.

One of the main concerns I had during the IANA transition was the extent to which the newly independent organization would be able to behave impartially, implementing its own policies and bylaws in an objective and non-discriminatory manner, and not be unduly influenced by specific  “stakeholders”. Chief among my concerns at the time was the extent to which an independent ICANN would be able to resist the influence of governments: when a powerful government leaned on ICANN’s board, would it be able to adhere to its own policies and follow the process the larger multistakeholder community put in place?

It seems my concern was not unfounded. Amazon, Inc. has been in a long-running struggle with the countries of the Amazonian Basin in South America over the use of the generic top-level domain (gTLD) .amazon. In 2014, the ICANN board (which was still nominally under the control of the US’s NTIA) uncritically accepted the nonbinding advice of the Governmental Advisory Committee (“GAC”) and denied Amazon, Inc.’s application for .amazon. In 2017, an Independent Review Process panel reversed the board’s decision, because

[the board] failed in its duty to explain and give adequate reasons for its decision, beyond merely citing to its reliance on the GAC advice and the presumption, albeit a strong presumption, that it was based on valid and legitimate public policy concerns.  

Accordingly, the board was directed to reconsider the .amazon petition and

make an objective and independent judgment regarding whether there are, in fact, well-founded, merits-based public policy reasons for denying Amazon’s applications.

In the two years since that decision, a number of proposals were discussed between Amazon, Inc. and the Amazonian countries as they sought a mutually agreeable resolution to the dispute, none of which succeeded. In March of this year, the board acknowledged the failed negotiations and announced that the parties had four more weeks to try again; if no agreement were reached in that time, Amazon, Inc. would be permitted to submit a proposal addressing the Amazonian countries’ cultural-protection concerns.

Predictably, that time elapsed and Amazon, Inc. submitted its proposal, which includes a public interest commitment that would allow the Amazonian countries access to certain second level domains under .amazon for cultural and noncommercial use. For example, Brazil could use a domain such as http://www.br.amazon to showcase the culturally relevant features of the portion of the Amazonian river that flows through its borders.

Prima facie, this seems like a reasonable way to ensure that the cultural interests of those living in the Amazonian region are adequately protected. Moreover, in its first stated objection to Amazon, Inc. having control of the gTLD, the GAC indicated that this was its concern:

[g]ranting exclusive rights to this specific gTLD to a private company would prevent the use of  this domain for purposes of public interest related to the protection, promotion and awareness raising on issues related to the Amazon biome. It would also hinder the possibility of use of this domain to congregate web pages related to the population inhabiting that geographical region.

Yet Amazon, Inc.’s proposal to protect just these interests was rejected by the Amazonian countries’ governments. The counteroffer from those governments was that they be permitted to co-own and administer the gTLD, that their governance interest be constituted in a steering committee on which Amazon, Inc. be given only a 1/9th vote, that they be permitted a much broader use of the gTLD generally and, judging by the conspicuous lack of language limiting use to noncommercial purposes, that they have the ability to use the gTLD for commercial purposes.

This last point certainly must be a nonstarter. Amazon, Inc.’s use of .amazon is naturally going to be commercial in nature. If eight other “co-owners” were permitted a backdoor to using the ‘.amazon’ name in commerce, trademark dilution seems like a predictable, if not inevitable, result. Moreover, the entire point of allowing brand gTLDs is to help responsible brand managers ensure that consumers are receiving the goods and services they expect on the Internet. Commercial use by the Amazonian countries could easily lead to a situation where merchants selling goods of unknown quality are able to mislead consumers by free riding on Amazon, Inc.’s name recognition.

This is a big moment for Internet governance

Theoretically, the ICANN board could decide this matter as early as this week — but apparently it has opted to treat this meeting as merely an opportunity for more discussion. That the board would consider not following through on its statement in March that it would finally put this issue to rest is not an auspicious sign that the board intends to take its independence seriously.

An independent ICANN must be able to stand up to powerful special interests when it comes to following its own rules and bylaws. This is the very concern that most troubled me before the NTIA cut the organization loose. Introducing more delay suggests that the board lacks the courage of its convictions. The Amazonian countries may end up irritated with the ICANN board, but ICANN is either an independent organization or it’s not.

Amazon, Inc. followed the prescribed procedures from the beginning; there is simply no good reason to draw out this process any further. The real fear here, I suspect, is that the board knows that this is a straightforward trademark case and is holding out hope that the Amazonian countries will make the necessary concessions that will satisfy Amazon, Inc. After seven years of this process, somehow I suspect that this is not likely and the board simply needs to make a decision on the proposals as submitted.

The truth is that these countries never even applied for use of the gTLD in the first place; they only became interested in the use of the domain once Amazon, Inc. expressed interest. All along, these countries maintained that they merely wanted to protect the cultural heritage of the region — surely a fine goal. Yet, when pressed to the edge of the timeline on the process, they produced a proposal that would theoretically permit them to operate commercial domains.

This is a test for ICANN’s board. If it doesn’t want to risk offending powerful parties, it shouldn’t open up the DNS to gTLDs because, inevitably, there will exist aggrieved parties that cannot be satisfied. Amazon, Inc. has submitted a solid proposal that allows it to protect both its own valid trademark interests in its brand as well as the cultural interests of the Amazonian countries. The board should vote on the proposal this week and stop delaying this process any further.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace that I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms in order to obtain data that it then uses either to promote its own direct sales, or else to develop competing products under its private-label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are taking a calculated risk that, as business owners, they would typically be expected to manage. The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that might raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.