
Though details remain scant (and thus any final judgment would be premature), initial word on the new Trans-Atlantic Data Privacy Framework agreed to, in principle, by the White House and the European Commission suggests that it could be a workable successor to the Privacy Shield agreement that was invalidated by the Court of Justice of the European Union (CJEU) in 2020.

This new framework agreement marks the third attempt to create a lasting and stable legal regime to permit the transfer of EU citizens’ data to the United States. In the wake of the 2013 revelations by former National Security Agency contractor Edward Snowden about the extent of the United States’ surveillance of foreign nationals, the CJEU struck down (in its 2015 Schrems decision) the then-extant “safe harbor” agreement that had permitted transatlantic data flows. 

In the 2020 Schrems II decision (both cases were brought by Austrian privacy activist Max Schrems), the CJEU similarly invalidated the Privacy Shield, which had served as the safe harbor’s successor agreement. In Schrems II, the court found that U.S. foreign surveillance laws were not strictly proportional to the intelligence community’s needs and that those laws also did not give EU citizens adequate judicial redress.  

This new “Privacy Shield 2.0” agreement, announced during President Joe Biden’s recent trip to Brussels, is intended to address the issues raised in the Schrems II decision. In relevant part, the joint statement from the White House and European Commission asserts that the new framework will: “[s]trengthen the privacy and civil liberties safeguards governing U.S. signals intelligence activities; Establish a new redress mechanism with independent and binding authority; and Enhance its existing rigorous and layered oversight of signals intelligence activities.”

In short, the parties believe that the new framework will ensure that U.S. intelligence gathering is proportional and that there is an effective forum for EU citizens caught up in U.S. intelligence-gathering to vindicate their rights.

As I and my co-authors (my International Center for Law & Economics colleague Mikołaj Barczentewicz and Michael Mandel of the Progressive Policy Institute) detailed in an issue brief last fall, the stakes are huge. While the issue is often framed in terms of social-media use, transatlantic data transfers are implicated in an incredibly large swath of cross-border trade:

According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services. Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

The agreement will be implemented on this side of the Atlantic by a forthcoming executive order from the White House, at which point it will be up to EU courts to determine whether the agreement adequately restricts U.S. intelligence activities and protects EU citizens’ rights. For now, however, it appears at a minimum that the White House took the CJEU’s concerns seriously and made the right kind of concessions to reach agreement.

And now, once the framework is finalized, we just have to sit tight and wait for Mr. Schrems’ next case.

All too frequently, vocal advocates for “Internet Freedom” imagine it exists along just a single dimension: the extent to which it permits individuals and firms to interact in new and unusual ways.

But that is not the sum of the Internet’s social value. The technologies that underlie our digital media remain a relatively new means to distribute content. It is not just the distributive technology that matters, but also the content that is distributed. Thus, the norms and laws that facilitate this interaction of content production and distribution are critical.

Sens. Patrick Leahy (D-Vt.) and Thom Tillis (R-N.C.)—the chair and ranking member, respectively, of the Senate Judiciary Committee’s Subcommittee on Intellectual Property—recently introduced legislation that would require online service providers (OSPs) to comply with a slightly heightened set of obligations to deter copyright piracy on their platforms. This couldn’t come at a better time.

S. 3880, the SMART Copyright Act, would amend Section 512 of the Copyright Act, originally enacted as part of the Digital Millennium Copyright Act of 1998. Section 512, among other things, provides OSPs with a safe harbor from liability for copyright infringements committed by their users. The expectation at the time was that OSPs would work voluntarily with rights holders to develop industry best practices to deal with pirated content, while also allowing the continued growth of the commercial Internet.

Alas, it has become increasingly apparent in the nearly quarter-century since the DMCA was passed that the law has not adequately kept pace with the technological capabilities of digital piracy. In April 2020 alone, U.S. consumers logged 725 million visits to pirate sites for movies and television programming. Close to 90% of those visits were attributable to illegal streaming services that use internet protocol television to distribute pirated content. Such services now serve more than 9 million U.S. subscribers and generate more than $1 billion in annual revenue.

Globally, there are more than 26.6 billion annual illicit views of U.S.-produced movies and 126.7 billion views of U.S.-produced television episodes. A report produced for the U.S. Chamber of Commerce by NERA Economic Consulting estimates the annual impact to the United States to be $30 billion to $70 billion in lost revenue, 230,000 to 560,000 lost jobs, and between $45 billion and $115 billion in lower GDP.

Thus far, the most effective preventive measures have been the filtering solutions adopted by YouTube, Facebook, and Audible Magic, but neither filtering nor any other solution has been adopted industrywide. As the U.S. Copyright Office has observed:

Throughout the Study, the Office heard from participants that Congress’ intent to have multi-stakeholder consensus drive improvements to the system has not been borne out in practice. By way of example, more than twenty years after passage of the DMCA, although some individual OSPs have deployed DMCA+ systems that are primarily open to larger content owners, not a single technology has been designated a “standard technical measure” under section 512(i). While numerous potential reasons were cited for this failure— from a lack of incentives for ISPs to participate in standards to the inappropriateness of one-size-fits-all technologies—the end result is that few widely-available tools have been created and consistently implemented across the internet ecosystem. Similarly, while various voluntary initiatives have been undertaken by different market participants to address the volume of true piracy within the system, these initiatives, although initially promising, likewise have suffered from various shortcomings, from limited participation to ultimate ineffectiveness.

Given the lack of standard technical measures (STMs), the Leahy-Tillis bill would give the Office of the Librarian of Congress (LOC) broad latitude to recommend STMs for everything from off-the-shelf software to open-source software to general technical strategies that can be applied to a wide variety of systems. This would include the power to initiate public rulemakings in which it could propose new STMs or revise or rescind existing ones. The STMs could be as broad or as narrow as the LOC deems appropriate, including being tailored to specific types of content and specific types of providers. Following rulemaking, subject firms would have at least one year to adopt a given STM.

Critically, the SMART Copyright Act would not hold OSPs liable for the infringing content itself, but for failing to make reasonable efforts to accommodate the STM (or for interfering with it). Courts finding an OSP to have violated its obligation of good-faith compliance could award an injunction, damages, and costs.

The SMART Copyright Act is a directionally correct piece of legislation, with two important caveats: much depends on the kinds of STMs the LOC recommends and on how a "violation" is determined for the purposes of awarding damages.

The law would magnify the incentive for private firms to work together with rights holders to develop STMs that more reasonably recruit OSPs into the fight against online piracy. In this sense, the LOC would be best situated as a convener, encouraging STMs to emerge from the broad group of OSPs and rights holders. The fact that the LOC would be able to adopt STMs with or without stakeholders’ participation should provide more incentive for collaboration among the relevant parties.

Short of a voluntary set of STMs, the LOC could nonetheless rely on the technical suggestions and concerns of the multistakeholder community to discern a minimum viable set of practices that constitute best efforts to control piracy. The least desirable outcome—and, I suspect, the one most susceptible to failure—would be for the LOC to examine and select specific technologies. If implemented sensibly, the SMART Copyright Act would create a mechanism to enforce the original goals of Section 512.

The damages provisions are likewise directionally correct but need more clarity. Repeat “violations” allow courts to multiply damages awards. But there is no definition of what counts as a “violation,” nor is there adequate clarity about how a “violation” interacts with damages. For example, is a single infringement on a platform a “violation” such that if three occur, the platform faces treble damages for all the infringements in a single case? That seems unlikely.

More reasonable would be to interpret the provision as saying that a final adjudication that the platform behaved unreasonably is what counts for the purposes of calculating whether damages are multiplied. Then, within each adjudication, damages are calculated for all infringements, up to the statutory damages cap. This interpretation would put teeth in the law, but it’s just one possible interpretation. Congress would need to ensure the final language is clear.
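To make the difference between these readings concrete, below is a minimal sketch comparing them. The per-infringement award, the damages cap, and the thresholds that trigger multiplication are hypothetical placeholders of my own, not figures drawn from the bill.

```python
# Hypothetical comparison of the two readings of "violation" discussed above.
# All dollar figures, caps, and multiplication triggers are placeholder
# assumptions for illustration; they are not the SMART Copyright Act's terms.

STATUTORY_CAP = 150_000          # hypothetical cap on damages per adjudication
PER_INFRINGEMENT_AWARD = 10_000  # hypothetical base award per infringement

def damages_if_each_infringement_is_a_violation(infringements: int, multiplier: int = 3) -> int:
    """Reading A: every infringement is a 'violation,' so three infringements
    alone trigger the multiplier on the whole award in a single case."""
    base = min(infringements * PER_INFRINGEMENT_AWARD, STATUTORY_CAP)
    return base * multiplier if infringements >= 3 else base

def damages_if_adjudication_is_the_violation(infringements: int, prior_adjudications: int,
                                             multiplier: int = 3) -> int:
    """Reading B: only a final adjudication that the OSP behaved unreasonably
    counts as a 'violation'; the multiplier applies only to repeat offenders."""
    base = min(infringements * PER_INFRINGEMENT_AWARD, STATUTORY_CAP)
    return base * multiplier if prior_adjudications >= 2 else base

# A platform facing its first case, with three infringements at issue:
print(damages_if_each_infringement_is_a_violation(3))                      # 90000
print(damages_if_adjudication_is_the_violation(3, prior_adjudications=0))  # 30000
```

The structural point the sketch illustrates is simply that Reading A multiplies damages based on user conduct alone, while Reading B multiplies them only after a platform has already been found to have behaved unreasonably.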

An even better approach would be to make Section 512's safe harbor contingent on an OSP's reasonable compliance. Unreasonable behavior, in that case, would provide a much more straightforward basis for assessing damages, without leaving it to courts to decide what counts as a "violation." Particularly since courts have historically tended to interpret the DMCA in ways that are unfavorable to rights holders (e.g., "red flag" knowledge), it would be much better to create a simple standard here.

This is not to say there are no potential problems. Among the concerns surrounding the promulgation of new STMs are that they could create cybersecurity vulnerabilities, open new avenues for privacy leaks, or accidentally chill speech. Of course, it's possible that there will be costs to implementing an STM, just as there are costs when private firms operate their own content-protection mechanisms. But just because harms can happen doesn't mean they will happen, or that they are insurmountable when they do. The criticisms that have emerged so far have taken on the breathless quality of the empirically unfounded claims that 2012's SOPA/PIPA legislation would spell doom for the Internet. If Section 512 reforms are well-calibrated and sufficiently flexible to adapt to market realities, I think we can reasonably expect them to be, on net, beneficial.

Toward this end, the SMART Copyright Act contemplates, for each proposed STM, a public comment period and at least one meeting with relevant stakeholders, to allow time to understand its likely costs and benefits. This process would provide ample opportunities to alert the LOC to potential shortcomings.

But the criticisms do suggest a potentially valuable change to the bill’s structure. If a firm does indeed discover that a particular STM, in practice, leads to unacceptable security or privacy risks, or is systematically biased against lawful content, there should be a legal mechanism that would allow for good-faith compliance while also mitigating STMs’ unforeseen flaws. Ideally, this would involve working with the LOC in an iterative process to refine relevant compliance obligations.

Congress will soon be wrapped up in the volatile midterm elections, which could make it difficult for relatively low-salience issues like copyright to gain traction. Nonetheless, the Leahy-Tillis bill marks an important step toward addressing online piracy, and Congress should move deliberatively toward that goal.

Activists who railed against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA) a decade ago are today celebrating the 10th anniversary of their day of protest, which they credit with sending the bills down to defeat.

Much of the anti-SOPA/PIPA campaign was based on a gauzy notion of “realizing [the] democratizing potential” of the Internet. Which is fine, until it isn’t.

But despite the activists’ temporary legislative victory, the methods of combating digital piracy that SOPA/PIPA contemplated have been employed successfully around the world. It may, indeed, be time for the United States to revisit that approach, as the very real problems the legislation sought to combat haven’t gone away.

From the perspective of rightsholders, the bill’s most important feature was also its most contentious: the ability to enforce judicial “site-blocking orders.” A site-blocking order is a type of remedy sometimes referred to as a no-fault injunction. Under SOPA/PIPA, a court would have been permitted to issue orders that could be used to force a range of firms—from financial providers to ISPs—to cease doing business with or suspend the service of a website that hosted infringing content.

Under current U.S. law, even when a court finds that a site has willfully engaged in infringement, stopping the infringement can be difficult, especially when the parties and their facilities are located outside the country. While Section 512 of the Digital Millennium Copyright Act does allow courts to issue injunctions, there is ambiguity as to whether it allows courts to issue injunctions that obligate online service providers ("OSPs") not directly party to a case to remove infringing material.

Section 512(j), for instance, provides for issuing injunctions “against a service provider that is not subject to monetary remedies under this section.” The “not subject to monetary remedies under this section” language could be construed to mean that such injunctions may be obtained even against OSPs that have not been found at fault for the underlying infringement. But as Motion Picture Association President Stanford K. McCoy testified in 2020:

In more than twenty years … these provisions of the DMCA have never been deployed, presumably because of uncertainty about whether it is necessary to find fault against the service provider before an injunction could issue, unlike the clear no-fault injunctive remedies available in other countries.

But while no-fault injunctions for copyright infringement have not materialized in the United States, this remedy has been used widely around the world. In fact, more than 40 countries—including Denmark, Finland, France, India, and England and Wales—have enacted or are under some obligation to enact rules allowing for no-fault injunctions that direct ISPs to disable access to websites that predominantly promote copyright infringement.

In short, precisely the approach to controlling piracy that SOPA/PIPA envisioned has been in force around the world over the last decade. This demonstrates that, if properly tailored, no-fault injunctions are an ideal tool for courts to use in the fight to combat piracy.

If anything, we should be using the anniversary of SOPA/PIPA to reflect on a missed opportunity. Congress should amend Section 512 to grant U.S. courts authority to issue no-fault injunctions that require OSPs to block access to sites that willfully engage in mass infringement.

We can expect a decision very soon from the High Court of Ireland on last summer's Irish Data Protection Commission ("IDPC") decision that placed serious impediments on the transfer of data across the Atlantic. That decision, coupled with the July 2020 Court of Justice of the European Union ("CJEU") decision to invalidate the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.

In 2015, the CJEU's Schrems decision invalidated the longstanding "safe harbor" agreement between the EU and the United States that had been intended to ensure that data transfers between the two jurisdictions complied with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In its decision, the court reasoned that U.S. foreign intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens' data.

The IDPC's decision employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens' data, then standard contractual clauses—an alternative mechanism firms can use to transfer data—are incapable of satisfying the requirements of EU law.

The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms practically incapable of doing business in the EU. In the slightly less bad case, firms could be forced to completely localize their data and disrupt the economies of scale that flow from being able to process global data in a unified manner. In any case, the costs of compliance will be massive.

But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.

It seems unlikely that the Irish High Court would simply overturn the IDPC's ruling. Instead, the IDPC's decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.

After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days, but practically speaking, it would more likely occur over weeks or months. Assuming we get a decision from the Irish High Court before the end of April, it puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.

That’s not great, but it is not an impossible hurdle to overcome and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.

  1. U.S. data collection on EU citizens needs to be proportional to the necessities of intelligence gathering. Currently, the U.S. intelligence agencies have wide latitude to collect a large amount of data.
  2. The ombudsperson created by the Privacy Shield agreement to administer foreign citizens' data requests was not sufficiently insulated from the political process, leaving EU citizens without adequate redress.

As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.

Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.

Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority on policymakers’ radars.

(Note: edited to clarify that the Irish High Court is not reviewing SCCs directly and that the CLOUD Act would not impose legal barriers for firms, but practical ones.)

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

One of the themes that has run throughout this symposium has been that, throughout his tenure as both a commissioner and as chairman, Ajit Pai has brought consistency and careful analysis to the Federal Communications Commission (McDowell, Wright). The reflections offered by the various authors in this symposium make one thing clear: the next administration would do well to learn from the considered, bipartisan, and transparent approach to policy that characterized Chairman Pai’s tenure at the FCC.

The following are some of the more specific lessons that can be learned from Chairman Pai. In an important sense, he laid the groundwork for his successful chairmanship when he was still a minority commissioner. His thoughtful dissents were rooted in consistent, clear policy arguments—a practice that both charted how he would look at future issues as chairman and would help the public to understand exactly how he would approach new challenges before the FCC (McDowell, Wright).

One of the most public instances of Chairman Pai’s consistency (and, as it turns out, his bravery) was with respect to net neutrality. From his dissent in the Title II Order, through his commission’s Restoring Internet Freedom Order, Chairman Pai focused on the actual welfare of consumers and the factors that drive network growth and adoption. As Brent Skorup noted, “Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition.” The result of giving in to the Title II advocates would have been to draw the FCC into a quagmire of mass-media regulation that would ultimately harm free expression and broadband deployment in the United States.

Chairman Pai’s vision worked out (Skorup, May, Manne, Hazlett). Despite prognostications of the “death of the internet” because of the Restoring Internet Freedom Order, available evidence suggests that industry investment grew over Chairman Pai’s term. More Americans are connected to broadband than ever before.

Relatedly, Chairman Pai was a strong supporter of liberalizing media-ownership rules that long had been rooted in 20th century notions of competition (Manne). Such rules systematically make it harder for smaller media outlets to compete with large news aggregators and social-media platforms. As Geoffrey Manne notes: 

Consistent with his unwavering commitment to promote media competition… Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers.

This was a bold move for Chairman Pai—in essence, he permitted more local concentration by, e.g., allowing the purchase of a newspaper by a local television station that previously would have been forbidden. By allowing such combinations, the FCC enabled failing local news outlets to shore up their losses and continue to compete against larger, better-resourced organizations. The rule changes are at issue in a case pending before the Supreme Court; should the court find for the FCC, the competitive outlook for local media will look much better, thanks to Chairman Pai's vision.

Chairman Pai’s record on spectrum is likewise impressive (Cooper, Hazlett). The FCC’s auctions under Chairman Pai raised more money and freed more spectrum for higher value uses than any previous commission (Feld, Hazlett). But there is also a lesson in how subsequent administrations can continue what Chairman Pai started. Unlicensed use, for instance, is not free or costless in its maintenance, and Tom Hazlett believes that there is more work to be done in further liberalizing access to the related spectrum—liberalizing in the sense of allowing property rights and market processes to guide spectrum to its highest use:

The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models.

And to a large extent this is the model that Chairman Pai set down, from the issuance of the 12 GHz NPRM to consider whether those spectrum bands could be opened up for wireless use, to the L-Band Order, where the commission worked hard to reallocate spectrum rights in ways that would facilitate more productive uses.

The controversial L-Band Order was another example of where Chairman Pai displayed both political acumen as well as an apolitical focus on improving spectrum policy (Cooper). Political opposition was sharp and focused after the commission finalized its order in April 2020. Nonetheless, Chairman Pai was deftly able to shepherd the L-Band Order and guarantee that important spectrum was made available for commercial wireless use.

As a native of Kansas, Chairman Pai placed rural broadband rollout high on the list of priorities at the FCC, and his work over the last four years demonstrates this pride of place (Hurwitz, Wright). As Gus Hurwitz notes, "the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity."

Further, other work, like the recently completed Rural Digital Opportunity Fund auction and the 5G fund provide the necessary policy framework with which to extend greater connectivity to rural America. As Josh Wright notes, “Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind.” This focus on closing the digital divide yielded gains in connectivity in places outside of traditional rural American settings, such as tribal lands, the U.S. Virgin Islands, and Puerto Rico (Wright).

But perhaps one of Chairman Pai's best and (hopefully) most lasting contributions will be de-politicizing the FCC and increasing the transparency with which it operated. In contrast to previous administrations, the Pai FCC had an overwhelmingly bipartisan nature, with many bipartisan votes regularly taken at monthly meetings (Jamison). In important respects, this bipartisan (or nonpartisan) character was directly reflected in Chairman Pai's championing of the Office of Economics and Analytics (OEA) at the commission. As many of the commentators have noted (Jamison, Hazlett, Wright, Ellig), the OEA was a step forward in nonpolitical, careful cost-benefit analysis at the commission. As Wright notes, Chairman Pai was careful not just to hire a bunch of economists, but rather to learn from other agencies that have better integrated economics, and to establish a structure that would enable the commission's economists to materially contribute to better policy.

We were honored to receive a post from Jerry Ellig just a day before he tragically passed away. As chief economist at the FCC from 2017-2018, he was in a unique position to evaluate past practice and participate in the creation of the OEA. According to Ellig, past practice tended to treat the work of the commission’s economists as a post-hoc gloss on the work of the agency’s attorneys. Once conclusions were reached, economics would often be backfilled in to support those conclusions. With the establishment of the OEA, economics took a front-seat role, with staff of that office becoming a primary source for information and policy analysis before conclusions were reached. As Wright noted, the Federal Trade Commission had adopted this approach. With the FCC moving to do this as well, communications policy in the United States is on much sounder footing thanks to Chairman Pai.

Not only did Chairman Pai push the commission in the direction of nonpolitical, sound economic analysis but, as many commentators note, he significantly improved the process at the commission (Cooper, Jamison, Lyons). Chief among his contributions was making it a practice to publish proposed orders weeks in advance, breaking with past traditions of secrecy around draft orders, and thereby giving the public an opportunity to see what the commission intended to do.

Critics of Chairman Pai’s approach to transparency feared that allowing more public view into the process would chill negotiations between the commissioners behind the scenes. But as Daniel Lyons notes, the chairman’s approach was a smashing success:

The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan compared to 33% and 69.9%, respectively, under Chairman Wheeler.

Other reforms from Chairman Pai helped open the FCC to greater scrutiny and a more transparent process, including limiting staff's editorial privileges over an order's text and introducing the use of a simple "fact sheet" to explain orders (Lyons).

One of the most interesting insights into the character of Chairman Pai was his willingness to reverse course and take risks to ensure that the FCC promoted innovation, rather than obstructing it by relying on received wisdom (Nachbar). For instance, although he was initially skeptical of the prospects of SpaceX to introduce broadband through its low-Earth-orbit satellite systems, under Chairman Pai the Starlink beta program was included in the RDOF auction. It is not clear whether this was a good bet, Thomas Nachbar notes, but it was a statement both of the chairman's willingness to change his mind and of his refusal to let policy remain in a comfortable zone that excludes potential innovation.

The next chair has an awfully big pair of shoes (or one oversized coffee mug) to fill. Chairman Pai established an important legacy of transparency and process improvement, as well as commitment to careful, economic analysis in the business of the agency. We will all be well-served if future commissions follow in his footsteps.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

We’re delighted to welcome Jonathan M. Barnett as our newest blogger at Truth on the Market.

Jonathan Barnett is director of the USC Gould School of Law Media, Entertainment and Technology Law Program. Barnett specializes in intellectual property, contracts, antitrust, and corporate law. He has published in the Harvard Law Review, Yale Law Journal, Journal of Legal Studies, Review of Law & Economics, Journal of Corporation Law and other scholarly journals.

He joined USC Law in fall 2006 and was a visiting professor at New York University School of Law in fall 2010. Prior to academia, Barnett practiced corporate law as a senior associate at Cleary Gottlieb Steen & Hamilton in New York, specializing in private equity and mergers and acquisitions transactions. He was also a visiting assistant professor at Fordham University School of Law in New York. A magna cum laude graduate of the University of Pennsylvania, Barnett received an MPhil from Cambridge University and a JD from Yale Law School.

You can find his scholarship at SSRN.

As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act's proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.  

Supporters of the Act believe that best practices are necessary to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible. 

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.

In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform. 

Insofar as the commission created by the Act works to create best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.

Short of a more ideal solution, it is critical, however, that the Act establish the boundaries of the commission’s remit very clearly and keep it from venturing into technical areas outside of its expertise. 

The complicated problem of encryption (and technology)

The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this liberality could ultimately result in dangerous unintended consequences.

The Act only calls for two out of nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise. 

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM—including possible conditional access to Section 230's liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn't mean that the new commission is suited to vetting, adopting, and updating technical standards—it clearly isn't. Conversely, to the extent that encryption and similarly complex technologies could be subject to broad policy change, it should be through an explicit and considered democratic process, and not as a by-product of the Act.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]

The public policy community’s infatuation with digital privacy has grown by leaps and bounds since the enactment of GDPR and the CCPA, but COVID-19 may leave the most enduring mark on the actual direction that privacy policy takes. As the pandemic and associated lockdowns first began, there were interesting discussions cropping up about the inevitable conflict between strong privacy fundamentalism and the pragmatic steps necessary to adequately trace the spread of infection. 

Axiomatic of this controversy is the Apple/Google contact tracing system, software developed for smartphones to assist with the identification of individuals and populations that have likely been in contact with the virus. The debate sparked by the Apple/Google proposal highlights what we miss when we treat "privacy" (however defined) as an end in itself, an end that must necessarily trump other concerns.

The Apple/Google contact tracing efforts

Apple/Google are doing yeoman's work attempting to produce a useful contact tracing API given the headwinds of privacy advocacy they face. Apple's webpage describing its new contact tracing system is a testament to the extent to which strong privacy protections are central to its efforts. Indeed, those privacy protections are in the very name of the service: the "Privacy-Preserving Contact Tracing" program. But, vitally, the utility of the Apple/Google API is ultimately a function of its efficacy as a tracing tool, not how well it protects privacy.

Apple/Google — despite the complaints of some states — are rolling out their COVID-19-tracking services with notable limitations. Most prominently, the APIs will not allow collection of location data, and will only function when users explicitly opt in. This last point is important because there is evidence that opt-in requirements, by their nature, tend to reduce the flow of information in a system, and when we are considering tracing solutions to an ongoing pandemic, surely less information is not optimal. Further, all of the data collected through the API will be anonymized, preventing even healthcare authorities from identifying particular infected individuals.
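For readers curious how a system can notify users of possible exposure while remaining anonymized and location-free, the following is a deliberately simplified sketch of the general, decentralized approach. The key sizes, rotation scheme, and function names are my own illustrative assumptions and do not describe the actual Apple/Google API.

```python
import os
import hashlib
from dataclasses import dataclass, field

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived, random-looking identifier from a device's daily key."""
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

@dataclass
class Device:
    daily_key: bytes = field(default_factory=lambda: os.urandom(32))
    observed: set = field(default_factory=set)  # identifiers heard over Bluetooth

    def broadcast(self, interval: int) -> bytes:
        # The device only ever broadcasts rotating identifiers,
        # never a name, account, or location.
        return rolling_identifier(self.daily_key, interval)

    def check_exposure(self, published_keys: list, intervals: range) -> bool:
        # Matching happens locally: the device re-derives identifiers from keys
        # voluntarily published by diagnosed users and compares them to what it heard.
        for key in published_keys:
            for i in intervals:
                if rolling_identifier(key, i) in self.observed:
                    return True
        return False

# Usage: two devices pass near each other; one user later tests positive and
# opts in to publishing their daily key. The other learns of the exposure
# without anyone learning who the contact was or where it occurred.
alice, bob = Device(), Device()
bob.observed.add(alice.broadcast(interval=42))
published = [alice.daily_key]
print(bob.check_exposure(published, range(0, 96)))  # True
```

The design choice worth noting is that anonymization here is structural: because only rotating derived identifiers ever leave the device, even a full log of broadcasts reveals neither identity nor location, which is precisely the property that limits the tool's usefulness to health authorities seeking to identify infected individuals.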

These restrictions prevent the tool from being as effective as it could be, but it's not clear how Apple/Google could do any better given the political climate. For years, the Big Tech firms have been villainized by privacy advocates who accuse them of spying on kids and cavalierly disregarding consumer privacy as they treat individuals' data as just another business input. The problem with this approach is that, in the midst of a generational crisis, our best tools are being excluded from the fight. Which raises the question: perhaps we have privacy all wrong?

Privacy is one value among many

The U.S. constitutional order explicitly protects our privacy as against state intrusion in order to guarantee, among other things, fair process and equal access to justice. But this strong presumption against state intrusion—far from establishing a fundamental or absolute right to privacy—only accounts for part of the privacy story. 

The Constitution's limit is a recognition of the fact that we humans are highly social creatures and that privacy is one value among many. Properly conceived, privacy protections are themselves valuable only insofar as they protect other things we value. Jane Bambauer explored some of this in an earlier post, where she characterized privacy as, at best, an "instrumental right" — that is, a tool used to promote other desirable social goals such as "fairness, safety, and autonomy."

Following from Jane’s insight, privacy — as an instrumental good — is something that can have both positive and negative externalities, and needs to be enlarged or attenuated as its ability to serve instrumental ends changes in different contexts. 

According to Jane:

There is a moral imperative to ignore even express lack of consent when withholding important information that puts others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their client’s and ward’s privacy …  this same logic applies at scale to the collection and analysis of data during a pandemic.

Indeed, dealing with externalities is one of the most common and powerful justifications for regulation, and an extreme form of "privacy libertarianism"—in the context of a pandemic—is likely to be, on net, harmful to society.

Which brings us back to the efforts of Apple/Google. Even if those firms wanted to risk the ire of privacy absolutists, it's not clear that they could do so without incurring tremendous regulatory risk, uncertainty, and a popular backlash. As statutory matters, the CCPA and the GDPR chill experimentation in the face of potentially crippling fines, while the FTC Act's Section 5 prohibition on "unfair or deceptive" practices is open to interpretations that could result in existentially damaging outcomes. Further, some polling suggests that the public appetite for contact tracing is not particularly high—though, as is often the case, such pro-privacy poll results rarely give appropriate weight to the tradeoffs involved.

As a general matter, it’s important to think about the value of individual privacy, and how best to optimally protect it. But privacy does not stand above all other values in all contexts. It is entirely reasonable to conclude that, in a time of emergency, if private firms can devise more effective solutions for mitigating the crisis, they should have more latitude to experiment. Knee-jerk preferences for an amorphous “right of privacy” should not be used to block those experiments.

Much as with the Cosmic Turtle, it's tradeoffs all the way down. Most of the U.S. is in lockdown, and while we vigorously protect our privacy, we risk frustrating the creation of tools that could put a light at the end of the tunnel. We are, in effect, trading liberty and economic self-determination for privacy.

Once the worst of the Covid-19 crisis has passed — hastened possibly by the use of contact tracing programs — we can debate the proper use of private data in exigent circumstances. For the immediate future, we should instead be encouraging firms like Apple/Google to experiment with better ways to control the pandemic. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]


The ongoing pandemic has been an opportunity to explore different aspects of the human condition. For myself, I have learned that, despite a deep commitment to philosophical (neo- or classical-) liberalism, at heart I am pragmatic. I would prefer a society that optimizes for more individual liberty, but I am emphatically not someone who would even entertain the idea of using crises to advance my agenda when doing so is not clearly in service of ameliorating immediate problems.

Sadly, I have also learned that there are those who are not similarly pragmatic, and who are willing to advance their ideological agenda come hell or high water. In this regard, I was disappointed yesterday to see the Gurry IP/COVID Letter being passed around Twitter, calling for widespread, worldwide interference with the property rights of IPR holders.

The letter calls for a scattershot set of “remedies” to the crisis that would open access to copyright- and patent-protected inventions and content, including (among other things): 

  • voluntary licensing and non-enforcement of IP;
  • abrogation of IPR by WIPO members using the "flexibility" in the international IP regime;
  • the removal of geographical restrictions on IP licenses;
  • forcing patents into COVID-19 patent pools; and 
  • the implementation of compulsory licensing. 

And, unlike many prior efforts to push the envelope on weakening IP protections, the Gurry Letter also calls for measures that would weaken trade secrets and expose confidential business information in order to “achieve universal and equitable access to COVID-19 medicines and medical technologies as soon as reasonably possible.”

Notably, nothing in the letter suggests that any of these measures should be regarded as temporary.

We all want treatments for infection, vaccines for prevention, and ample supply of personal protective equipment as soon as possible, but if all the demands in this letter were met, it would do little to increase the supply of any of these things in the short term, while undermining incentives to develop new treatments, vaccines and better preventative tools in the long run. 

Fundamentally, the letter reflects a willingness to use the COVID-19 pandemic to pursue an agenda that lacks merit and would be dismissed in the normal course of affairs.

What is most certainly the case is that we need more innovation now, and we need it faster. There is no reason to believe that mandating open source status or forcing compulsory licensing on the firms doing that work will encourage that work to proceed with all due haste—and every indication that the opposite is the case. 

Where there are short term shortages of certain products that might be produced in much larger quantities by relaxing IP, companies are responding by doing just that—voluntarily. But this is fundamentally different from the imposition of unlimited compulsory licenses.

Further, private actors have displayed an impressive willingness to provide free or low cost access to technologies and content—without government coercion. The following is a short list of some of the content and inventions that have been opened up:

Culture, Fitness & Entertainment

  • "HBO Will Stream 500 Hours of Free Programming, Including Full Seasons of 'Veep,' 'The Sopranos,' 'Silicon Valley'"
  • Dozens (or more) of artists, both famous and lesser known, are releasing free back-catalog performances or are taking part in free live-streaming sessions on social media platforms. Notably, viewers are often welcome to donate or "pay what they want" to help support these artists (more on this below).
  • The NBA, NFL, and NHL are offering free access to their back catalogue of games.
  • A large array of music production software can now be used free on extended trials for 3 months (or completely free and unlimited in some cases). 
  • CBS All Access expanded its free trial period.
  • Neil Gaiman and HarperCollins granted permission to LeVar Burton to livestream readings from their catalogs.
  • Disney is releasing movies early onto its (paid) Disney+ services.
  • Gold’s Gym is providing free access to its app-based workouts.
  • The Met is streaming free recordings of its Live in HD series.
  • The Seattle Symphony is offering free access to some of its recorded performances.
  • The UK's National Theatre is streaming some of its most popular plays for free.
  • Andrew Lloyd Webber is streaming his shows online for free.

Science, News & Education

  • Scholastic released free content intended to help educate students stuck at home while sheltering in place.
  • Nearly 100 academic journals, societies, institutes, and companies signed a commitment to make research and data on COVID-19 freely available, at least for the duration of the outbreak.
  • The Atlantic lifted paywall restrictions on access to its COVID-19-related content.
  • The New England Journal of Medicine is allowing free access to COVID-19-related resources.
  • The Lancet allows free access to research it publishes on COVID-19.
  • All material published by The BMJ on the coronavirus outbreak is freely available.
  • The AAAS-published Science allows free access to its coronavirus research and commentary.
  • Elsevier gave full access to its content on its COVID-19 Information Center for PubMed Central and other public health databases.
  • The American Economic Association announced open access to all of its journals until the end of June.
  • JSTOR expanded free access to some of its scholarship.

Medicine & Technology

  • The Global Center for Medical Design is developing license-free PPE designs that can be quickly implemented by manufacturers.
  • Medtronic published “design specifications for the Puritan Bennett 560 (PB560) to allow innovators, inventors, start-ups, and academic institutions to leverage their own expertise and resources to evaluate options for rapid ventilator manufacturing.” It additionally provided software licenses for this technology.
  • AbbVie announced it won’t enforce its patent rights for Kaletra—a drug that may provide treatment for COVID-19 infections. Israel had earlier indicated it would impose compulsory licenses for the drug, but AbbVie is allowing use worldwide. The company, moreover, had donated supplies of the drug to China earlier in the year when the outbreak first became apparent.
  • Google is working with health researchers to provide anonymized and aggregated user location data. 
  • Cisco has extended free licenses and expanded usage counts at no extra charge for three of its security technologies to help strained IT teams and partners ready themselves and their clients for remote work.
  • Microsoft is offering free subscriptions to its Teams product for six months.
  • Zoom expanded its free access and other limitations for educational institutions around the world.

Incentivize innovation, now more than ever

In addition to undermining the short-term incentives to draw more research resources into the fight against COVID-19, using this crisis to weaken the IP regime will cause long-term damage to the economies of the world. We still will need creators making new cultural products and researchers developing new medicines and technologies; weakening the IP regime will undermine the delicate set of incentives that cultural and scientific production depends upon. 

Any clear-eyed assessment of the broader course of the pandemic and the response to it gives the lie to the notion that IP rights are oppressive or counterproductive. It is the pharmaceutical industry—hated as it may be in some quarters—that will be able to marshal the resources and expertise to develop treatments and vaccines. And it is artists and educators producing cultural content who (theoretically) depend on the licensing revenues of their creations for survival.

In fact, one of the things that the pandemic has exposed is the fragility of artists’ livelihoods and the callousness with which they are often treated. Shortly after the lockdowns began in the US, the well-established rock musician David Crosby said in an interview that, if he could not tour this year, he would face tremendous financial hardship. 

As unfortunate as that may be for Crosby, a world-famous musician, imagine how much harder it is for struggling musicians who can hardly hope to achieve a fraction of Crosby’s success for their own tours, let alone for licensing. If David Crosby cannot manage well for a few months on the revenue from his popular catalog, what hope do small artists have?

Indeed, the flood of unable-to-tour artists currently offering "donate what you can" streaming performances is a symptom of the destructive assault on IPR exemplified in the letter. For decades, these artists have been told that they can only legitimately make money through touring. Although the potential to actually make a living while touring is possibly out of reach for many or most artists, those who had been scraping by have now been brought to the brink of ruin as the ability to tour is taken away.

There are certainly ways the various IP regimes can be improved (like, for instance, figuring out how to help creators make a living from their creations), but now is not the time to implement wishlist changes to an otherwise broadly successful rights regime. 

And, critically, there is a massive difference between achieving wider distribution of intellectual property voluntarily as opposed to through government fiat. When done voluntarily, the IP owner determines the contours and extent of “open sourcing” so she can tailor increased access to her own needs (including the need to eat and pay rent). In some cases, this may mean providing unlimited, completely free access, but in other cases—where the particular inventor or creator has a different set of needs and priorities—it may be something less than completely open access. When a rightsholder opts to “open source” her property voluntarily, she still retains the right to govern future use (i.e., once the pandemic is over) and is able to plan for reductions in revenue and how to manage future return on investment.

Should the need arise, our lawmakers can consider whether a particular piece of property is required for the public good. Otherwise, as responsible individuals, we should restrain ourselves from trying to capitalize on the current crisis to ram through our policy preferences.

The following is the first in a new blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available at https://truthonthemarket.com/symposia/the-law-economics-of-the-covid-19-pandemic/.


Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation (common among participants in the symposium) that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Nuechterlein on the lack of guidance in the draft vertical merger guidelines

Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Related to this, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Nuechterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.

Nuechterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he finds that linking the two separate 20 percent thresholds (relevant market and related product) yields too small a set of situations in which firms might qualify for the safe harbor. In his view, linking the two thresholds does more to preserve the agencies’ discretion than to provide clarity to firms and consumers.

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity, and that in some respects the 1984 guidelines were better aligned with the 2008 EU guidelines than the new draft is. Losing that specificity in the new draft guidelines is a step backward. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but believes that the 20 percent thresholds don’t reveal enough information. She believes that it would be helpful “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”

Slade also takes issue with the way the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases in which it should not be.

For Slade, the guidelines also fail to recognize a wide range of efficiencies that can arise from vertical integration. For instance, “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. Such simulations are much more complex than horizontal ones, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process, and in particular how they regard the critical differences between horizontal and vertical mergers.

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan’s post focuses on what he refers to as “pure” vertical mergers, which do not raise concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of the speculative theories of harm that can arise from such mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers,

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak’s post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm, such as the threat of regulatory evasion, and then points out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines’ attempt to clarify how the Agencies think about mergers and competition instead demonstrates just how complex markets, related products, and dynamic competition actually are.

Fruits goes on to describe how the assumptions needed to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even “obvious” markets and products a fraught exercise that devolves into a battle of experts.

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is “arbitrarily low” given the generally procompetitive nature of vertical combinations.

Instead, the DOJ and FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent threshold for challenges, along with a post-merger HHI measure below 2000.

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical merger guidelines miss a valuable opportunity to clarify speculative theories of harm based on “potential competition.”

In particular, the draft guidelines should address the literature demonstrating that vertical acquisition of small tech firms by large tech firms is largely complementary and procompetitive. Large tech firms are good at process innovation and smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition.

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. Particularly, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold “safe harbor” will likely create a problematic “sufficient condition” for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines: they lack an adequate market definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method for defining markets.

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need either to repudiate this theory or to explain more fully the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency- and non-efficiency-related goals are involved in the choice of integration method. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while the Agencies can bring enforcement actions against either mergers or contracts, defendants can frequently realize efficiencies only through merger. Therefore, calling for a contract/merger equivalency amounts to a preference for more enforcement per se, and is less solicitous of concerns about the loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis to justify weighting enforcement so heavily against vertical mergers.

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines’ focus on market share thresholds leads to an overly narrow view of competition. Instead of looking at static market analyses, the Agencies should include a richer set of observations, including those that involve “organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.”

Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.