
Announcement

Truth on the Market is pleased to announce its next blog symposium:

Is Amazon’s Appetite Bottomless?

The Whole Foods Merger After One Year

August 28, 2018

One year ago tomorrow, the Amazon/Whole Foods merger closed, following its approval by the FTC. The merger was something of a flashpoint in the growing populist antitrust movement, raising some interesting questions — and a host of objections from a number of scholars, advocates, journalists, antitrust experts, and others who voiced a range of possible problematic outcomes.

Under settled antitrust law — evolved over the last century-plus — the merger between Amazon and Whole Foods was largely uncontroversial. But the size and scope of Amazon’s operation and ambition has given some pause. And despite the apparent inapplicability of antitrust law to the array of populist concerns about large tech companies, advocates nonetheless contend that antitrust should be altered to deal with new threats posed by companies like Amazon.  

For something of a primer on the antitrust debate surrounding Amazon, listen to ICLE’s Geoffrey Manne and Open Markets’ Lina Khan on Season 2 Episode 1 of Briefly, a podcast produced by the University of Chicago Law Review.  

Beginning tomorrow, August 28, Truth on the Market and the International Center for Law & Economics will host a blog symposium discussing the impact of the merger.

One year on, we asked antitrust scholars and other experts to consider:

  • What has been the significance of the Amazon/Whole Foods merger?
  • How has the merger affected various markets and the participants within them (e.g., grocery stores, food delivery services, online retailers, workers, grocery suppliers, etc.)?
  • What, if anything, does the merger and its aftermath tell us about current antitrust doctrine and our understanding of platform markets?
  • Has a year of experience borne out any of the objections to the merger?
  • Have the market changes since the merger undermined or reinforced the populist antitrust arguments regarding this or other conduct?

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues.

Participants

The symposium posts will be collected here. We hope you’ll join us!

Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically serve to benefit startups by providing them with the data that large firms possess. The necessary implication here is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where, “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with start ups without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their business to the processes of another firm, which generates an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to make, but it is a risk. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).  

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms will have behaved anticompetitively, but merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning that has occurred in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by the gut feeling of regulators that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration in itself, prevented the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.


Senator Mark Warner has proposed 20 policy prescriptions for bringing “big tech” to heel. The proposals — which run the gamut from policing foreign advertising on social networks to regulating feared competitive harms — provide much interesting material for Congress to consider.

On the positive side, Senator Warner introduces the idea that online platforms may be able to function as least-cost avoiders with respect to certain tortious behavior of their users. He advocates for platforms to implement technology that would help control the spread of content that courts have found violated certain rights of third parties.

Yet, on other accounts — specifically the imposition of an “interoperability” mandate on platforms — his proposals risk doing more harm than good.

The interoperability mandate was included by Senator Warner in order to “blunt [tech platforms’] ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.” According to Senator Warner, such a measure would enable startups to offset the advantages that arise from network effects on large tech platforms by building their services more easily on the backs of successful incumbents.

Whatever you think of the moats created by network effects, the example of “successful” previous regulation on this issue that Senator Warner relies upon is perplexing:

A prominent template for [imposing interoperability requirements] was in the AOL/Time Warner merger, where the FCC identified instant messaging as the ‘killer app’ – the app so popular and dominant that it would drive consumers to continue to pay for AOL service despite the existence of more innovative and efficient email and internet connectivity services. To address this, the FCC required AOL to make its instant messaging service (AIM, which also included a social graph) interoperable with at least one rival immediately and with two other rivals within 6 months.

But the AOL/Time Warner merger and the FCC’s conditions provide an example that demonstrates the exact opposite of what Senator Warner suggests. The much-feared 2001 megamerger prompted, as the Senator notes, fears that the new company would be able to leverage its dominance in the nascent instant messaging market to extend its influence into adjacent product markets.

Except, by 2003, despite it being unclear that AOL had developed interoperable systems, two large competitors had arisen that did not run interoperable IM networks (Yahoo! and Microsoft). In that same period, AOL’s previously 100% IM market share had declined by about half. By 2009, after eight years of heavy losses, Time Warner shed AOL, and by last year AIM was completely dead.

Not only was it unclear whether AOL was able to make AIM interoperable; AIM was never able to catch up once better, rival services launched. What the conditions did do, however, was prevent AOL from launching competitive video chat services as it flailed about in the wake of the deal, thus forcing it to miss out on a market opportunity available to unencumbered competitors like Microsoft and Yahoo!

And all of this, of course, ignores the practical impossibility of interfering in highly integrated technology platforms.

The AOL/Time Warner merger conditions are no template for successful tech regulation. Congress would be ill-advised to rely upon such templates for crafting policy around tech and innovation.

The EC’s Android decision is expected sometime in the next couple of weeks. Current speculation is that the EC may issue a fine exceeding last year’s huge €2.4 billion fine for Google’s alleged antitrust violations related to the display of general search results. Based on the statement of objections (“SO”), I expect the Android decision will be a muddle of legal theory that not only fails to connect to facts and marketplace realities, but also will perversely incentivize platform operators to move toward less open ecosystems.

As has been amply demonstrated (see, e.g., here and here), the Commission has made fundamental errors with its market definition analysis in this case. Chief among its failures is the EC’s incredible decision to treat the relevant market as licensable mobile operating systems, which notably excludes the largest smartphone player by revenue, Apple.

This move, though perhaps expedient for the EC, leads the Commission to view with disapproval an otherwise competitively justifiable set of licensing requirements that Google imposes on its partners. This includes anti-fragmentation and app-bundling provisions (“Provisions”) in the agreements that partners sign in order to be able to distribute Google Mobile Services (“GMS”) with their devices. Among other things, the Provisions guarantee that a basic set of Google’s apps and services will be non-exclusively featured on partners’ devices.

The Provisions — when viewed in a market in which Apple is a competitor — are clearly procompetitive. The critical mass of GMS-flavored versions of Android (as opposed to vanilla Android Open Source Project (“AOSP”) devices) supplies enough predictability to an otherwise unruly universe of disparate Android devices such that software developers will devote the sometimes considerable resources necessary for launching successful apps on Android.

Open source software like AOSP is great, but anyone with more than a passing familiarity with Linux recognizes that the open source movement often fails to produce consumer-friendly software. In order to provide a critical mass of users that attract developers to Android, Google provides a significant service to the Android market as a whole by using the Provisions to facilitate a predictable user (and developer) experience.

Generativity on platforms is a complex phenomenon

To some extent, the EC’s complaint is rooted in a preference that Android act as a more “generative” platform, such that third-party developers are relatively better able to reach users of Android devices. But this effort by the EC to undermine the Provisions will be ultimately self-defeating, as it will likely push mobile platform providers to converge on similar, relatively more closed business models that provide less overall consumer choice.

Even assuming that the Provisions somehow prevent third-party app installs or otherwise develop a kind of path-dependency among users such that they never seek out new apps (which the data clearly shows is not happening), focusing on third-party developers as the sole or primary source of innovation on Android is a mistake.

The control that platform operators like Apple and Google exert over their respective ecosystems does not per se create more or less generativity on the platforms. As Gus Hurwitz has noted, “literature and experience amply demonstrate that ‘open’ platforms, or general-purpose technologies generally, can promote growth and increase social welfare, but they also demonstrate that open platforms can also limit growth and decrease welfare.” Conversely, tighter vertical integration (the Apple model) can also produce more innovation than open platforms.

What is important is the balance between control and freedom, and the degree to which third-party developers are able to innovate within the context of a platform’s constraints. The constraints themselves — either Apple’s more tightly controlled terms or Google’s more generous Provisions — facilitate generativity.

In short, it is overly simplistic to view generativity as something that happens at the edges without respect to structural constraints at the core. The interplay between platform and developer is complex and complementary, and needs to be viewed as a dynamic process.

Whither platform diversity?

I love Apple’s devices and I am quite happy living within its walled garden. But I certainly do not believe that Apple’s approach is the only one that makes sense. Yet, in its SO, the EC blesses Apple’s approach as the proper way to manage a mobile ecosystem. It explicitly excluded Apple from a competitive analysis, and attacked Google on the basis that it imposed restrictions in the context of licensing its software. Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

With this SO, the EC is basically asserting that Google is anticompetitively bundling without being able to plausibly assert foreclosure (because, again, third-party app installs are easy to do and are easily shown to number in the billions). I’m sure Google doesn’t want to move in the direction of having a more closed system, but the lesson of this case will loom large for tomorrow’s innovators.

In the face of eager antitrust enforcers like those in the EU, the easiest path for future innovators will be to keep everything tightly controlled so as to prevent both fragmentation and misguided regulatory intervention.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—is being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be abused or misapplied. It is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative solution to piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over web sites that are “blatantly, overwhelmingly, or structurally engaged in piracy.”  The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block sites in question. The CRTC would retain ultimate authority regarding whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users who upload infringing content. In just one relatively large case, the cost of complying with these requests was estimated at CAD 8.25 million.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

The FairPlay proposal, on the other hand, avoids this shortcoming, and that is one of its greatest strengths. The focus of enforcement shifts away from casually infringing users and directly onto the operators of sites that engage in widespread infringement. Therefore, one of the criticisms of Canada’s current “Notice and Notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is completely bypassed.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP and property generally represent a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

So too is the justification — and tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harms may occur, because some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect, and (2) when harms do occur, is there a sufficiently open and accessible process available whereby affected parties (and interested third parties) can rightly criticize and improve the system?

On both accounts the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will establish a reasonable and effective remedy to address violations of individuals’ property, the ownership of which is considered broadly legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program that some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, according to even the critics of the FairPlay proposal, site blocking provides a measurable reduction in piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of the Pirate Bay domains had reduced piracy by nearly 25%:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.

The proposal stands to reduce the costs of combating piracy, as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted the costs for even run-of-the-mill suits essentially price enforcement of copyrights out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that can be deployed against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this — but that surely can’t be an indictment of lawmaking as a whole. If a form of prohibition is appropriate for category A but inappropriate for category B, then either we assume lawmakers are capable of differentiating between the two categories, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to convincingly make that case (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

Some observers view a host of claimed negative social ills allegedly related to the large size of firms like Amazon as an occasion to call for the company’s breakup. And, unfortunately, these critics find an unlikely ally in President Trump, whose tweet storms claim that tech platforms are too big and extract unfair rents at the expense of small businesses. But these critics are wrong: Amazon is not a dangerous monopoly, and it certainly should not be broken up.

Of course, no one really spells out what it means for these companies to be “too big.” Even Barry Lynn, a champion of the neo-Brandeisian antitrust movement, has shied away from specifics. The best that emerges when probing his writings is that he favors something like a return to Joe Bain’s “Structure-Conduct-Performance” paradigm (but even here, the details are fuzzy).

The reality of Amazon’s impact on the market is quite different from that asserted by its critics. Amazon has had decades in which to carry out a nefarious scheme of suddenly raising prices and reaping the benefits of anticompetitive behavior. Yet it keeps putting downward pressure on prices in a way that seems to be commoditizing goods instead of building anticompetitive moats.

Amazon Does Not Anticompetitively Exercise Market Power

Twitter rants aside, more serious attempts to attack Amazon on antitrust grounds argue that its pricing is “predatory.” But a predatory pricing claim requires a showing of specific factors — which, to date, has not been made — in order to justify legal action. Absent such a showing, it has long been understood that seemingly “predatory” conduct is unlikely to harm consumers and often actually benefits them.

One important requirement that has gone unsatisfied is that a firm engaging in predatory pricing must have market power. Contrary to common characterizations of Amazon as a retail monopolist, its market power is less than it seems. By no means does it control retail in general. Rather, less than half of all online commerce (44%) takes place on its platform (and that number represents only 4% of total US retail commerce). Of that 44 percent, a significant portion is attributable to the merchants who use Amazon as a platform for their own online retail sales. Rather than abusing a monopoly market position to predatorily harm its retail competitors, at worst Amazon has created a retail business model that puts pressure on other firms to offer more convenience and lower prices to their customers. This is what we want and expect of competitive markets.
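As a rough sanity check on the figures above (a back-of-the-envelope sketch of my own, not a calculation from the original post), the two shares quoted jointly imply that online commerce remains a small slice of total US retail:

```python
# Back-of-the-envelope arithmetic using the two figures quoted in the text.
# The inputs come from the post; the derivation is an illustrative sketch only.
amazon_share_of_online = 0.44  # Amazon's share of US online commerce
amazon_share_of_retail = 0.04  # Amazon's share of total US retail commerce

# If Amazon is 44% of online sales but only 4% of all retail sales,
# online sales must be roughly 4/44 of total retail.
online_share_of_retail = amazon_share_of_retail / amazon_share_of_online
print(f"Implied online share of total US retail: {online_share_of_retail:.1%}")
# → Implied online share of total US retail: 9.1%
```

On these numbers, e-commerce as a whole accounts for something on the order of nine percent of US retail — which underscores why a large share of online sales translates into only a modest share of retail overall.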

The claims leveled at Amazon are the intellectual kin of those made against Walmart during its ascendancy: that it was destroying Main Street throughout the nation. In 1993, it was feared that Walmart’s quest to vertically integrate its offerings through Sam’s Club warehouse operations meant that “[r]etailers could simply bypass their distributors in favor of Sam’s — and Sam’s could take revenues from local merchants on two levels: as a supplier at the wholesale level, and as a competitor at retail.” This is a strikingly similar accusation to those leveled against Amazon’s use of its Seller Marketplace to aggregate smaller retailers on its platform.

But, just as in 1993 with Walmart, and now with Amazon, the basic fact remains that consumer preferences shift. Firms need to alter their behavior to satisfy their customers, not pretend they can change consumer preferences to suit their own needs. Preferring small, local retailers to Amazon or Walmart is a decision for individual consumers interacting in their communities, not for federal officials figuring out how best to pattern the economy.

All of this is not to say that Amazon is not large, or important, or that, as a consequence of its success, it does not exert influence over the markets in which it operates. But having influence through success is not the same as anticompetitively asserting market power.

Other criticisms of Amazon focus on its conduct in specific vertical markets in which it does have more significant market share. For instance, a UK Liberal Democrat leader recently claimed that “[j]ust as Standard Oil once cornered 85% of the refined oil market, today… Amazon accounts for 75% of ebook sales … .”

The problem with this concern is that Amazon’s conduct in the ebook market has had, on net, pro-competitive, not anti-competitive, effects. Amazon’s behavior in the ebook market has actually increased overall demand for books (and expanded output), increased the amount that consumers read, and decreased the price of these books. Amazon is now even opening physical bookstores. Lina Khan, in her widely cited article last year, made much hay of the claim that this was all part of a grand strategy to predatorily push competitors out of the market:

The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws.

But it’s hard to allege predation in a market when, over the past twenty years, Amazon has consistently expanded output and lowered overall prices in the book market. Courts and lawmakers have sought to craft laws that encourage firms to provide consumers with more choices at lower prices — a feat that Amazon repeatedly accomplishes. To describe this conduct as anticompetitive is to demand a legal requirement that is at odds with the goal of benefiting consumers. It is to claim that Amazon has a contradictory duty to benefit both consumers and its shareholders, while also making sure that all of its less successful competitors stay in business.

But far from creating a monopoly, the empirical reality appears to be that Amazon is driving categories of goods, like books, closer to the textbook model of commodities in a perfectly competitive market. Hardly an antitrust violation.

Amazon Should Not Be Broken Up

“Big is bad” may roll off the tongue, but, as a guiding ethic, it makes for terrible public policy. Amazon’s size and success are a direct result of its ability to enter relevant markets and to innovate. To break up Amazon, or any other large firm, is to punish it for serving the needs of its consumers.

None of this is to say that large firms are incapable of causing harm or acting anticompetitively. But we should expect calls for dramatic regulatory intervention — especially from those in a position to influence regulatory or market reactions to such calls — to be supported by substantial factual evidence and sound legal and economic theory.

This tendency to go after large players is nothing new. As noted above, Walmart triggered many similar concerns a quarter century ago. Thinking about Walmart then, pundits feared that direct competition with it was fruitless:

In the spring of 1992 Ken Stone came to Maine to address merchant groups from towns in the path of the Wal-Mart advance. His advice was simple and direct: don’t compete directly with Wal-Mart; specialize and carry harder-to-get and better-quality products; emphasize customer service; extend your hours; advertise more — not just your products but your business — and perhaps most pertinent of all to this group of Yankee individualists, work together.

And today, some think it would be similarly pointless to compete with Amazon:

Concentration means it is much harder for someone to start a new business that might, for example, try to take advantage of the cheap housing in Minneapolis. Why bother when you know that if you challenge Amazon, they will simply dump your product below cost and drive you out of business?

The interesting thing to note, of course, is that Walmart is now desperately trying to compete with Amazon. But despite being very successful in its own right, and having strong revenues, Walmart doesn’t seem able to keep up.

Some small businesses will close as new business models emerge and consumer preferences shift. This is to be expected in a market driven by creative destruction. Once upon a time Walmart changed retail and improved the lives of many Americans. If our lawmakers can resist the urge to intervene without real evidence of harm, Amazon just might do the same.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set are taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is just trying to move along its docket items by the numbers in order to avoid the relentless criticism that it’s just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss…As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from regulating ISPs as common carriers. The executive action in my own home state of New Jersey illustrates the point.

The governor signed an executive order in February that attempts to end-run the FCC’s rules by exercising New Jersey’s power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government that purchases broadband connectivity do so only from “ISPs that adhere to ‘net neutrality’ principles.” It’s probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that affect state agencies of New Jersey directly. But it’s probably impermissible for those contractual requirements to be used as a lever to force ISPs to treat third parties (i.e., New Jersey’s citizens) under net neutrality principles.

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements…Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more legal wiggle room because the state is acting as a “market participant.” So long as New Jersey’s actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it is not permitted to legislate explicitly, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Virtually everyone agrees that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it’s not leaving gaps in regulation, that does not mean the FCC is buying time for ISPs. In the end, simple politics explains the states’ actions, and the normal (if often unsatisfying) back and forth of the administrative state explains the FCC’s decisions.

The Internet is a modern miracle: from providing all varieties of entertainment, to facilitating life-saving technologies, to keeping us connected with distant loved ones, the scope of the Internet’s contribution to our daily lives is hard to overstate. Moving forward there is undoubtedly much more that we can and will do with the Internet, and part of that innovation will, naturally, require a reconsideration of existing laws and how new Internet-enabled modalities fit into them.

But when undertaking such a reconsideration, the goal should not be simply to promote Internet-enabled goods above all else; rather, it should be to examine the law’s effect on the promotion of new technology within the context of other, competing social goods. In short, there are always trade-offs entailed in changing the legal order. As such, efforts to reform, clarify, or otherwise change the law that affects Internet platforms must be balanced against other desirable social goods, not automatically prioritized above them.

Unfortunately — and frequently with the best of intentions — efforts to promote one good thing (for instance, more online services) inadequately account for the balance of the larger legal realities at stake. And one of the most important legal realities too readily thrown aside in the rush to protect the Internet is the principle that policy be established through public, (relatively) democratically accountable channels.

Trade deals and domestic policy

A coalition of civil society groups and law professors recently sent a letter asking the NAFTA delegation to incorporate U.S.-style intermediary liability immunity into the trade deal. The request is notable for its timing in light of the ongoing policy struggle over SESTA — a bill currently working its way through Congress that seeks to curb human trafficking through online platforms — and the risk that domestic platform companies face of losing (at least in part) the immunity provided by Section 230 of the Communications Decency Act. But this NAFTA push is not merely about a tradeoff between less trafficking and more online services; it is about the difference between promoting policies in a way that protects the rule of law and doing so in a way that undermines it.

Indeed, the NAFTA effort appears to be aimed at least as much at sidestepping the ongoing congressional fight over platform regulation as it is aimed at exporting U.S. law to our trading partners. Thus, according to EFF, for example, “[NAFTA renegotiation] comes at a time when Section 230 stands under threat in the United States, currently from the SESTA and FOSTA proposals… baking Section 230 into NAFTA may be the best opportunity we have to protect it domestically.”

It may well be that incorporating Section 230 into NAFTA is the “best opportunity” to protect the law as it currently stands from efforts to reform it to address conflicting priorities. But that doesn’t mean it’s a good idea. In fact, whatever one thinks of the merits of SESTA, it is not obviously a good idea to use a trade agreement as a vehicle to override domestic reforms to Section 230 that Congress might implement. Trade agreements can override domestic law, but that is not the reason we engage in trade negotiations.

In fact, other parts of NAFTA remain controversial precisely for their ability to undermine domestic legal norms, in this case in favor of guaranteeing the expectations of foreign investors. EFF itself is deeply skeptical of this “investor-state” dispute process (“ISDS”), noting that “[t]he latest provisions would enable multinational corporations to undermine public interest rules.” The irony here is that ISDS provides a mechanism for overriding domestic policy that is a close analogy for what EFF advocates for in the Section 230/SESTA context.

ISDS allows foreign investors to sue NAFTA signatories in a tribunal when domestic laws of that signatory have harmed investment expectations. The end result is that the signatory could be responsible for paying large sums to litigants, which in turn would serve as a deterrent for the signatory to continue to administer its laws in a similar fashion.

Stated differently, NAFTA currently contains a mechanism that favors one party (foreign investors) in a way that prevents signatory nations from enacting and enforcing laws approved of by democratically elected representatives. EFF and others disapprove of this.

Yet, at the same time, EFF also promotes the idea that NAFTA should contain a provision that favors one party (Internet platforms) in a way that would prevent signatory nations from enacting and enforcing laws like SESTA that (might be) approved of by democratically elected representatives.

A more principled stance would be skeptical of the domestic law override in both contexts.

Restating Copyright or creating copyright policy?

Take another example: some have suggested that the American Law Institute (“ALI”) is being used to subvert Congressional will. Since 2013, ALI has taken upon itself a project to “restate” the law of copyright. ALI is well known and respected for its common law restatements, but something more than mere restatement may be going on here. As the NY Bar Association recently observed:

The Restatement as currently drafted appears inconsistent with the ALI’s long-standing goal of promoting clarity in the law: indeed, rather than simply clarifying or restating that law, the draft offers commentary and interpretations beyond the current state of the law that appear intended to shape current and future copyright policy.  

It is certainly odd that ALI (or any other group) would seek to restate a body of law that is already stated in the form of an overarching federal statute. The point of a restatement is to gather together the decisions of disparate common law courts interpreting different laws and precedent in order to synthesize a single, coherent framework approximating an overall consensus. If done correctly, a restatement of a federal statute would, theoretically, end up with the exact statute itself along with some commentary about how judicial decisions have filled in the blanks differently — a state of affairs that already exists with the copious academic literature commenting on federal copyright law.

But it seems that merely restating judicial interpretations was not the only objective behind the copyright restatement effort. In a letter to ALI, one of the scholars responsible for the restatement project noted that:

While congressional efforts to improve the Copyright Act… may be a welcome and beneficial development, it will almost certainly be a long and contentious process… Register Pallante… [has] not[ed] generally that “Congress has moved slowly in the copyright space.”

Reform of copyright law, in other words, and not merely restatement of it, was an important impetus for the project. As an attorney for the Copyright Office observed, “[a]lthough presented as a “Restatement” of copyright law, the project would appear to be more accurately characterized as a rewriting of the law.” But “rewriting” is a job for the legislature. And even if Congress moves slowly, or the process is frustrating, the democratic processes that produce the law should still be respected.

Pyrrhic Policy Victories

Attempts to change copyright or entrench liability immunity through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.

It’s no surprise that some may be frustrated and concerned about intermediary liability and copyright issues: on the margin, it’s definitely harder to operate an Internet platform if it faces sweeping liability for the actions of third parties (whether for human trafficking or for infringing copyrights). Maybe copyright law needs to be reformed, and perhaps intermediary liability must be maintained exactly as it is (or expanded). But the right way to arrive at these policy outcomes is not through backdoors — and it is not to begin with the assertion that such outcomes are required.

Congress and the courts can be frustrating vehicles through which to enact public policy, but they have the virtue of being relatively open to public deliberation, and of having procedural constraints that can circumscribe excesses and idiosyncratic follies. We might get bad policy from Congress. We might get bad cases from the courts. But the theory of our system is that, on net, having a frustratingly long, circumscribed, and public process will tend to weed out most of the bad ideas and impulses that would otherwise result from unconstrained decision making, even if well-intentioned.

We should meet efforts like these to end-run Congress and the courts with significant skepticism. Short term policy “victories” are likely not worth the long-run consequences. These are important, complicated issues. If we surreptitiously adopt idiosyncratic solutions to them, we risk undermining the rule of law itself.

At a Heritage antitrust event this week, a panelist offered an interesting tongue-in-cheek observation about the rising populist antitrust movement: to the extent that the movement is broadly concerned about labor effects and wage depression, then, in principle, it should also be friendly to cartels. Although counterintuitive, employees have long supported and benefited from cartels, because cartels generally afford both greater job security and higher wages than competitive firms. And, of course, labor itself has long sought the protection of cartels — in the form of unions — to secure the same benefits.

For instance, in the days before widespread foreign competition in domestic auto markets, unionized workers at the Big Three producers enjoyed relatively higher wages for relatively less output. Competition from abroad changed the economic landscape for both producers and workers, with the end result being a reduction in union power and relatively lower overall wages for workers. The union model — a labor cartel — could guarantee higher wages to those workers only so long as their employers faced limited competition.

The same story can be seen in other industries as well, from telecommunications to service workers to public-sector employees. Generally, market power on the labor-demand side (employers) tends to facilitate market power on the labor-supply side: firms with market power — and thus supracompetitive profits — can afford to pay more for labor, and often are willing to do so in order to secure political support (and also to make it more expensive for potential competitors to hire skilled employees). Labor is a substantial cost for firms in competitive markets, however, so firms without market power are always looking to economize on labor (that is, to keep wages low, employ only as many workers as needed, and substitute capital for labor wherever it is efficient to do so).

Therefore, if broad labor effects should be a prime concern of antitrust, perhaps enforcers should use antitrust laws to encourage cartel formation when it might increase wages, regardless of the effects on productivity, prices, and other efficiencies that may arise (or perhaps, as a possible trump card to hold against traditional efficiencies justifications).

No one will make a serious case for promoting cartels (although former FTC Chairman Pertschuk sounded similar notes in the late 1970s), but the comment makes a deeper point about ongoing efforts to undermine the consumer welfare standard. Fundamental contradictions exist in antitrust rhetoric that is unmoored from economic analysis. Professor Hovenkamp highlighted this in a recent paper as well:

The coherence problem [in antitrust populism] shows up in goals that are unmeasurable and fundamentally inconsistent, although with their contradictions rarely exposed. Among the most problematic contradictions is the one between small business protection and consumer welfare. In a nutshell, consumers benefit from low prices, high output and high quality and variety of products and services. But when a firm or a technology is able to offer these things they invariably injure rivals, typically smaller or dedicated to older technologies, who are unable to match them. Although movement antitrust rhetoric is often opaque about specifics, its general effect is invariably to encourage higher prices or reduced output or innovation, mainly for the protection of small business. Indeed, that has been a predominant feature of movement antitrust ever since the Sherman Act was passed, and it is a prominent feature of movement antitrust today. Indeed, some spokespersons for movement antitrust write as if low prices are the evil that antitrust law should be combatting.

To be fair, even with careful economic analysis, it is not always perfectly clear how to resolve the tensions between antitrust and other policy preferences. For instance, Jonathan Adler described the collision between antitrust and environmental protection in cases where collusion might lead to better environmental outcomes. But even in cases like that, he noted, the issue was essentially a free-rider problem; as with intrabrand price agreements — where consumer goodwill was a “commons” that had to be suitably maintained against possible free-riding retailers — what might be an antitrust violation in one context was not necessarily a violation in another.

Moreover, when the purpose of apparently “collusive” conduct is to actually ensure long term, sustainable production of a good or service (like fish), the behavior may not actually be anticompetitive. Thus, antitrust remains a plausible means of evaluating economic activity strictly on its own terms (and any alteration to the doctrine itself might actually be to prefer rule of reason analysis over per se analysis when examining these sorts of mitigating circumstances).

And before contorting antitrust into a policy cure-all, it is important to remember that the consumer welfare standard evolved out of sometimes good (price fixing bans) and sometimes questionable (prohibitions on output contracts) doctrines that were subject to legal trial and error. This was an evolution that was triggered by “increasing economic sophistication” and as “the enforcement agencies and courts [began] reaching for new ways in which to weigh competing and conflicting claims.”

The vector of that evolution was toward the use of antitrust as a reliable, testable, and clear set of legal principles that are ultimately subject to economic analysis. When the populists ask us, for instance, to return to a time when judges could “prevent the conversion of concentrated economic power into concentrated political power” via antitrust law, they are asking for much more than just a new gloss on existing doctrine. They are asking us to unlearn all of the lessons of the twentieth century that ultimately led to the maturation of antitrust law.

It’s perfectly reasonable to care about political corruption, worker welfare, and income inequality. It’s not reasonable to shoehorn those political concerns into a body of legal doctrine that evolved a set of tools wholly inappropriate for achieving such ends.

Canada’s large merchants have called on the government to impose price controls on interchange fees, claiming this would benefit not only merchants but also consumers. But experience elsewhere contradicts this claim.

In a recently released Macdonald-Laurier Institute report, Julian Morris, Geoffrey A. Manne, Ian Lee, and Todd J. Zywicki detail how price controls on credit card interchange fees would result in reduced reward earnings and higher annual fees on credit cards, with adverse effects on consumers, many merchants, and the economy as a whole.

This study draws on the experience with fee caps imposed in other jurisdictions, highlighting in particular the effects in Australia, where interchange fees were capped in 2003. There, the caps resulted in a significant decrease in the rewards earned per dollar spent and an increase in annual card fees. If similar restrictions were imposed in Canada, resulting in a 40 percent reduction in interchange fees, the authors of the report anticipate that:

  1. On average, each adult Canadian would be worse off to the tune of between $89 and $250 per year due to a loss of rewards and an increase in annual card fees:
    1. For an individual or household earning $40,000, the net loss would be $66 to $187; and
    2. For an individual or household earning $90,000, the net loss would be $199 to $562.
  2. Spending at merchants in aggregate would decline by between $1.6 billion and $4.7 billion, resulting in a net loss to merchants of between $1.6 billion and $2.8 billion.
  3. GDP would fall by between 0.12 percent and 0.19 percent per year.
  4. Federal government revenue would fall by between 0.14 percent and 0.40 percent.

Moreover, tighter fee caps would “have a more dramatic negative effect on middle class households and the economy as a whole.”

You can read the full report here.