
There are some who view a host of social ills allegedly related to the large size of firms like Amazon as an occasion to call for the company’s breakup. And, unfortunately, these critics find an unlikely ally in President Trump, whose tweet storms claim that tech platforms are too big and extract unfair rents at the expense of small businesses. But these critics are wrong: Amazon is not a dangerous monopoly, and it certainly should not be broken up.

Of course, no one really spells out what it means for these companies to be “too big.” Even Barry Lynn, a champion of the neo-Brandeisian antitrust movement, has shied away from specifics. The best that emerges when probing his writings is that he favors something like a return to Joe Bain’s “Structure-Conduct-Performance” paradigm (but even here, the details are fuzzy).

The reality of Amazon’s impact on the market is quite different from that asserted by its critics. Amazon has had decades to fulfill a nefarious scheme to suddenly raise prices and reap the benefits of anticompetitive behavior. Yet it keeps putting downward pressure on prices in a way that seems to be commoditizing goods instead of building anticompetitive moats.

Amazon Does Not Anticompetitively Exercise Market Power

Twitter rants aside, more serious attempts to attack Amazon on antitrust grounds argue that it is engaging in pricing that is “predatory.” But “predatory pricing” requires a specific demonstration of factors — which, to date, have not been demonstrated — in order to justify legal action. Absent a showing of these factors, it has long been understood that seemingly “predatory” conduct is unlikely to harm consumers and often actually benefits consumers.

One important requirement that has gone unsatisfied is that a firm engaging in predatory pricing must have market power. Contrary to common characterizations of Amazon as a retail monopolist, its market power is less than it seems. By no means does it control retail in general. Rather, less than half of all online commerce (44%) takes place on its platform (and that number represents only 4% of total US retail commerce). Of that 44 percent, a significant portion is attributable to the merchants who use Amazon as a platform for their own online retail sales. Rather than abusing a monopoly market position to predatorily harm its retail competitors, at worst Amazon has created a retail business model that puts pressure on other firms to offer more convenience and lower prices to their customers. This is what we want and expect of competitive markets.
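As a quick sanity check on those figures, the two percentages cited above imply that online commerce itself remains a small slice of US retail overall. A minimal back-of-envelope calculation (using only the shares quoted in this paragraph) makes the point:

```python
# Back-of-envelope check on the market-share figures cited above.
amazon_share_of_online = 0.44  # Amazon's share of US online commerce
amazon_share_of_retail = 0.04  # Amazon's share of total US retail commerce

# Implied share of total US retail that takes place online:
online_share_of_retail = amazon_share_of_retail / amazon_share_of_online
print(f"{online_share_of_retail:.1%}")  # → 9.1%
```

In other words, even complete dominance of online commerce would today leave a firm with well under a tenth of total US retail, which underscores why the “retail monopolist” characterization is misleading.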

The claims leveled at Amazon are the intellectual kin of the ones made against Walmart during its ascendancy that it was destroying main street throughout the nation. In 1993, it was feared that Walmart’s quest to vertically integrate its offerings through Sam’s Club warehouse operations meant that “[r]etailers could simply bypass their distributors in favor of Sam’s — and Sam’s could take revenues from local merchants on two levels: as a supplier at the wholesale level, and as a competitor at retail.” This is a strikingly similar accusation to those leveled against Amazon’s use of its Seller Marketplace to aggregate smaller retailers on its platform.

But, just as in 1993 with Walmart, and now with Amazon, the basic fact remains that consumer preferences shift. Firms need to alter their behavior to satisfy their customers, not pretend they can change consumer preferences to suit their own needs. Preferring small, local retailers to Amazon or Walmart is a decision for individual consumers interacting in their communities, not for federal officials figuring out how best to pattern the economy.

All of this is not to say that Amazon is not large or important, or that, as a consequence of its success, it does not exert influence over the markets in which it operates. But having influence through success is not the same as anticompetitively asserting market power.

Other criticisms of Amazon focus on its conduct in specific vertical markets in which it does have more significant market share. For instance, a UK Liberal Democrat leader recently claimed that “[j]ust as Standard Oil once cornered 85% of the refined oil market, today… Amazon accounts for 75% of ebook sales … .”

The problem with this concern is that Amazon’s conduct in the ebook market has had, on net, pro-competitive, not anti-competitive, effects. Amazon’s behavior in the ebook market has actually increased overall demand for books (and expanded output), increased the amount that consumers read, and decreased the price of these books. Amazon is now even opening physical bookstores. Lina Khan made much hay of this in her widely cited article last year, arguing that it was all part of a grand strategy to predatorily push competitors out of the market:

The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws.

But it’s hard to allege predation in a market when, over the past twenty years, Amazon has consistently expanded output and lowered overall prices in the book market. Courts and lawmakers have sought to craft laws that encourage firms to provide consumers with more choices at lower prices, a feat that Amazon repeatedly accomplishes. To describe this conduct as anticompetitive is to demand a legal standard at odds with the goal of benefiting consumers. It is to claim that Amazon has a contradictory duty both to benefit consumers and its shareholders and to make sure that all of its less successful competitors stay in business.

But far from creating a monopoly, the empirical reality appears to be that Amazon is driving categories of goods, like books, closer to the textbook model of commodities in a perfectly competitive market. Hardly an antitrust violation.

Amazon Should Not Be Broken Up

“Big is bad” may roll off the tongue, but, as a guiding ethic, it makes for terrible public policy. Amazon’s size and success are a direct result of its ability to enter relevant markets and to innovate. To break up Amazon, or any other large firm, is to punish it for serving the needs of its consumers.

None of this is to say that large firms are incapable of causing harm or of acting anticompetitively. But we should expect calls for dramatic regulatory intervention, especially from those in a position to influence regulatory or market reactions to such calls, to be supported by substantial factual evidence and sound legal and economic theory.

This tendency to go after large players is nothing new. As noted above, Walmart triggered many similar concerns twenty-five years ago. Pundits then feared that competing directly with Walmart was fruitless:

In the spring of 1992 Ken Stone came to Maine to address merchant groups from towns in the path of the Wal-Mart advance. His advice was simple and direct: don’t compete directly with Wal-Mart; specialize and carry harder-to-get and better-quality products; emphasize customer service; extend your hours; advertise more — not just your products but your business — and perhaps most pertinent of all to this group of Yankee individualists, work together.

And today, some think it would be similarly pointless to compete with Amazon:

Concentration means it is much harder for someone to start a new business that might, for example, try to take advantage of the cheap housing in Minneapolis. Why bother when you know that if you challenge Amazon, they will simply dump your product below cost and drive you out of business?

The interesting thing to note, of course, is that Walmart is now desperately trying to compete with Amazon. But despite being very successful in its own right, and having strong revenues, Walmart doesn’t seem able to keep up.

Some small businesses will close as new business models emerge and consumer preferences shift. This is to be expected in a market driven by creative destruction. Once upon a time Walmart changed retail and improved the lives of many Americans. If our lawmakers can resist the urge to intervene without real evidence of harm, Amazon just might do the same.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set are taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is just trying to move along its docket items by the numbers in order to avoid the relentless criticism that it’s just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss… As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from regulating ISPs as common carriers. The executive action in my own home state of New Jersey is illustrative of this point.

The governor signed an executive order in February that attempts to end-run the FCC’s rules by exercising New Jersey’s power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government that purchases broadband connectivity do so only from “ISPs that adhere to ‘net neutrality’ principles.” It’s probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that directly affect New Jersey’s state agencies. But it’s probably impermissible for those contractual requirements to be used as a lever to force ISPs to apply net neutrality principles to third parties (i.e., New Jersey’s citizens).

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements… Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more wiggle room in the legal interpretation because the state is acting as a “market participant.” So long as New Jersey’s actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it was not permitted to legislate explicitly, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Everyone thinks that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it’s not leaving gaps in regulation, it does not mean that the FCC is buying time for ISPs. In the end simple politics explains state actions, and the normal (if often unsatisfying) back and forth of the administrative state explains the FCC’s decisions.

The Internet is a modern miracle: from providing all varieties of entertainment, to facilitating life-saving technologies, to keeping us connected with distant loved ones, the scope of the Internet’s contribution to our daily lives is hard to overstate. Moving forward there is undoubtedly much more that we can and will do with the Internet, and part of that innovation will, naturally, require a reconsideration of existing laws and how new Internet-enabled modalities fit into them.

But when undertaking such a reconsideration, the goal should not be simply to promote Internet-enabled goods above all else; rather, it should be to examine the law’s effect on the promotion of new technology within the context of other, competing social goods. In short, there are always trade-offs entailed in changing the legal order. As such, efforts to reform, clarify, or otherwise change the law that affects Internet platforms must be balanced against other desirable social goods, not automatically prioritized above them.

Unfortunately, and frequently with the best of intentions, efforts to promote one good thing (for instance, more online services) inadequately account for the balance of the larger legal realities at stake. And one of the most important legal realities, too often thrown aside in the rush to protect the Internet, is that policy should be established through public, (relatively) democratically accountable channels.

Trade deals and domestic policy

Recently a letter was sent by a coalition of civil society groups and law professors asking the NAFTA delegation to incorporate U.S.-style intermediary liability immunity into the trade deal. Such a request is notable for its timing in light of the ongoing policy struggles over SESTA (a bill currently working its way through Congress that seeks to curb human trafficking through online platforms) and the risk that domestic platform companies face of losing, at least in part, the immunity provided by Section 230 of the Communications Decency Act. But this NAFTA push is not merely about a tradeoff between less trafficking and more online services; it is about the difference between promoting policies in a way that protects the rule of law and doing so in a way that undermines it.

Indeed, the NAFTA effort appears to be aimed at least as much at sidestepping the ongoing congressional fight over platform regulation as it is aimed at exporting U.S. law to our trading partners. Thus, according to EFF, for example, “[NAFTA renegotiation] comes at a time when Section 230 stands under threat in the United States, currently from the SESTA and FOSTA proposals… baking Section 230 into NAFTA may be the best opportunity we have to protect it domestically.”

It may well be that incorporating Section 230 into NAFTA is the “best opportunity” to protect the law as it currently stands from efforts to reform it to address conflicting priorities. But that doesn’t mean it’s a good idea. In fact, whatever one thinks of the merits of SESTA, it is not obviously a good idea to use a trade agreement as a vehicle to override domestic reforms to Section 230 that Congress might implement. Trade agreements can override domestic law, but that is not the reason we engage in trade negotiations.

In fact, other parts of NAFTA remain controversial precisely for their ability to undermine domestic legal norms, in this case in favor of guaranteeing the expectations of foreign investors. EFF itself is deeply skeptical of this “investor-state” dispute process (“ISDS”), noting that “[t]he latest provisions would enable multinational corporations to undermine public interest rules.” The irony here is that ISDS provides a mechanism for overriding domestic policy that is a close analogy for what EFF advocates for in the Section 230/SESTA context.

ISDS allows foreign investors to sue NAFTA signatories in a tribunal when domestic laws of that signatory have harmed investment expectations. The end result is that the signatory could be responsible for paying large sums to litigants, which in turn would serve as a deterrent for the signatory to continue to administer its laws in a similar fashion.

Stated differently, NAFTA currently contains a mechanism that favors one party (foreign investors) in a way that prevents signatory nations from enacting and enforcing laws approved of by democratically elected representatives. EFF and others disapprove of this.

Yet, at the same time, EFF also promotes the idea that NAFTA should contain a provision that favors one party (Internet platforms) in a way that would prevent signatory nations from enacting and enforcing laws like SESTA that (might be) approved of by democratically elected representatives.

A more principled stance would be skeptical of the domestic law override in both contexts.

Restating copyright or creating copyright policy?

Take another example: Some have suggested that the American Law Institute (“ALI”) is being used to subvert congressional will. Since 2013, ALI has taken it upon itself to “restate” the law of copyright. ALI is well known and respected for its common law restatements, but it may be that something more than mere restatement is going on here. As the NY Bar Association recently observed:

The Restatement as currently drafted appears inconsistent with the ALI’s long-standing goal of promoting clarity in the law: indeed, rather than simply clarifying or restating that law, the draft offers commentary and interpretations beyond the current state of the law that appear intended to shape current and future copyright policy.  

It is certainly odd that ALI (or any other group) would seek to restate a body of law that is already stated in the form of an overarching federal statute. The point of a restatement is to gather together the decisions of disparate common law courts interpreting different laws and precedent in order to synthesize a single, coherent framework approximating an overall consensus. If done correctly, a restatement of a federal statute would, theoretically, end up with the exact statute itself along with some commentary about how judicial decisions have filled in the blanks differently — a state of affairs that already exists with the copious academic literature commenting on federal copyright law.

But it seems that merely restating judicial interpretations was not the only objective behind the copyright restatement effort. In a letter to ALI, one of the scholars responsible for the restatement project noted that:

While congressional efforts to improve the Copyright Act… may be a welcome and beneficial development, it will almost certainly be a long and contentious process… Register Pallante… [has] not[ed] generally that “Congress has moved slowly in the copyright space.”

Reform of copyright law, in other words, and not merely restatement of it, was an important impetus for the project. As an attorney for the Copyright Office observed, “[a]lthough presented as a ‘Restatement’ of copyright law, the project would appear to be more accurately characterized as a rewriting of the law.” But “rewriting” is a job for the legislature. And even if Congress moves slowly, or the process is frustrating, the democratic processes that produce the law should still be respected.

Pyrrhic Policy Victories

Attempts to change copyright or entrench liability immunity through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.

It’s no surprise that some are frustrated and concerned about intermediary liability and copyright issues: on the margin, it’s certainly harder to operate an Internet platform that faces sweeping liability for the actions of third parties (whether for human trafficking or copyright infringement). Maybe copyright law needs to be reformed, and perhaps intermediary liability must be maintained exactly as it is (or expanded). But the right way to arrive at these policy outcomes is not through backdoors, and it is not to begin by asserting that such outcomes are required.

Congress and the courts can be frustrating vehicles through which to enact public policy, but they have the virtue of being relatively open to public deliberation, and of having procedural constraints that can circumscribe excesses and idiosyncratic follies. We might get bad policy from Congress. We might get bad cases from the courts. But the theory of our system is that, on net, having a frustratingly long, circumscribed, and public process will tend to weed out most of the bad ideas and impulses that would otherwise result from unconstrained decision making, even if well-intentioned.

We should meet efforts like these to end-run Congress and the courts with significant skepticism. Short term policy “victories” are likely not worth the long-run consequences. These are important, complicated issues. If we surreptitiously adopt idiosyncratic solutions to them, we risk undermining the rule of law itself.

At a Heritage antitrust event this week, a panelist offered an interesting tongue-in-cheek observation about the rising populist antitrust movement. To the extent that the new populist antitrust movement is broadly concerned about effects on labor and wage depression, then, in principle, it should also be friendly to cartels. Although counterintuitive, employees have long supported and benefited from cartels, because cartels generally afford both greater job security and higher wages than competitive firms. And, of course, labor itself has long sought the protection of cartels, in the form of unions, to secure the same benefits.

For instance, in the days before widespread foreign competition in domestic auto markets, unionized workers at the Big Three producers enjoyed relatively higher wages for relatively less output. Competition from abroad changed the economic landscape for both producers and workers, with the end result being a reduction in union power and relatively lower overall wages for workers. The union model, a labor cartel, could guarantee higher wages only so long as the producers themselves faced limited competition.

The same story can be seen in other industries as well, from telecommunications to service workers to public-sector employees. Generally, market power on the labor demand side (employers) tends to facilitate market power on the labor supply side: firms with market power, earning supracompetitive profits, can afford to pay more for labor and often are willing to do so in order to secure political support (and also to make it more expensive for potential competitors to hire skilled employees). Labor is a substantial cost for firms in competitive markets, however, so firms without market power are always looking to economize on labor (that is, to pay low wages, to employ as few workers as needed, and to substitute capital for labor wherever it is efficient to do so).

Therefore, if broad labor effects should be a prime concern of antitrust, perhaps enforcers should use antitrust laws to encourage cartel formation when it might increase wages, regardless of the effects on productivity, prices, and other efficiencies that may arise (or perhaps, as a possible trump card to hold against traditional efficiencies justifications).

No one will seriously make the case for promoting cartels (although former FTC Chairman Pertschuk sounded similar notes in the late 1970s), but the comment makes a deeper point about ongoing efforts to undermine the consumer welfare standard. Fundamental contradictions exist in antitrust rhetoric that is unmoored from economic analysis. Professor Hovenkamp highlighted this in a recent paper as well:

The coherence problem [in antitrust populism] shows up in goals that are unmeasurable and fundamentally inconsistent, although with their contradictions rarely exposed. Among the most problematic contradictions is the one between small business protection and consumer welfare. In a nutshell, consumers benefit from low prices, high output and high quality and variety of products and services. But when a firm or a technology is able to offer these things they invariably injure rivals, typically smaller or dedicated to older technologies, who are unable to match them. Although movement antitrust rhetoric is often opaque about specifics, its general effect is invariably to encourage higher prices or reduced output or innovation, mainly for the protection of small business. Indeed, that has been a predominant feature of movement antitrust ever since the Sherman Act was passed, and it is a prominent feature of movement antitrust today. Indeed, some spokespersons for movement antitrust write as if low prices are the evil that antitrust law should be combatting.

To be fair, even with careful economic analysis, it is not always perfectly clear how to resolve the tensions between antitrust and other policy preferences. For instance, Jonathan Adler described the collision between antitrust and environmental protection in cases where collusion might lead to better environmental outcomes. But even in cases like that, he noted, it was essentially a free-rider problem: as with intrabrand price agreements, where consumer goodwill was a “commons” that had to be suitably maintained against possible free-riding retailers, what might be an antitrust violation in one context was not necessarily a violation in another.

Moreover, when the purpose of apparently “collusive” conduct is actually to ensure the long-term, sustainable production of a good or service (like fish), the behavior may not be anticompetitive at all. Thus, antitrust remains a plausible means of evaluating economic activity strictly on its own terms (and any needed alteration to the doctrine might simply be a preference for rule-of-reason analysis over per se analysis when examining these sorts of mitigating circumstances).

And before contorting antitrust into a policy cure-all, it is important to remember that the consumer welfare standard evolved out of sometimes good (price fixing bans) and sometimes questionable (prohibitions on output contracts) doctrines that were subject to legal trial and error. This was an evolution that was triggered by “increasing economic sophistication” and as “the enforcement agencies and courts [began] reaching for new ways in which to weigh competing and conflicting claims.”

The vector of that evolution was toward the use of antitrust as a reliable, testable, and clear set of legal principles that are ultimately subject to economic analysis. When the populists ask us, for instance, to return to a time when judges could “prevent the conversion of concentrated economic power into concentrated political power” via antitrust law, they are asking for much more than just adding a new gloss to existing doctrine. They are asking us to unlearn all of the lessons of the twentieth century that ultimately led toward the maturation of antitrust law.

It’s perfectly reasonable to care about political corruption, worker welfare, and income inequality. It’s not perfectly reasonable to try to shoehorn goals based on these political concerns into a body of legal doctrine that evolved a set of tools wholly inappropriate for achieving those ends.

Canada’s large merchants have called on the government to impose price controls on interchange fees, claiming this would benefit not only merchants but also consumers. But experience elsewhere contradicts this claim.

In a recently released Macdonald-Laurier Institute report, Julian Morris, Geoffrey A. Manne, Ian Lee, and Todd J. Zywicki detail how price controls on credit card interchange fees would result in reduced reward earnings and higher annual fees on credit cards, with adverse effects on consumers, many merchants, and the economy as a whole.

This study draws on the experience with fee caps imposed in other jurisdictions, highlighting in particular the effects in Australia, where interchange fees were capped in 2003. There, the caps resulted in a significant decrease in the rewards earned per dollar spent and an increase in annual card fees. If similar restrictions were imposed in Canada, resulting in a 40 percent reduction in interchange fees, the authors of the report anticipate that:

  1. On average, each adult Canadian would be worse off to the tune of between $89 and $250 per year due to a loss of rewards and increase in annual card fees:
    1. For an individual or household earning $40,000, the net loss would be $66 to $187; and
    2. For an individual or household earning $90,000, the net loss would be $199 to $562.
  2. Spending at merchants in aggregate would decline by between $1.6 billion and $4.7 billion, resulting in a net loss to merchants of between $1.6 billion and $2.8 billion.
  3. GDP would fall by between 0.12 percent and 0.19 percent per year.
  4. Federal government revenue would fall by between 0.14 percent and 0.40 percent.
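The per-person figures above can be reproduced in rough, back-of-the-envelope form. The sketch below is a hypothetical illustration, not the report's methodology: the annual card-spend figure, the 1.5 percent average interchange rate, and the full pass-through assumption are all assumed inputs; only the 40 percent fee reduction comes from the scenario described above.

```python
# Hypothetical back-of-the-envelope sketch of a cardholder's loss from an
# interchange-fee cap. All inputs are illustrative assumptions, not figures
# taken from the Macdonald-Laurier Institute report.

def annual_cardholder_loss(card_spend, old_fee_rate, fee_cut, pass_through):
    """Estimate a cardholder's yearly loss when capped interchange revenue
    is passed through as reduced rewards and higher annual fees."""
    lost_interchange = card_spend * old_fee_rate * fee_cut
    return lost_interchange * pass_through

# Assumed values: $15,000 annual card spend, 1.5% average interchange fee,
# a 40% fee reduction, and full pass-through to cardholders.
loss = annual_cardholder_loss(15_000, 0.015, 0.40, 1.0)
print(f"Illustrative annual loss per cardholder: ${loss:.0f}")
# prints: Illustrative annual loss per cardholder: $90
```

On these assumed inputs, the sketch lands near the report's lower-bound estimate of $89 per adult; heavier card use or a higher pass-through rate would push the figure toward the upper bound.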

Moreover, tighter fee caps would “have a more dramatic negative effect on middle class households and the economy as a whole.”

You can read the full report here.

This week, the International Center for Law & Economics filed an ex parte notice in the FCC’s Restoring Internet Freedom docket. In it, we reviewed two of the major points made in our formal comments. First, we noted that

the process by which [the Commission] enacted the 2015 [Open Internet Order]… demonstrated scant attention to empirical evidence, and even less attention to a large body of empirical and theoretical work by academics. The 2015 OIO, in short, was not supported by reasoned analysis.

Further, on the issue of preemption, we stressed that

[F]ollowing the adoption of an Order in this proceeding, a number of states may enact their own laws or regulations aimed at regulating broadband service… The resulting threat of a patchwork of conflicting state regulations, many of which would be unlikely to further the public interest, is a serious one…

[T]he Commission should explicitly state that… broadband services may not be subject to certain forms of state regulations, including conduct regulations that prescribe how ISPs can use their networks. This position would also be consistent with the FCC’s treatment of interstate information services in the past.

Our full ex parte comments can be viewed here.

R Street’s Sasha Moss recently posted a piece on TechDirt describing the alleged shortcomings of the Register of Copyrights Selection and Accountability Act of 2017 (RCSAA) — proposed legislative adjustments to the Copyright Office, recently passed in the House and introduced in the Senate last month (with identical language).

Many of the article’s points are well taken. Nevertheless, they don’t support the article’s call for the Senate to “jettison [the bill] entirely,” nor the assertion that “[a]s currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.”

R Street’s main complaint with the legislation is that it doesn’t include other proposals in a House Judiciary Committee whitepaper on Copyright Office modernization. But condemning the RCSAA simply for failing to incorporate all conceivable Copyright Office improvements fails to adequately take account of the political realities confronting Congress — in other words, it lets the perfect be the enemy of the good. It also undermines R Street’s own stated preference for Copyright Office modernization effected through “targeted and immediately implementable solutions.”

Everyone — even R Street — acknowledges that we need to modernize the Copyright office. But none of the arguments in favor of a theoretical, “better” bill is undermined or impeded by passing this bill first. While there is certainly more that Congress can do on this front, the RCSAA is a sensible, targeted piece of legislation that begins to build the new foundation for a twenty-first century Copyright Office.

Process over politics

The proposed bill is simple: It would make the Register of Copyrights a nominated and confirmed position. For reasons almost forgotten over the last century and a half, the head of the Copyright Office is currently selected at the sole discretion of the Librarian of Congress. The Copyright Office was placed in the Library merely as a way to grow the Library’s collection with copies of copyrighted works.

More than 100 years later, most everyone acknowledges that the Copyright Office has lagged behind the times. And many think the problem lies with the Office’s placement within the Library, which is plagued with information technology and other problems, and has a distinctly different mission than the Copyright Office. The only real question is what to do about it.

Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization. And yet, somewhat inexplicably, R Street claims that the bill

amounts largely to a partisan battle over who will have the power to select the next Register: [Current Librarian of Congress] Hayden, who was appointed by Barack Obama, or President Donald Trump.

But this is a pretty farfetched characterization.

First, the House passed the bill 378-48, with 145 Democrats joining 233 Republicans in support. That’s more than three-quarters of the Democratic caucus.

Moreover, legislation to make the Register a nominated and confirmed position has been under discussion for more than four years — long before either Dr. Hayden was nominated or anyone knew that Donald Trump (or any Republican at all, for that matter) would be president.

R Street also claims that the legislation

will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, [and that] the nomination process could delay modernization efforts [because of Trump’s] confirmation backlog.

But precisely the opposite seems far more likely — as Sasha herself has previously recognized:

Clarifying the office’s lines of authority does have the benefit of making it more politically accountable…. The [House] bill takes a positive step forward in promoting accountability.

As far as I’m aware, no one claims that Dr. Hayden was “politicized” or that Librarians are vulnerable to capture because they are nominated and confirmed. And a Senate confirmation process will be more transparent than unilateral appointment by the Librarian, and will give the electorate a (nominal) voice in the Register’s selection. Surely unilateral selection of the Register by the Librarian is more susceptible to undue influence.

With respect to the modernization process, we should also not forget that the Copyright Office currently has an Acting Register in Karyn Temple Claggett, who is perfectly capable of moving the modernization process forward. And any limits on her ability to do so would arise from the very tenuousness of her position that the RCSAA is intended to address.

Modernizing the Copyright Office one piece at a time

It’s certainly true, as the article notes, that the legislation doesn’t include a number of other sensible proposals for Copyright Office modernization. In particular, it points to ideas like forming a stakeholder advisory board, creating new chief economist and technologist positions, upgrading the Office’s information technology systems, and creating a small claims court.

To be sure, these could be beneficial reforms, as ICLE (and many others) have noted. But I would take some advice from R Street’s own “pragmatic approach” to promoting efficient government “with the full realization that progress on the ground tends to be made one inch at a time.”

R Street acknowledges that the legislation’s authors have indicated that this is but a beginning step and that they plan to tackle the other issues in due course. At a time when passage of any legislation on any topic is a challenge, it seems appropriate to defer to those in Congress who affirmatively want more modernization on the question of how big a bill to start with.

In any event, it seems perfectly sensible to address the Register selection process before tackling the other issues, which may require more detailed discussions of policy and cost. And with the Copyright Office currently lacking a permanent Register and discussions underway about finding a new one, addressing any changes Congress deems necessary in the selection process seems like the most pressing issue, if they are to be resolved prior to the next pick being made.

Further, because the Register would presumably be deeply involved in the selection and operation of any new advisory board, chief economist and technologist, IT system, or small claims process, Congress can also be forgiven for wanting to address the Register issue first. Moreover, a Register who can be summarily dismissed by the Librarian likely doesn’t have the needed autonomy to fully and effectively implement the other proposals from the whitepaper. Why build a house on a shaky foundation when you can fix the foundation first?

Process over substance

All of which leaves the question why R Street opposes a bill that was passed by a bipartisan supermajority in the House; that effects precisely the kind of targeted, incremental reform that R Street promotes; and that implements a specific reform that R Street favors.

The legislation has widespread support beyond Congress, although the TechDirt piece gives this support short shrift. Instead, it notes that “some” in the content industry support the legislation, but lists only the Motion Picture Association of America. There is a subtle undercurrent of the typical substantive copyright debate, in which “enlightened” thinking on copyright is set against the presumptively malicious overreach of the movie studios. But the piece neglects to mention the support of more than 70 large and small content creators, technology companies, labor unions, and free market and civil rights groups, among others.

Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates. But it’s difficult to escape. Copyright minimalists are skeptical of an effectual Copyright Office if it is more likely to promote policies that reinforce robust copyright, even if they support sensible process reforms and more-accountable government in the abstract. And, to be fair, copyright proponents are thrilled when their substantive positions might be bolstered by promotion of sensible process reforms.

But the truth is that no one really knows how an independent and accountable Copyright Office will act with respect to contentious, substantive issues. Perhaps most likely, increased accountability via nomination and confirmation will introduce more variance in its positions. In other words, on substance, the best guess is that greater Copyright Office accountability and modernization will be a wash — leaving only process itself as a sensible basis on which to assess reform. And on that basis, there is really no reason to oppose this widely supported, incremental step toward a modern US Copyright Office.

This week, the International Center for Law & Economics filed comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive, we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license,

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before going on to note that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

The FCC’s blind, headlong drive to “unlock” the set-top box market is disconnected from both legal and market realities. Legally speaking, and as we’ve noted on this blog many times over the past few months (see here, here and here), the set-top box proposal is nothing short of an assault on contracts, property rights, and the basic freedom of consumers to shape their own video experience.

Although much of the impulse driving the Chairman to tilt at set-top box windmills involves a distrust that MVPDs could ever do anything procompetitive, Comcast’s recent decision (actually, long in the making) to include an app from Netflix — its alleged arch-rival — on the X1 platform highlights the FCC’s poor grasp of market realities as well. And it hardly seems that Comcast was dragged kicking and screaming to this point, as many of the features it includes have been long under development and include important customer-centered enhancements:

We built this experience on the core foundational elements of the X1 platform, taking advantage of key technical advances like universal search, natural language processing, IP stream processing and a cloud-based infrastructure.  We have expanded X1’s voice control to make watching Netflix content as simple as saying, “Continue watching Daredevil.”

Yet, on the topic of consumer video choice, Chairman Wheeler lives in two separate worlds. On the one hand, he recognizes that:

There’s never been a better time to watch television in America. We have more options than ever, and, with so much competition for eyeballs, studios and artists keep raising the bar for quality content.

But, on the other hand, he asserts that when it comes to set-top boxes, there is no such choice, and consumers have suffered accordingly.

Of course, this ignores the obvious fact that nearly all pay-TV content is already available from a large number of outlets, and that competition between devices and services that deliver this content is plentiful.

In fact, ten years ago — before Apple TV, Roku, Xfinity X1 and Hulu (among too many others to list) — Gigi Sohn, Chairman Wheeler’s chief legal counsel, argued before the House Energy and Commerce Committee that:

We are living in a digital gold age and consumers… are the beneficiaries.  Consumers have numerous choices for buying digital content and for buying devices on which to play that content. (emphasis added)

And, even on the FCC’s own terms, the multichannel video market is presumptively competitive nationwide with

direct broadcast satellite (DBS) providers’ market share of multi-channel video programming distributors (MVPDs) subscribers [rising] to 33.8%. “Telco” MVPDs increased their market share to 13% and their nationwide footprint grew by 5%. Broadband service providers such as Google Fiber also expanded their footprints. Meanwhile, cable operators’ market share fell to 52.8% of MVPD subscribers.

Online video distributor (OVD) services continue to grow in popularity with consumers. Netflix now has 47 million or more subscribers in the U.S., Amazon Prime has close to 60 million, and Hulu has close to 12 million. By contrast, cable MVPD subscriptions dropped to 53.7 million households in 2014.

The extent of competition has expanded dramatically over the years, and Comcast’s inclusion of Netflix in its ecosystem is only the latest indication of this market evolution.

And to further underscore the outdated notion of focusing on “boxes,” AT&T just announced that it would be offering a fully apps-based version of its DirecTV service. And what was one of the main drivers of AT&T being able to go in this direction? The company realized the good economic sense of ditching boxes altogether:

The company will be able to give consumers a break [on price] because of the low cost of delivering the service. AT&T won’t have to send trucks to install cables or set-top boxes; customers just need to download an app. 

And lest you think that Comcast’s move was merely a cynical response meant to undermine the Chairman (although it is quite enjoyable on that score), the truth is that Comcast has no choice but to offer services like this on its platform — and it’s been making moves like this for quite some time (see here and here). Everyone knows, MVPDs included, that apps distributed on a range of video platforms are the future. If Comcast didn’t get on board the apps train, it would have been left behind at the station.

And there is other precedent for expecting just this convergence of video offerings on a platform. For instance, Amazon’s Fire TV gives consumers the Amazon video suite — available through the Prime Video subscription — but it also gives access to apps like Netflix and Hulu. (Of course, Amazon is a so-called edge provider, so when it makes the exact same sort of moves that Comcast is now making, it’s easy for those who insist on old market definitions to miss the parallels.)

The point is, where Amazon and Comcast are going to make their money is in driving overall usage of their platform because, inevitably, no single service is going to have every piece of content a given user wants. Long term viability in the video market is necessarily going to be about offering consumers more choice, not less. And, in this world, the box that happens to be delivering the content is basically irrelevant; it’s the competition between platform providers that matters.

It’s not quite so simple to spur innovation. Just ask the EU as it resorts to levying punitive retroactive taxes on productive American companies in order to ostensibly level the playing field (among other things) for struggling European startups. Thus it’s truly confusing when groups go on a wholesale offensive against patent rights — one of the cornerstones of American law that has contributed a great deal toward our unparalleled success as an innovative economy.

Take EFF, for instance. The advocacy organization has recently been peddling sample state legislation it calls the “Reclaim Invention Act,” which it claims is targeted at reining in so-called “patent trolls.” Leaving aside potential ulterior motives (like making it impossible to get software patents at all), I am left wondering what EFF actually hopes to achieve.

“Troll” is a scary sounding word, but what exactly is wrapped up in EFF’s definition? According to EFF’s proposed legislation, a “patent assertion entity” (the polite term for “patent troll”) is any entity that primarily derives its income through the licensing of patents – as opposed to actually producing the invention for public consumption. But this is just wrong. As Zorina Khan has noted, the basic premise upon which patent law was constructed in the U.S. was never predicated upon whether an invention would actually be produced:

The primary concern was access to the new information, and the ability of other inventors to benefit from the discovery either through licensing, inventing around the idea, or at expiration of the patent grant. The emphasis was certainly not on the production of goods; in fact, anyone who had previously commercialized an invention lost the right of exclusion vested in patents. The decision about how or whether the patent should be exploited remained completely within the discretion of the patentee, in the same way that the owner of physical property is allowed to determine its use or nonuse.

Patents are property. As with other forms of property, patent holders are free to transfer them to whomever they wish, and are free to license them as they see fit. The mere act of exercising property rights simply cannot be the basis for punitive treatment by the state. And, like it or not, licensing inventions or selling the property rights to an invention is very often how inventors are compensated for their work. Whether one likes the Patent Act in particular or not is irrelevant; as long as we have patents, these are fundamental economic and legal facts.

Further, the view implicit in EFF’s legislative proposal completely ignores the fact that the people or companies that may excel at inventing things (the province of scientists, for example) may not be so skilled at commercializing things (the province of entrepreneurs). Moreover, inventions can be enormously expensive to commercialize. In such cases, it could very well be the most economically efficient result to allow some third party with the requisite expertise or the means to build it, to purchase and manage the rights to the patent, and to allow them to arrange for production of the invention through licensing agreements. Intermediaries are nothing new in society, and, despite popular epithets about “middlemen,” they actually provide a necessary function with respect to mobilizing capital and enabling production.

Granted, some companies will exhibit actual “troll” behavior, but the question is not whether some actors are bad, but whether the whole system overall optimizes innovation and otherwise contributes to greater social welfare. Licensing patents in itself is a benign practice, so long as the companies that manage the patents are not abusive. And, of course, among the entities that engage in patent licensing, one would assume that universities would be the most unobjectionable of all parties.

Thus, it’s extremely disappointing that EFF would choose to single out universities as aiders and abettors of “trolls” — and in so doing recommend punitive treatment. And what EFF recommends is shockingly draconian. It doesn’t suggest that there should be heightened review in IPR proceedings, or that there should be fee shifting or other case-by-case sanctions doled out for unwise partnership decisions. No, according to the model legislation, universities would be outright cut off from government financial aid or other state funding, and any technology transfers would be void, unless they:

determine whether a patent is the most effective way to bring a new invention to a broad user base before filing for a patent that covers that invention[;] … prioritize technology transfer that develops its inventions and scales their potential user base[;] … endeavor to nurture startups that will create new jobs, products, and services[;] … endeavor to assign and license patents only to entities that require such licenses for active commercialization efforts or further research and development[;] … foster agreements and relationships that include the sharing of know-how and practical experience to maximize the value of the assignment or license of the corresponding patents; and … prioritize the public interest in all patent transactions.

Never mind the fact that recent cases like Alice Corp., Octane Fitness, and Highmark — as well as the new inter partes review process — seem to be putting effective downward pressure on frivolous suits (as well as, potentially, non-frivolous suits, for that matter); apparently EFF thinks that putting the screws to universities is what’s needed to finally overcome the (disputed) problems of excessive patent litigation.

Perhaps reflecting that even EFF itself knows that its model legislation is more of a publicity stunt than a serious proposal, most of what it recommends is either so ill-defined as to be useless (e.g., “prioritize the public interest in all patent transactions”? What does that even mean?) or completely mixed up.

For instance, the entire point of a university technology transfer office is that educational institutions and university researchers are not themselves in a position to adequately commercialize inventions. Questions of how large a user base a given invention can reach, or how best to scale products, grow markets, or create jobs are best left to entrepreneurs and business people. The very reason a technology transfer office would license or sell its patents to a third party is to discover these efficiencies.

And if a university engages in a transfer that, upon closer scrutiny, runs afoul of this rather fuzzy bit of legislation, the transfer will be deemed void. Which means that universities will either have to expend enormous resources to find willing partners, or will spend millions on lawsuits and contract restitution damages. Enacting these feel-good mandates into state law is at best useless, and most likely a tool for crusading plaintiffs’ attorneys to use to harass universities.

Universities: Don’t you dare commercialize that invention!

As I noted above, it’s really surprising that groups like EFF are going after universities, as their educational mission and general devotion to improving social welfare should make them the darlings of social justice crusaders. However, as public institutions with budgets and tax statuses dependent on political will, universities are both unable to route around organizational challenges (like losing student aid or preferred tax status) and are probably unwilling to engage in wholesale PR defensive warfare for fear of offending a necessary political constituency. Thus, universities are very juicy targets — particularly when they engage in “dirty” commercial activities of any sort, no matter how attenuated.

And lest you think that universities wouldn’t actually be harassed (other than in the abstract by the likes of EFF) over patents, it turns out that it’s happening even now, even without EFF’s proposed law.

For the last five years Princeton University has been locked in a lawsuit with some residents of Princeton, New Jersey who have embarked upon a transparently self-interested play to divert university funds to their own pockets. Their weapon of choice? A challenge to Princeton’s tax-exempt status based on the fact that the school licenses and sells its patented inventions.

The plaintiffs’ core argument in Fields v. Princeton is that the University should be a taxpaying entity because it occasionally generates patent licensing revenues from a small fraction of the research that its faculty conducts in University buildings.

The Princeton case is problematic for a variety of reasons, one of which deserves special attention because it runs squarely up against a laudable federal law that is intended to promote research, development, and patent commercialization.

In the early 1980s Congress passed the Bayh-Dole Act, which made it possible for universities to retain ownership over discoveries made in campus labs. The aim of the law was to encourage essential basic research that had historically been underdeveloped. Previously, the rights to any such federally-funded discoveries automatically became the property of the federal government, which, not surprisingly, put a damper on universities’ incentives to innovate.

When universities collaborate with industry — a major aim of Bayh-Dole — innovation is encouraged, breakthroughs occur, and society as a whole is better off. About a quarter of the top drugs approved since 1981 came from university research, as did many life-changing products we now take for granted, like Google, web browsers, email, cochlear implants and major components of cell phones. Since the passage of the Act, a boom in commercialized patents has yielded billions of dollars of economic activity.

Under the Act innovators are also rewarded: Qualifying institutions like Princeton are required to share royalties with the researchers who make these crucial discoveries. The University has no choice in the matter; to refuse to share the revenues would constitute a violation of the terms of federal research funding. But the Fields suit ignores this reality and, in much the same way as EFF’s proposed legislation, will force a stark choice upon Princeton University: engage with industry, increase social utility and face lawsuits, or keep your head down and your inventions to yourself.

A Hobson’s Choice

Thus, things like the Fields suit and EFF’s proposed legislation are worse than costly distractions for universities; they are major disincentives to the commercialization of university inventions. This may not be the intended consequence of these actions, but it is an entirely predictable one.

Faced with legislation that punishes them for being insufficiently entrepreneurial and suits that attack them for bothering to commercialize at all, universities will have to make a Hobson’s choice: commercialize the small fraction of research that might yield licensing revenues and potentially face massive legal liability, or simply decide to forego commercialization (and much basic research) altogether.

The risk here, obviously, is that research institutions will choose the latter in order to guard against the significant organizational costs that could result from a change in their tax status or a thicket of lawsuits that emerge from voided technology transfers (let alone the risk of losing student aid money).

But this is not what we want as a society. We want the optimal level of invention, innovation, and commercialization. What anti-patent extremists and short-sighted state governments may obtain for us instead, however, is a status quo much like Europe’s, where the legal and regulatory systems perpetually keep innovation on a low simmer.