
Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (who often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy.  As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures.  Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage confidently with business partners, and deter imitators, potential innovator-entrepreneurs had little hope of obtaining funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform them on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so.

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) had rejected the view that standard-essential patent owners posed a risk of patent holdup high enough to justify special limitations on their enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable exactly these kinds of partnerships.

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early 20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity.

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the weak-IP/strong-antitrust policy bundle does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about an absence of competition were true.

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether by subsidizing overbuilding—allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government can increase the number of firms providing broadband. But it can’t do so without costs, which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach due to low population density or insufficient demand due to low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein once said: “if I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and hives. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
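The logic of these arrangements can be put in numbers. The sketch below is purely illustrative (all figures are invented, not drawn from Cheung’s data): a per-hive pollination fee lets the beekeeper capture the value his hives create for the farmer, so he places the jointly optimal number of hives rather than the smaller number that honey revenue alone would justify.

```python
# Illustrative only: a toy model of the bee/orchard externality with made-up
# numbers. The beekeeper chooses how many hives to place next to an orchard.
# Each hive yields honey revenue for the beekeeper and pollination value for
# the farmer, and tending hives has a rising marginal cost.

HONEY_PER_HIVE = 20        # revenue captured by the beekeeper (hypothetical)
POLLINATION_PER_HIVE = 15  # value captured by the farmer (hypothetical)

def hive_cost(n):
    """Total cost of tending n hives (rising marginal cost)."""
    return 3 * n * n

def best_n(value_per_hive, n_max=30):
    """Hive count that maximizes net value to the decision maker."""
    return max(range(n_max + 1), key=lambda n: value_per_hive * n - hive_cost(n))

# Without a contract, the beekeeper ignores the farmer's pollination gains.
private_n = best_n(HONEY_PER_HIVE)
# A pollination contract (a per-hive fee paid by the farmer) lets the
# beekeeper capture the joint value, so he places the jointly optimal number.
joint_n = best_n(HONEY_PER_HIVE + POLLINATION_PER_HIVE)

print(f"hives without contract: {private_n}, with pollination fee: {joint_n}")
```

In this toy example, the contract doubles the number of hives placed (from three to six); the fee simply prices the spillover that the naive externality story assumed could never be priced.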

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power. The ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). Though it is worth noting that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
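Coase’s point here is ultimately about magnitudes, and a toy calculation makes it concrete. In the sketch below (all numbers are invented), ships sail whenever a voyage’s value exceeds its cost plus any light dues; because the marginal cost of serving one more ship is zero, every voyage deterred by the toll is a deadweight loss, which is Samuelson’s concern, but a toll that is tiny relative to voyage costs deters almost no voyages.

```python
# A stylized sketch of the Coase/Samuelson disagreement, using invented
# numbers. Each ship makes a voyage whenever its net value (value minus
# voyage cost, including any light dues) is positive. The marginal cost of
# letting one more ship see the light is zero, so every voyage deterred by a
# toll is a deadweight loss.

import random

random.seed(0)
VOYAGE_COST = 1000.0                                          # hypothetical non-toll cost of a voyage
values = [random.uniform(900, 3000) for _ in range(10_000)]   # value of each potential voyage

def deadweight_loss(toll):
    """Surplus lost from voyages worthwhile at a zero toll but not at `toll`."""
    return sum(v - VOYAGE_COST for v in values
               if 0 < v - VOYAGE_COST <= toll)

for toll in (2.0, 50.0, 500.0):   # a small toll vs. increasingly large ones
    print(f"toll={toll:6.1f}  deadweight loss={deadweight_loss(toll):12.1f}")
```

Only when the toll grows large relative to the value of marginal voyages does the welfare loss become meaningful; light dues that are a tiny fraction of a voyage’s cost barely register.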

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the first; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 cites) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
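The arithmetic behind the story is easy to reproduce. The sketch below uses made-up functional forms (nothing Hardin estimated): under open access, each herdsman keeps adding animals as long as the average return per animal covers its cost, ignoring the losses imposed on everyone else, so the commons ends up carrying roughly twice the jointly optimal herd.

```python
# An illustrative numerical version of Hardin's grazing story (made-up
# functional forms). Total output on the commons rises with the herd at a
# diminishing rate, so past some point an extra animal lowers everyone's yield.

ANIMAL_COST = 4.0

def total_output(herd):
    return 20 * herd - 0.5 * herd ** 2      # diminishing returns to grazing

def social_optimum(max_herd=40):
    """Herd size that maximizes the joint surplus of all herdsmen."""
    return max(range(max_herd + 1),
               key=lambda h: total_output(h) - ANIMAL_COST * h)

def open_access_equilibrium(max_herd=40):
    """Herdsmen keep adding animals as long as the *average* return per animal
    exceeds its cost -- each ignores the loss he imposes on the others."""
    herd = 1
    while herd < max_herd and total_output(herd + 1) / (herd + 1) > ANIMAL_COST:
        herd += 1
    return herd

print("socially optimal herd:", social_optimum())            # marginal value = cost
print("open-access herd:     ", open_access_equilibrium())   # average value = cost
```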

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.

These bottom-up solutions are certainly not perfect. Many such institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins, and forests—although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, including works by Joseph Farrell & Garth Saloner, and Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
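The kind of model behind this claim can be sketched in a few lines of code. The payoffs below are invented for illustration, in the spirit of increasing-returns adoption models rather than David’s historical narrative: once each new adopter’s payoff depends on the installed base, even a modest early lead can swamp a rival’s stand-alone quality advantage.

```python
# A bare-bones increasing-returns adoption model (illustrative payoffs only).
# Each new typist picks the layout with the higher payoff, where payoffs
# combine a stand-alone quality term, a network term that grows with the
# installed base, and some idiosyncratic taste.

import random

random.seed(1)

QUALITY = {"QWERTY": 1.0, "Dvorak": 1.2}   # assume Dvorak is "better" stand-alone
NETWORK_WEIGHT = 0.05                      # value of each existing user of a layout

def run(early_qwerty_lead=20, adopters=2000):
    installed = {"QWERTY": early_qwerty_lead, "Dvorak": 0}   # the incumbent's head start
    for _ in range(adopters):
        scores = {k: QUALITY[k] + NETWORK_WEIGHT * installed[k] + random.uniform(0, 0.5)
                  for k in installed}
        choice = max(scores, key=scores.get)
        installed[choice] += 1
    return installed

print(run())   # the early lead snowballs into total dominance
```

In this toy run, QWERTY’s head start plus the network term means every subsequent adopter picks it, even though Dvorak is assumed to be better in isolation. The empirical question, of course, is whether the model’s premises held in the real typewriter market.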

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite it being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
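Stripped to its essentials, this assumption can be rendered as a simple decision rule. The parameters below are invented for illustration and are not the paper’s formal model: the larger the perceived probability of a merger, the bigger the quality edge an entrant needs before early adopters will pay the switching cost.

```python
# A stylized rendering of the assumed behavior (invented numbers, not the
# paper's actual model): a user switches to an entrant platform only if the
# expected quality gain, discounted by the chance the entrant is simply
# absorbed by the incumbent, exceeds the switching cost.

def switches(quality_gain, switching_cost, merger_probability):
    """True if an early adopter finds it worthwhile to pay the switching cost.

    If a merger happens, the entrant's improvements are assumed to be folded
    into the incumbent anyway, so early switching yields no extra benefit.
    """
    expected_gain = (1 - merger_probability) * quality_gain
    return expected_gain > switching_cost

# With no prospect of a merger, a modest quality edge justifies switching...
print(switches(quality_gain=10, switching_cost=6, merger_probability=0.0))   # True
# ...but if users expect the entrant to be acquired, only a large edge does.
print(switches(quality_gain=10, switching_cost=6, merger_probability=0.5))   # False
print(switches(quality_gain=20, switching_cost=6, merger_probability=0.5))   # True
```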

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on its own terms, this evidence simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples: the demise of Yahoo, the disruption of early instant-messaging applications and websites, MySpace’s rapid decline, and so on. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this raises an issue that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is at least as likely to exacerbate those problems as to solve them. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they result in popular features such as integrating Maps into relevant Google Search results being prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and it would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that a user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system. 

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

The bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill, sponsored by Rep. Joe Neguse (D-Colo.), would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
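
Mechanically, the proposed schedule is just a step function of transaction value. Here is a rough sketch using the figures quoted above; how the bill treats a deal that falls exactly on a threshold is my assumption, not something I have checked against the bill text.

    # Sketch of the proposed filing-fee schedule, using the figures quoted above.
    # Boundary handling for values exactly at a threshold is an assumption.
    PROPOSED_FEE_SCHEDULE = [
        (5_000_000_000, 2_250_000),   # more than $5 billion
        (2_000_000_000, 800_000),     # $2 billion to $5 billion
        (1_000_000_000, 400_000),     # $1 billion to $2 billion
        (500_000_000, 250_000),       # $500 million to $1 billion
        (161_500_000, 100_000),       # $161.5 million to $500 million
        (0, 30_000),                  # below $161.5 million
    ]

    def filing_fee(deal_value: float) -> int:
        """Return the proposed filing fee for a merger of the given value."""
        for threshold, fee in PROPOSED_FEE_SCHEDULE:
            if deal_value > threshold:
                return fee
        return 30_000

    print(filing_fee(6_000_000_000))  # 2,250,000
    print(filing_fee(750_000_000))    # 250,000
    print(filing_fee(100_000_000))    # 30,000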

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually a good thing depends on how the money is spent at those agencies.

It’s hard to object if it goes towards deepening the agencies’ capacities and knowledge, by hiring and retaining higher quality staff with salaries that are more competitive with those offered by the private sector, and on greater efforts to study the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the activities of the agencies, by doing more and enabling them to pursue a more aggressive enforcement agenda, and supporting whatever of the above proposals make it into law, then it could be very harmful. 

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.
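
The tiered structure can be summarized, very roughly, as a lookup from risk category to regulatory treatment. The sketch below is a simplification of the bullets above, not language from the regulation, and the category labels are shorthand of my own.

    # Rough simplification of the proposed risk tiers described above (not legal text).
    RISK_TIERS = {
        "unacceptable": "prohibited outright",
        "high": "strict requirements before being marketed or used",
        "limited": "minor restrictions, e.g. labeling of deepfakes and chatbots",
        "minimal": "voluntary codes of conduct",
    }

    def regulatory_treatment(risk_category: str) -> str:
        """Look up the treatment an AI system would face under the proposed tiers."""
        if risk_category not in RISK_TIERS:
            raise ValueError(f"unknown risk category: {risk_category!r}")
        return RISK_TIERS[risk_category]

    print(regulatory_treatment("high"))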

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright, but instead categorized as “high-risk” and subject to strict rules before they can be used or put to market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating credit-worthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms with the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail regarding each requirement, some are likely to have competition- and innovation-distorting effects that are worth discussing.

The rule that data used to train, validate, or test a high-risk AI has to be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free datasets exist, or that errors in data can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of research in artificial intelligence, where sometimes only small and lower-quality datasets are available to train AI. A predictable effect is that the rule would benefit large companies that are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire that data.

Providers of high-risk AIs also must submit technical and user documentation detailing voluminous information about the AI system, including descriptions of the AI’s elements and of its development, monitoring, functioning, and control. This documentation must demonstrate that the AI complies with all the requirements for high-risk AIs, in addition to documenting its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be felt particularly by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, as European enterprises already cite liability for potential damages and regulatory obstacles as impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that it can never override a human user. While it may be important that an AI used in, say, the criminal justice system be understandable by humans, this requirement could inhibit sophisticated uses beyond the reasoning of a human brain, such as determining how to safely operate a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost for the use of high-risk AIs.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Such AIs must be labeled to make a user aware they are looking at or listening to manipulated images, video, or audio. AIs must also be labeled to ensure humans are aware when they are speaking to an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure high-risk systems conform with rules that require third party-assessments, such as remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
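
Because each tier is phrased as a fixed amount or a share of worldwide turnover, whichever is greater, the maximum exposure scales with firm size. Here is a small illustrative calculation using the tiers quoted above; the 2-billion-euro turnover figure is hypothetical.

    # Illustrative calculation of maximum fines under the proposed tiers above.
    # The example turnover figure is hypothetical.
    PENALTY_TIERS_EUR = {
        "prohibited_ai_or_data_quality": (30_000_000, 0.06),
        "other_high_risk_requirements": (20_000_000, 0.04),
        "incorrect_or_misleading_information": (10_000_000, 0.02),
    }

    def max_fine(violation: str, worldwide_turnover: float) -> float:
        """Return the fixed amount or the turnover share, whichever is greater."""
        fixed, share = PENALTY_TIERS_EUR[violation]
        return max(fixed, share * worldwide_turnover)

    turnover = 2_000_000_000
    print(max_fine("prohibited_ai_or_data_quality", turnover))        # 120,000,000.0
    print(max_fine("other_high_risk_requirements", turnover))         # 80,000,000.0
    print(max_fine("incorrect_or_misleading_information", turnover))  # 40,000,000.0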

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas B. Nachbar is a professor of law at the University of Virginia School of Law and a senior fellow at the Center for National Security Law.]

It would be impossible to describe Ajit Pai’s tenure as chair of the Federal Communications Commission as ordinary. Whether or not you thought his regulatory style or his policies were innovative, his relationship with the public has been singular for an FCC chair. His Reese’s mug alone has occupied more space in the American media landscape than practically any past FCC chair. From his first day, he has attracted consistent, highly visible criticism from a variety of media outlets, although at least John Oliver didn’t describe him as a dingo. Just today, I read that Ajit Pai single-handedly ruined the internet, which, when I got up this morning, seemed to be working pretty much the same way it was four years ago.

I might be biased in my view of Ajit. I’ve known him since we were law school classmates, when he displayed the same zeal and good-humored delight in confronting hard problems that I’ve seen in him at the commission. So I offer my comments not as an academic and student of FCC regulation, but rather as an observer of the communications regulatory ecosystem that Ajit has dominated since his appointment. And while I do not agree with everything he’s done at the commission, I have admired his single-minded determination to pursue policies that he believes will expand access to advanced telecommunications services. One can disagree with how he’s pursued that goal—and many have—but characterizing his time as chair in any other way simply misses the point. Ajit has kept his eye on expanding access, and he has been unwavering in pursuit of that objective, even when doing so has opened him to criticism, which is the definition of taking political risk.

Thus, while I don’t think it’s going to be the most notable policy he’s participated in at the commission, I would like to look at Ajit’s tenure through the lens of one small part of one fairly specific proceeding: the commission’s decision to include SpaceX as a low-latency provider in the Rural Digital Opportunity Fund (RDOF) Auction.

The decision to include SpaceX is at one level unremarkable. SpaceX proposes to offer broadband internet access through low-Earth-orbit satellites, which is the kind of thing that is completely amazing but is becoming increasingly un-amazing as communications technology advances. SpaceX’s decision to use satellites is particularly valuable for initiatives like the RDOF, which specifically seek to provide services where previous (largely terrestrial) services have not. That is, in fact, the whole point of the RDOF, a point that sparked fiery debate over the FCC’s decision to focus the first phase of the RDOF on areas with no service rather than areas with some service. Indeed, if anything typifies the current tenor of the debate (at the center of which Ajit Pai has resided since his confirmation as chair), it is that a policy decision over which kind of under-served areas should receive more than $16 billion in federal funding should spark such strongly held views. In the end, SpaceX was awarded $885.5 million to participate in the RDOF, almost 10% of the first-round funds awarded.

But on a different level, the decision to include SpaceX is extremely remarkable. Elon Musk, SpaceX’s pot-smoking CEO, does not exactly fit regulatory stereotypes. (Disclaimer: I personally trust Elon Musk enough to drive my children around in one of his cars.) Even more significantly, SpaceX’s Starlink broadband service doesn’t actually exist as a commercial product. If you go to Starlink’s website, you won’t find a set of splashy webpages featuring products, services, testimonials, and a variety of service plans eager for a monthly assignation with your credit card or bank account. You will be greeted with a page asking for your email and service address in case you’d like to participate in Starlink’s beta program. In the case of my address, which is approximately 100 miles from the building where the FCC awarded SpaceX over $885 million to participate in the RDOF, Starlink is not yet available. I will, however, “be notified via email when service becomes available in your area,” which is reassuring but doesn’t get me any closer to watching cat videos.

That is perhaps why Chairman Pai was initially opposed to including SpaceX in the low-latency portion of the RDOF. SpaceX was offering unproven technology and previous satellite offerings had been high-latency, which is good for some uses but not others.

But then, an even more remarkable thing happened, at least in Washington: a regulator at the center of a controversial issue changed his mind and—even more remarkably—admitted his decision might not work out. When the final order was released, SpaceX was allowed to bid for low-latency RDOF funds even though the commission was “skeptical” of SpaceX’s ability to deliver on its low-latency promise. Many doubted that SpaceX would be able to effectively compete for funds, but as we now know, that decision led to SpaceX receiving a large share of the Phase I funds. Of course, that means that if SpaceX doesn’t deliver on its latency promises, a substantial part of the RDOF Phase I funds will fail to achieve their purpose, and the FCC will have backed the wrong horse.

I think we are unlikely to see such regulatory risk-taking, both technically and politically, in what will almost certainly be a more politically attuned commission in the coming years. Even less likely will be acknowledgments of uncertainty in the commission’s policies. Given the political climate and the popular attention policies like network neutrality have attracted, I would expect the next chair’s views about topics like network neutrality to exhibit more unwavering certainty than curiosity and more resolve than risk-taking. The most defining characteristic of modern communications technology and markets is change. We are all better off with a commission in which the other things that can change are minds.

With the COVID-19 vaccine made by Moderna joining the one from Pfizer and BioNTech in gaining approval from the U.S. Food and Drug Administration, it should be time to celebrate the U.S. system of pharmaceutical development. The system’s incentives—notably granting patent rights to firms that invest in new and novel discoveries—have worked to an astonishing degree, producing not just one but as many as three or four effective approaches to end a viral pandemic that, just a year ago, was completely unknown.

Alas, it appears not all observers agree. Now that we have the vaccines, some advocate suspending or limiting patent rights—for example, by imposing a compulsory licensing scheme—with the argument that this is the only way for the vaccines to be produced in mass quantities worldwide. Some critics even assert that abolishing or diminishing property rights in pharmaceuticals is needed to end the pandemic.

In truth, we can effectively and efficiently distribute the vaccines while still maintaining the integrity of our patent system. 

What the false framing ignores are the important commercialization and distribution functions that patents provide, as well as the deep, long-term incentives the patent system provides to create medical innovations and develop a robust pharmaceutical supply chain. Unless we are sure this is the last pandemic we will ever face, repealing intellectual property rights now would be a catastrophic mistake.

The supply chains necessary to adequately scale drug production are incredibly complex, and do not appear overnight. The coordination and technical expertise needed to support worldwide distribution of medicines depends on an ongoing pipeline of a wide variety of pharmaceuticals to keep the entire operation viable. Public-spirited officials may in some cases be able to piece together facilities sufficient to produce and distribute a single medicine in the short term, but over the long term, global health depends on profit motives to guarantee the commercialization pipeline remains healthy. 

But the real challenge is in maintaining proper incentives to develop new drugs. It has long been understood that information goods like intellectual property will be undersupplied without sufficient legal protections. Innovators and those that commercialize innovations—like researchers and pharmaceutical companies—have less incentive to discover and market new medicines as the likelihood that they will be able to realize a return for their efforts diminishes. Without those returns, it’s far less certain the COVID vaccines would have been produced so quickly, or at all. The same holds for the vaccines we will need for the next crisis or badly needed treatments for other deadly diseases.

Patents are not the only way to structure incentives, as can be seen with the current vaccines. Pharmaceutical companies also took financial incentives from various governments in the form of direct payment or in purchase guarantees. But this enhances, rather than diminishes, the larger argument. There needs to be adequate returns for those who engage in large, risky undertakings like creating a new drug. 

Some critics would prefer to limit pharmaceutical companies’ returns solely to those early government investments, but there are problems with this approach. It is difficult for governments to know beforehand what level of profit is needed to properly incentivize firms to engage in producing these innovations.  To the extent that direct government investment is useful, it often will be as an additional inducement that encourages new entry by multiple firms who might each pursue different technologies. 

Thus, in the case of coronavirus vaccines, government subsidies may have enticed more competitors to enter more quickly, or not to drop out as quickly, in hopes that they would still realize a profit, notwithstanding the risks. Where there might have been only one or two vaccines produced in the United States, it appears likely we will see as many as four.

But there will always be necessary trade-offs. Governments cannot know how to set proper incentives to encourage development of every possible medicine for every possible condition by every possible producer.  Not only do we not know which diseases and which firms to prioritize, but we have no idea how to determine which treatment approaches to encourage. 

The COVID-19 vaccines provide a clear illustration of this problem. We have seen development of both traditional vaccines and experimental mRNA treatments to combat the virus. Thankfully, both have shown positive results, but there was no way to know that in March. In this perennial state of ignorance, markets generally have provided the best—though still imperfect—way to make decisions. 

The patent system’s critics sometimes claim that prizes would offer a better way to encourage discovery. But if we relied solely on government-directed prizes, we might never have had the needed research into the technology that underlies mRNA. As one recent report put it, “before messenger RNA was a multibillion-dollar idea, it was a scientific backwater.” Simply put, without patent rights as the backstop to purely academic or government-led innovation and commercialization, it is far less likely that we would have seen successful COVID vaccines developed as quickly.

It is difficult for governments to be prepared for the unknown. Abolishing or diminishing pharmaceutical patents would leave us even less prepared for the next medical crisis. That would only add to the lasting damage that the COVID-19 pandemic has already wrought on the world.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued midwest farmers from shortages that would have decimated their businesses, it invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.
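
The asymmetry in that trade is easy to see with a stylized payoff comparison. The numbers below are invented purely for illustration and are not from the book; they simply contrast an unbounded short position with a storage position whose downside is roughly capped.

    # Stylized, hypothetical payoff comparison (all figures invented for illustration).
    def short_oil_payoff(entry_price, later_price, barrels):
        """A short position gains if prices fall but loses without bound if they rise."""
        return (entry_price - later_price) * barrels

    def barge_lease_payoff(lease_cost, resale_value, glut_premium, glut_happens):
        """Buying cheap storage leases: large upside in a glut, downside roughly
        capped at the lease cost minus what the lease can still be sold or used for."""
        return (glut_premium if glut_happens else resale_value) - lease_cost

    # If the forecast oversupply never arrives:
    print(short_oil_payoff(60, 80, 1_000_000))                          # -20,000,000: the short loses badly
    print(barge_lease_payoff(5_000_000, 4_800_000, 12_000_000, False))  # -200,000: roughly break even
    # If it does arrive:
    print(barge_lease_payoff(5_000_000, 4_800_000, 12_000_000, True))   # 7,000,000: storage premium captured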

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market, which is to discover slack or scarce resources in the system and manage them in a way that they will be available for utilization when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning” per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesizes a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest for the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would entitle the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of the introduction of this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change, as compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole; and (ii) not in relation to specific product applications.

The SIII theory would also be distinct from the "innovation markets" framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream "innovation markets," i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is far more intrusive, however, because R&D incentives are considered in the abstract, without any further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

Given this, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rely on sound theoretical and empirical infrastructure. Yet, despite efforts by some of the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and screen initial intuitions of adverse effects of mergers on innovation through the lenses of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives such as R&D figures and patent statistics as a preliminary screening tool for the assessment of the effects of the merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised or are equal in terms of commercial value.
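
To see why R&D intensity in particular is a noisy signal: it is simply R&D expenditure divided by revenue, so it can move for reasons that have nothing to do with research effort. A toy illustration, with hypothetical figures chosen purely for arithmetic clarity:

    # Toy illustration (hypothetical figures) of why R&D intensity is a crude proxy:
    # intensity = R&D spend / revenue, so it rises mechanically when revenue falls.
    def rd_intensity(rd_spend: float, revenue: float) -> float:
        return rd_spend / revenue

    # Year 1: 1.0bn of R&D on 20bn of revenue.
    print(rd_intensity(1.0, 20.0))   # 0.05
    # Year 2: identical R&D spend, but commodity prices fall and revenue drops to 16bn.
    print(rd_intensity(1.0, 16.0))   # 0.0625 -> higher "intensity" with no change in research effort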

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in those markets that are being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, and on pain of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not discount the maturity of patents and, in particular, do not say much about whether the patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may only be one amongst many other factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – had fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Last, antitrust agencies are well placed to understand that beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies that are pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title captures it all: "The Pretense of Knowledge."

The Apple E-Books Antitrust Case: Implications for Antitrust Law and for the Economy

February 15, 2016

truthonthemarket.com

The appellate court’s 2015 decision affirming the district court’s finding of per se liability in United States v. Apple provoked controversy over the legal and economic merits of the case, its significance for antitrust jurisprudence, and its implications for entrepreneurs, startups, and other economic actors throughout the economy. Apple has filed a cert petition with the Supreme Court, which will decide on February 19th whether to hear the case.

On Monday, February 15 and Tuesday February 16, Truth on the Market and the International Center for Law and Economics will present a blog symposium discussing the case and its implications.

We’ve lined up an outstanding and diverse group of scholars, practitioners and other experts to participate in the symposium. The full archive of symposium posts can be found at this link, and individual posts can be accessed by clicking on the author’s name below.

Also see our previous posts at Truth on the Market discussing the Apple e-books case for a preview of many of the issues to be discussed.

Shanker Singham of the Babson Global Institute (formerly a leading international trade lawyer and author of the most comprehensive one-volume work on the interplay between competition and international trade policy) has published a short article introducing the concept of “enterprise cities.”  This article, which outlines an incentives-based, market-oriented approach to spurring economic development, is well worth reading.  A short summary follows.

Singham points out that the transition away from socialist command-and-control economies, accompanied by international trade liberalization, too often failed to create competitive markets within developing countries.  Anticompetitive market distortions imposed by government and generated by politically-connected domestic rent-seekers continue to thrive – measures such as entry barriers that favor entrenched incumbent firms, and other regulatory provisions that artificially favor specific powerful domestic business interests (“crony capitalists”).  Such widespread distortions reduce competition and discourage inward investment, thereby retarding innovation and economic growth and reducing consumer welfare.  Political influence exercised by the elite beneficiaries of the distortions may prevent legal reforms that would remove these regulatory obstacles to economic development.  What, then, can be done to disturb this welfare-inimical state of affairs, when sweeping, nationwide legal reforms are politically impossible?

One incremental approach, advanced by Professor Paul Romer and others, is the establishment of “charter cities” – geographic zones within a country that operate under government-approved free market-oriented charters, rather than under restrictive national laws.  Building on this concept, Babson Global Institute has established a “Competitiveness and Enterprise Development Project” (CEDP) designed to promote the notion of “Enterprise Cities” (ECs) – geographically demarcated zones of regulatory autonomy within countries, governed by a Board.  ECs would be created through negotiations between a national government and a third party group, such as CEDP.  The negotiations would establish “Regulatory Framework Agreements” embodying legal rules (implemented through statutory or constitutional amendments by the host country) that would apply solely within the EC.  Although EC legal regimes would differ with respect to minor details (reflecting local differences that would affect negotiations), they would be consistent in stressing freedom of contract, flexible labor markets, and robust property rights, and in prohibiting special regulatory/legal favoritism (so as to avoid anticompetitive market distortions).  Protecting foreign investment through third party arbitration and related guarantees would be key to garnering foreign investor interest in ECs.   The goal would be to foster a business climate favorable to investment, job creation, innovation, and economic growth.  The EC Board would ensure that agreed-to rules would be honored and enforced by EC-specific legal institutions, such as courts.

Because market-oriented EC rules will not affect market-distortive laws elsewhere within the host country, well-organized rent-seeking elites may not have as strong an incentive to oppose creating ECs.  Indeed, to the extent that a share of EC revenues is transferred to the host country government (depending upon the nature of the EC’s charter), elites might directly benefit, using their political connections to share in the profits.  In short, although setting up viable ECs is no easy matter, their establishment need not be politically infeasible.  Indeed, the continued success of Hong Kong as a free market island within China (Hong Kong places first in the Heritage Foundation’s Index of Economic Freedom), operating under the Basic Law of Hong Kong, suggests the potential for ECs to thrive, despite having very different rules than the parent state’s legal regime.  (Moreover, the success of Hong Kong may have proven contagious, as China is now promoting a new Shanghai Free Trade Zone that would compete with Hong Kong and Singapore.)

The CEDP is currently negotiating the establishment of ECs with a number of governments.  As Singham explains, successful launch of an EC requires:  (1) a committed developer; (2) land that can be used for a project; (3) a good external infrastructure connecting the EC with the rest of the country; and (4) “a government that recognizes the benefits to its reform agenda and to its own economic plan of such a designation of regulatory autonomy and is willing to confront its own challenges by thinking outside the box.”  While the fourth prerequisite may be the most difficult to achieve, internal pressures for faster economic growth and increased investment may lead jurisdictions with burdensome regulatory regimes to consider ECs.

Furthermore, as Singham stresses, by promoting competition on the merits, free from favoritism, a successful EC could stimulate entrepreneurship.  Scholarly work points to the importance of entrepreneurship to economic development.

Finally, the beneficial economic effects of ECs could give national competition authorities additional ammunition as they advocate for less restrictive regulatory frameworks within their jurisdictions.  ECs could thereby render more effective the efforts of the many such new authorities, whose success in enhancing competitive conditions has been limited at best.

ECs are no panacea – they will not directly affect restrictive national regulatory laws that benefit privileged special interests but harm the overall economy.  However, to the extent they prove financial successes, over time they could play a crucial indirect role in enhancing competition, reducing inefficiency, and spurring economic growth within their host countries.

Last week, a group of startup investors wrote a letter protesting what they assume FCC Chairman Tom Wheeler’s proposed, revised Open Internet NPRM will say.

Bear in mind that an NPRM is a proposal, not a final rule, and its issuance starts a public comment period. Bear in mind, as well, that the proposal isn’t public yet, presumably none of the signatories to this letter has seen it, and the devil is usually in the details. That said, the letter has been getting a lot of press.

I found the letter seriously wanting, and seriously disappointing. But it’s a perfect example of what’s so wrong with this interminable debate on net neutrality.

Below I reproduce the letter in full, in quotes, with my comments interspersed. The key take-away: Neutrality (or non-discrimination) isn’t what’s at stake here. What’s at stake is zero-cost access by content providers to broadband networks. One can surely understand why content providers and those who fund them want their costs of doing business to be lower. But the rhetoric of net neutrality is mismatched with this goal. It’s no wonder they don’t just come out and say it – it’s quite a remarkable claim.

Open Internet Investors Letter

The Honorable Tom Wheeler, Chairman
Federal Communications Commission
445 12th Street, SW
Washington D.C. 20554

May 8, 2014

Dear Chairman Wheeler:

We write to express our support for a free and open Internet.

We invest in entrepreneurs, investing our own funds and those of our investors (who are individuals, pension funds, endowments, and financial institutions).  We often invest at the earliest stages, when companies include just a handful of founders with largely unproven ideas. But, without lawyers, large teams or major revenues, these small startups have had the opportunity to experiment, adapt, and grow, thanks to equal access to the global market.

“Equal” access has nothing to do with it. No startup is inherently benefited by being “equal” to others. Maybe this is just careless drafting. But frankly, as I’ll discuss, there are good reasons to think (contra the pro-net neutrality narrative) that startups will be helped by inequality, just as payola helps new artists despite the accepted, and totally wrong, narrative to the contrary. That these investors advocate “equality” even though it may harm startups says more than they would like about what they really want (more on this later).

Presumably what “equal” really means here is “zero cost”: “As long as my startup pays nothing for access to ISPs’ subscribers, it’s fine that we’re all on equal footing.” Wheeler has stated his intent that his proposal would require any prioritization to be available to any who want it, on equivalent, commercially reasonable terms. That’s “equal,” too, so what’s to complain about? But it isn’t really inequality that’s gotten everyone so upset.

Of course, access is never really “zero cost”; startups wouldn’t need investors if their costs were zero. In that sense, why is equality of ISP access any more important than other forms of potential equality? Why not mandate price controls on rent? Why not mandate equal rent? A cost is a cost. What’s really going on here is that, like Netflix, these investors want to lower their costs and raise their returns as much as possible, and they want the government to do it for them.

As a result, some of the startups we have invested in have managed to become among the most admired, successful, and influential companies in the world.

No startup became successful as a result of “equality” or even zero-cost access to broadband. No doubt some of their business models were predicated on that assumption. But it didn’t cause their success.

We have made our investment decisions based on the certainty of a level playing field and of assurances against discrimination and access fees from Internet access providers.

And they would make investment decisions based on the possibility of an un-level playing field if that were the status quo. More importantly, the businesses vying for investment dollars might be different ones if they built their business models in a different legal/economic environment. So what? This says nothing about the amount of investment, the types of businesses, or the quality of businesses that would arise under a different set of rules. It says only that past specific investments might not have been made.

Unless the contention is that businesses would be systematically worse under a different rule, this is irrelevant. I have seen that claim made, and it’s implicit here, of course, but I’ve seen no evidence to actually support it. Businesses thrive in unequal, cost-laden environments all the time. It costs about $4 million for a 30-second ad during the Super Bowl. Budweiser and PepsiCo paid multiple millions this year to do so; many of their competitors didn’t. With inequality like that, it’s a wonder Sierra Nevada and Dr. Pepper haven’t gone bankrupt.

Indeed, our investment decisions in Internet companies are dependent upon the certainty of an equal-opportunity marketplace.

Again, no they’re not. Equal opportunity is a euphemism for zero cost, or else this is simply absurd on its face. Are these investors so lacking in creativity and ability that they can invest only when there is certainty of equal opportunity? Don’t investors thrive – aren’t they most needed – in environments where arbitrage is possible, where a creative entrepreneur can come up with a risky, novel way to take advantage of differential conditions better than his competitors? Moreover, the implicit equating of “equal-opportunity marketplace” with net neutrality rules is far-fetched. Is that really all that matters?

This is a good time to make a point that is so often missed: The loudest voices for net neutrality are the biggest companies – Google, Netflix, Amazon, etc. That fact should give these investors and everyone else serious pause. Their claim rests on the idea that “equality” is needed, so big companies can’t use an Internet “fast lane” to squash them. Google is decidedly a big company. So why do the big boys want this so much?

The battle is often pitched as one of ISPs vs. (small) content providers. But content providers have far less to worry about and face far less competition from broadband providers than from big, incumbent competitors. It is often claimed that “Netflix was able to pay Comcast’s toll, but a small startup won’t have that luxury.” But Comcast won’t even notice or care about a small startup; its traffic demands will be inconsequential. Netflix can afford to pay for Internet access for precisely the same reason it came to Comcast’s attention: It’s hugely successful, and thus creates a huge amount of traffic.

Based on news reports and your own statements, we are worried that your proposed rules will not provide the necessary certainty that we need to make investment decisions and that these rules will stifle innovation in the Internet sector.

Now, there’s little doubt that legal certainty aids investment decisions. But “certainty” is not in danger here. The rules have to change because the court said so – with pretty clear certainty. And a new rule is not inherently any more or less likely to offer certainty than the previous Open Internet Order, which itself was subject to intense litigation (obviously) and would have been subject to interpretation and inconsistent enforcement (and would have allowed all kinds of paid prioritization, too!). Certainty would be good, but Wheeler’s proposed rule is unlikely to change the amount of certainty one way or the other.

If established companies are able to pay for better access speeds or lower latency, the Internet will no longer be a level playing field. Start-ups with applications that are advantaged by speed (such as games, video, or payment systems) will be unlikely to overcome that deficit no matter how innovative their service.

Again, it’s notable that some of the strongest advocates for net neutrality are established companies. Another letter sent out last week included signatures from a bunch of startups, but also Google, Microsoft, Facebook and Yahoo!, among others.

In truth, it’s hard to see why startup investors would think this helps them. Non-neutrality offers the prospect that a startup might be able to buy priority access to overcome the inherent disadvantage of newness, and to better compete with an established company. Neutrality means that that competitive advantage is impossible, and the baseline relative advantages and disadvantages remain – which helps incumbents, not startups. With a neutral Internet, the advantages of the incumbent competitor can’t be dissipated by a startup buying a favorable leg-up in speed, and the Netflixes of the world will be more likely to continue to dominate.

Of course the claim is that incumbents will use their huge resources to gain even more advantage with prioritized access. Implicit in this must be the assumption that the advantage a startup could gain by buying priority offers less return than the cost imposed on it by its inherent disadvantages in reputation, brand awareness, customer base, and the like. But that’s not plausible for all or even most startups. And investors exist precisely because they are able to provide funds for which there is a likelihood of a good return – so if paying for priority would help overcome inherent disadvantages, there would be money for it.

Also implicit is the claim that the benefits to incumbents (over and above their natural advantages) from paying for priority, in terms of hamstringing new entrants, will outweigh the cost. This, too, is generally unlikely to be true. They already have advantages. Sure, sometimes they might want to pay for more, but in precisely the cases where it would be worth it to do so, the new entrant would also be most benefited by doing so itself – ensuring, again, that investment funds will be available.

Of course if both incumbents and startups decide paying for priority is better, we’re back to a world of “equality,” so what’s to complain about, based on this letter? This puts into stark relief that what these investors really want is government-mandated, subsidized broadband access, not “equality.”

Now, it’s conceivable that that is the optimal state of affairs, but if it is, it isn’t for the reasons given here, nor has anyone actually demonstrated that it is the case.

Entrepreneurs will need to raise money to buy fast lane services before they have proven that consumers want their product. Investors will extract more equity from entrepreneurs to compensate for the risk.

Internet applications will not be able to afford to create a relationship with millions of consumers by making their service freely available and then build a business over time as they better understand the value consumers find in their service (which is what Facebook, Twitter, Tumblr, Pinterest, Reddit, Dropbox and virtually every other consumer Internet service did to achieve scale).

In other words: “Subsidize us. We’re worth it.” Maybe. But this is probably more revealing than intended. The Internet cost something to someone to build. (Actually, it has cost broadband providers more than a trillion dollars.) This just says “we shouldn’t have to pay them for it now.” Fine, but who should pay, then, and how do you know that forcing someone else to subsidize these startup companies will actually lead to better results? Mightn’t we get less broadband investment, such that there is little Internet available for these companies to take advantage of in the first place? If broadband consumers instead of content consumers foot the bill, is that clearly preferable, either from a social welfare perspective or even in terms of the self-interest of these investors, who, after all, ultimately rely on consumer spending to earn their return?

Moreover, why is this “build for free, then learn how to monetize over time” business model necessarily better than any other? These startup investors know better than anyone that enshrining existing business models just because they exist is the antithesis of innovation and progress. But that’s exactly what they’re saying – “the successful companies of the past did it this way, so we should get a government guarantee to preserve our ability to do it, too!”

This is the most depressing implication of this letter. These investors and others like them have been responsible for financing enormously valuable innovations. If even they can’t see the hypocrisy of these claims for net neutrality – and worse, choose to propagate it further – then we really have come to a sad place. When innovators argue passionately for stagnation, we’re in trouble.

Instead, creators will have to ask permission of an investor or corporate hierarchy before they can launch. Ideas will be vetted by committees and quirky passion projects will not get a chance. An individual in a dorm room or a design studio will not be able to experiment out loud on the Internet. The result will be greater conformity, fewer surprises, and less innovation.

This is just a little too much protest. Creators already have to ask “permission” – or are these investors just opening up their bank accounts to whoever wants their money? The ones that are able to do it on a shoestring, with money saved up from babysitting gigs, may find higher costs, and the need to do more babysitting. But again, there is nothing special about the Internet in this. Let’s mandate zero-cost office space and office supplies and developer services and design services and . . . etc. for all – then we’ll have way more “permission-less” startups. If it’s just a handout they want, they should say so, instead of pretending there is a moral or economic welfare basis for their claims.

Further, investors like us will be wary of investing in anything that access providers might consider part of their future product plans for fear they will use the same technical infrastructure to advantage their own services or use network management as an excuse to disadvantage competitive offerings.

This is crazy. For the same reasons I mentioned above, the big access provider (and big incumbent competitor, for that matter) already has huge advantages. If these investors aren’t already wary of investing in anything that Google or Comcast or Apple or… might plan to compete with, they must be terrible at their jobs.

What’s more, Wheeler’s much-reviled proposal (what we know about it, that is), to say nothing of antitrust law, clearly contemplates exactly this sort of foreclosure and addresses it. “Pure” net neutrality doesn’t add much, if anything, to the limits those laws already provide or would provide.

Policing this will be almost impossible (even using a standard of “commercial reasonableness”) and access providers do not need to successfully disadvantage their competition; they just need to create a credible threat so that investors like us will be less inclined to back those companies.

You think policing the world of non-neutrality is hard – try policing neutrality. It’s not as easy as proponents make it out to be. It’s simply never been the case that all bits at all times have been treated “neutrally” on the Internet. Any version of an Open Internet Order (just like the last one, for example) will have to recognize this.

Larry Downes compiled a list of the exceptions included in the last Open Internet Order when he testified before the House Judiciary Committee on the rules in 2011. There are 16 categories of exemption, covering a wide range of fundamental components of broadband connectivity, from CDNs to free Wi-Fi at Starbucks. His testimony is a tour de force, and should be required reading for everyone involved in this debate.

But think about how the manifest advantages of these non-neutral aspects of broadband networks would be squared with “real” neutrality. If these investors are to be taken at their word, their arguments would preclude all of the Open Internet Order’s exemptions, too. And if any sort of inequality is going to be deemed acceptable, how accurately would regulators distinguish between “illegitimate” inequality and the kind that lets coffee shops subsidize broadband? How does the simplistic logic of net equality distinguish between, say, Netflix’s colocated servers and a startup like Uber being integrated into Google Maps? The simple answer is that it doesn’t, and the claims and arguments of this letter are woefully inadequate to the task.

We need simple, strong, enforceable rules against discrimination and access fees, not merely against blocking.

No, we don’t. Or, at least, no one has made that case. These investors want a handout; that is the only case this letter makes.

We encourage the Commission to consider all available jurisdictional tools at its disposal in ensuring a free and open Internet that rewards, not disadvantages, investment and entrepreneurship.

… But not investment in broadband, and not entrepreneurship that breaks with the business models of the past. In reality, this letter is simple rent-seeking: “We want to invest in what we know, in what’s been done before, and we don’t want you to do anything to make that any more costly for us. If that entails impairing broadband investment or imposing costs on others, so be it – we’ll still make our outsized returns, and they can write their own letter complaining about ‘inequality.’”

A final point I have to make. Although the investors don’t come right out and say it, many others have, and it’s implicit in the investors’ letter: “Content providers shouldn’t have to pay for broadband. Users already pay for the service, so making content providers pay would just let ISPs double dip.” The claim is deeply problematic.

For starters, it’s another form of the status quo mentality: “Users have always paid and content hasn’t, so we object to any deviation from that.” But it needn’t be that way. And of course models frequently coexist where different parties pay for the same or similar services. Some periodicals are paid for by readers and offer little or no advertising; others charge a subscription and offer paid ads; and still others are offered for free, funded entirely by ads. All of these models work. None is “better” than the others. There is no reason the same isn’t true for broadband and content.

Net neutrality claims that the only proper price to charge on the content side of the market is zero. (Congratulations: You’re in the same club as that cutting-edge, innovative technology, the check, which is cleared at par by government fiat. That subsidy no doubt explains why checks have managed to last this long.) As an economic matter, that’s possible; it could be that zero is the right price. But it most certainly needn’t be, and issues revolving around Netflix’s traffic and the ability of ISPs and Netflix to handle it cost-effectively are evidence that zero may well not be the right price.

The reality is that these sorts of claims are devoid of economic logic – which is presumably why they, like the whole net neutrality “movement” generally, appeal so gratuitously to emotion rather than reason. But it doesn’t seem unreasonable to hope for more from a bunch of savvy financiers.