Archives For Hayek

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2003 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

United Brands remains the EU's foundational case on excessive pricing, and the European Commission reiterated that such exploitative abuses were actionable when it published its 2009 guidance paper on abuse-of-dominance enforcement. Even so, the commission for many years showed little interest in bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
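
To make the contrast concrete, the following is a minimal sketch (in Python, purely for illustration) of the two decision rules described above. The cost benchmarks, the exclusionary-plan flag, and the recoupment flag are stylized inputs of my own devising; real cases turn on far richer evidence than a boolean.

```python
from dataclasses import dataclass

@dataclass
class PricingConduct:
    price: float               # price charged by the dominant firm
    avg_variable_cost: float   # AVC, the EU's lower cost benchmark
    avg_total_cost: float      # ATC, the EU's upper cost benchmark
    incremental_cost: float    # the U.S. "appropriate measure" of cost
    exclusionary_plan: bool    # evidence of a plan to eliminate a competitor
    recoupment_likely: bool    # realistic prospect of recouping the losses

def eu_predation(c: PricingConduct) -> bool:
    """Stylized EU rule: below-AVC pricing is presumed predatory; pricing
    between AVC and ATC is predatory if part of an exclusionary plan.
    Recoupment is not required."""
    if c.price < c.avg_variable_cost:
        return True
    return c.price < c.avg_total_cost and c.exclusionary_plan

def us_predation(c: PricingConduct) -> bool:
    """Stylized U.S. rule (Brooke Group): below-cost pricing and a realistic
    prospect of recoupment are both required."""
    return c.price < c.incremental_cost and c.recoupment_likely

# Example: below-cost pricing with no realistic path to recoupment.
conduct = PricingConduct(price=8.0, avg_variable_cost=10.0, avg_total_cost=12.0,
                         incremental_cost=10.0, exclusionary_plan=False,
                         recoupment_likely=False)
print(eu_predation(conduct))  # True  -> abuse under the EU approach
print(us_predation(conduct))  # False -> no liability under Brooke Group
```

The point of the sketch is simply that the recoupment flag never enters the EU rule, which is exactly the feature criticized below.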

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2003.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that provided by U.S. law’s reliance on the consumer welfare standard. The EU process is instead driven by a number of parallel—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to these findings and, with little substantive reasoning of its own, fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

My new book, How to Regulate: A Guide for Policymakers, will be published in a few weeks.  A while back, I promised a series of posts on the book’s key chapters.  I posted an overview of the book and a description of the book’s chapter on externalities.  I then got busy on another writing project (on horizontal shareholdings—more on that later) and dropped the ball.  Today, I resume my book summary with some thoughts from the book’s chapter on public goods.

With most goods, the owner can keep others from enjoying what she owns, and, if one person enjoys the good, no one else can do so.  Consider your coat or your morning cup of Starbucks.  You can prevent me from wearing your coat or drinking your coffee, and if you choose to let me wear the coat or drink the coffee, it’s not available to anyone else.

There are some amenities, though, that are “non-excludable,” meaning that the owner can’t prevent others from enjoying them, and “non-rivalrous,” meaning that one person’s consumption of them doesn’t prevent others from enjoying them as well.  National defense and local flood control systems (levees, etc.) are like this.  So are more mundane things like public art projects and fireworks displays.  Amenities that are both non-excludable and non-rivalrous are “public goods.”

[NOTE:  Amenities that are either non-excludable or non-rivalrous, but not both, are “quasi-public goods.”  Such goods include excludable but non-rivalrous “club goods” (e.g., satellite radio programming) and non-excludable but rivalrous “commons goods” (e.g., public fisheries).  The public goods chapter of How to Regulate addresses both types of quasi-public goods, but I won’t discuss them here.]

The primary concern with public goods is that they will be underproduced.  That’s because the producer, who must bear all the cost of producing the good, cannot exclude benefit recipients who do not contribute to the good’s production and thus cannot capture many of the benefits of his productive efforts.

Suppose, for example, that a levee would cost $5 million to construct and would create $10 million of benefit by protecting 500 homeowners from expected losses of $20,000 each (i.e., the levee would eliminate a 10% chance of a big flood that would cause each homeowner a $200,000 loss).  To maximize social welfare, the levee should be built.  But no single homeowner has an incentive to build the levee.  At least 250 homeowners would need to combine their resources to make the levee project worthwhile for participants (250 * $20,000 in individual benefit = $5 million), but most homeowners would prefer to hold out and see if their neighbors will finance the levee project without their help.  The upshot is that the levee never gets built, even though its construction is value-enhancing.
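
The arithmetic of the hold-out problem is easy to verify. Below is a minimal Python sketch of the numbers in the example above; the variable names and the 250-member coalition are simply the stylized assumptions from the text.

```python
# Stylized levee example: 500 homeowners, each facing a 10% chance of a
# $200,000 flood loss; the levee costs $5 million to build.
homeowners = 500
flood_prob = 0.10
loss_per_home = 200_000
levee_cost = 5_000_000

expected_benefit_per_home = flood_prob * loss_per_home    # $20,000
total_benefit = homeowners * expected_benefit_per_home    # $10,000,000
print(total_benefit > levee_cost)  # True: building the levee is value-enhancing

# Smallest coalition whose members at least break even if they split the cost.
min_contributors = levee_cost / expected_benefit_per_home  # 250 homeowners
print(min_contributors)

# The hold-out problem: if a 250-member coalition forms without me, I enjoy
# the $20,000 expected benefit for free, so my dominant strategy is to hold out.
payoff_if_i_free_ride = expected_benefit_per_home                      # $20,000
payoff_if_i_contribute = expected_benefit_per_home - levee_cost / 250  # $0
print(payoff_if_i_free_ride > payoff_if_i_contribute)  # True
```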

Economists have often jumped from the observation that public goods are susceptible to underproduction to the conclusion that the government should tax people and use the revenues to provide public goods.  Consider, for example, this passage from a law school textbook by several renowned economists:

It is apparent that public goods will not be adequately supplied by the private sector. The reason is plain: because people can’t be excluded from using public goods, they can’t be charged money for using them, so a private supplier can’t make money from providing them. … Because public goods are generally not adequately supplied by the private sector, they have to be supplied by the public sector.

[Howell E. Jackson, Louis Kaplow, Steven Shavell, W. Kip Viscusi, & David Cope, Analytical Methods for Lawyers 362-63 (2003) (emphasis added).]

That last claim seems demonstrably false.

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesizes a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest for the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would entitle the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of the introduction of this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change, as compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the merging parties’ innovation incentives in relation to specific current or future products. The application of the SIII theory, by contrast, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole; and (ii) without reference to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is way more intrusive, however, because R&D incentives are considered in the abstract, without further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

Accordingly, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rest on sound theoretical and empirical foundations. Yet, despite the efforts of the most celebrated Nobel laureate economists of recent decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and screen initial intuitions of adverse effects of mergers on innovation through the lenses of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives such as R&D figures and patent statistics as a preliminary screening tool for the assessment of the effects of the merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised or are equal in terms of commercial value.

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in those markets that are being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, and on pain of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not discount the maturity of patents and, in particular, do not say much about whether the patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may only be one amongst many other factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Finally, antitrust agencies are well placed to understand that beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies that are pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policymaking, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title says it all: “The Pretense of Knowledge.”

On Debating Imaginary Felds

Gus Hurwitz —  18 September 2013

Harold Feld, in response to a recent Washington Post interview with AEI’s Jeff Eisenach about AEI’s new Center for Internet, Communications, and Technology Policy, accused “neo-conservative economists (or, as [Feld] might generalize, the ‘Right’)” of having “stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.”

(Full disclosure: The Center for Internet, Communications, and Technology Policy includes TechPolicyDaily.com, to which I am a contributor.)

Perhaps to the surprise of many, I’m going to agree with Feld. But in so doing, I’m going to expand upon his point: The problem with anti-economics social activists (or, as we might generalize, the ‘Left’)[*] is that they have stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.

I don’t mean this to be snarky. Rather, it is a very real problem throughout modern political discourse, and one that we participants in telecom and media debates frequently contribute to. One of the reasons that I love – and sometimes hate – researching and teaching in this area is that fundamental tensions between government and market regulation lie at its core. These tensions present challenging and engaging questions, making work in this field exciting, but are sometimes intractable and often evoke passion instead of analysis, making work in this field seem Sisyphean.

One of these tensions is how to secure for consumers those things which the market does not (appear to) do a good job of providing. For instance, those of us on both the left and right are almost universally agreed that universal service is a desirable goal. The question – for both sides – is how to provide it. Feld reminds us that “real world economics is painfully complicated.” I would respond to him that “real world regulation is painfully complicated.”

I would point at Feld, while jumping up and down shouting “J’accuse! Nirvana Fallacy!” – but I’m certain that Feld is aware of this fallacy, just as I hope he’s aware that those of us who have spent much of our lives studying economics are bitterly aware that economics and markets are complicated things. Indeed, I think those of us who study economics are even more aware of this than is Feld – it is, after all, one of our mantras that “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” This mantra is particularly apt in telecommunications, where one of the most consistent and important lessons of the past century has been that the market tends to outperform regulation.

This isn’t because the market is perfect; it’s because regulation is less perfect. Geoff recently posted a salient excerpt from Tom Hazlett’s 1997 Reason interview of Ronald Coase, in which Coase recounted that “When I was editor of The Journal of Law and Economics, we published a whole series of studies of regulation and its effects. Almost all the studies – perhaps all the studies – suggested that the results of regulation had been bad, that the prices were higher, that the product was worse adapted to the needs of consumers, than it otherwise would have been.”

I don’t want to get into a tit-for-tat over individual points that Feld makes. But I will look at one as an example: his citation to The Market for Lemons. This is a classic paper, in which Akerlof shows that information asymmetries can cause rational markets to unravel. But does it, as Feld says, show “market failure in the presence of robust competition?” That is a hotly debated point in the economics literature. One view – the dominant view, I believe – is that it does not. See, e.g., the EconLib discussion (“Akerlof did not conclude that the lemon problem necessarily implies a role for government”). Rather, the market has responded through the formation of firms that service and certify used cars, document car maintenance, repairs, and accidents, offer warranties on cars, and suffer reputational harms for selling lemons. Of course, folks argue, and have long argued, both sides. As Feld says, economics is painfully complicated – it’s a shame he draws a simple and reductionist conclusion from one of the seminal articles in modern economics, and a further shame he uses that conclusion to buttress his policy position. J’accuse!

I hope that this is in no way taken as an attack on Feld – and I wish his piece were less of an attack on Jeff. Fundamentally, he raises a very important point: there is a real disconnect between the arguments used by the “left” and “right” and how those arguments are understood by the other side. Indeed, some of my current work explores this very disconnect and how it affects telecom debates. I’m really quite thankful to Feld for highlighting his concern that at least one side is blind to the views of the other – I hope that he’ll be receptive to the idea that his side is subject to the same criticism.

[*] I do want to respond specifically to what I think is an important confusion in Feld’s piece, which motivated my admittedly snarky labelling of the “left.” I think that he means “neoclassical economics,” not “neo-conservative economics” (which he goes on to dub “Neocon economics”). Neoconservatism is a political and intellectual movement, focused primarily on US foreign policy – it is rarely thought of as a particular branch of economics. To the extent that it does hold to a view of economics, it is actually somewhat skeptical of free markets, especially of their lack of moral grounding and their propensity to forgo traditional values in favor of short-run, hedonistic gains.

In Part One, I addressed the argument by some libertarians that so-called “traditional property rights in land” are based in inductive, ground-up “common law court decisions,” but that intellectual property (IP) rights are top-down, artificial statutory entitlements. Thus, for instance, the libertarian law professor Tom Bell has written in the University of Illinois Journal of Law, Technology & Policy: “With regard to our tangible rights to person and property, they’re customary and based in common law. Where do the copyrights and patents come from? From the legislative process.” 2006 Univ. Ill. J. L. Tech. & Pol’y 92, 110 (sorry, no link).

I like Tom, but, as I detailed in Part One, he’s just wrong in his contrast here between the “customary” “common law” court decisions creating property versus the “legislative process” creating IP rights. This is myth masquerading as history. As all first-year property students learn each year, the foundation of Anglo-American property law is based in a statute, and many property rights in land were created by statutes enacted by Parliament or early American state legislatures. In fact, the first statute — the Statute Quia Emptores of 1290 — was enacted by Parliament to overrule feudal “custom” enforced by the “common law” decisions at that time, creating by statutory fiat the foundational rule of Anglo-American property law that property rights are alienable.

As an aside, Geoff Manne asked an excellent question in the comments to Part One: Who cares? My response is that in part it’s important to call out the use of a descriptive historical claim to bootstrap a normative argument. The question is not who cares, but rather the question is why does Tom, Jerry Brito and other libertarians care so much about creating this historical myth, and repeatedly asserting it in their writings and in their presentations? The reason is because this triggers a normative context for many libertarians steeped in Hayek’s theories about the virtues of disaggregated decision-making given dispersed or localized knowledge, as contrasted with the vices of centralized, top-down planning. Thus, by expressly contrasting as an alleged historical fact that property arises from “customary” “common law” court decisions versus the top-down “legislative processes” creating IP, this provides normative traction against IP rights without having to do the heavy lifting of actually proving this as a normative conclusion. Such is the rhetorical value of historical myths generally — they provide normative framings in the guise of a neutral, objective statement of historical fact — and this is why they are a common feature of policy debates, especially in patent law.

What’s even more interesting is that this is not just a historical myth about the source of property rights in land, which were created by both statutes and court decisions, but it’s also an historical myth about IP rights, which are also created by both statutes and court decisions. The institutional and doctrinal interplay between Parliament’s statutes and the application and extension of these statutes by English courts in creating and enforcing property rights in land was repeated in the creation and extension of the modern Anglo-American IP system.  Who would have thunk?

Although there are lots of historical nuances to the actual legal developments, a blog posting is ideal to point out the general institutional and systemic development that occurred with IP rights. It’s often remarked, for instance, that the birth of Anglo-American patent law is in Parliament’s Statute of Monopolies (1624).  Although it’s true (at least in a generalized sense), the actual development of modern patent law — the legal regime that secures a property right in a novel and useful invention — occurred entirely at the hands of the English common law courts in the eighteenth century, who (re)interpreted this statute and extended it far beyond its original text.  (I have extensively detailed this historical development here.)  Albeit with some differences, a similar institutional pattern occurred with Parliament enacting the first modern copyright statute in 1709, the Statute of Anne, which was then interpreted, applied and extended by the English common law courts.

This institutional and doctrinal pattern repeated itself in America. From the very first enactment of copyright and patent statutes by the states under the Articles of Confederation, and then by Congress enacting the first federal patent and copyright statutes in 1790, courts interpreted, applied and extended these statutes in common law fashion. In fact, it is a cliché in patent law that many patent doctrines today were created, not by Congress, but by two judges — Justice Joseph Story and Judge Learned Hand. The famous patent law historian Frank Prager writes that it is “often said that Story was one of the architects of American patent law.” An entire book has been published collecting Judge Learned Hand’s patent decisions. That’s how important these two judges have been in creating patent law doctrines.

So, the pattern has been that Congress passes broadly framed statutes, and the federal courts then create doctrines within these statutory frameworks.  In patent law, for instance, courts created the exhaustion doctrine, secondary liability, the experimental use defense, the infringement doctrine of equivalents, and many others.  Beyond this “common law” creation of patent doctrines, courts have further created and defined the actual requirements set forth in the patent statutes for utility, written description, enablement, etc., creating legal phrases and tests that one would search in vain for in the text of the actual patent statutes. Interestingly, Congress sometimes has subsequently codified these judicially created doctrines and sometimes it has left them alone.  Sometimes, Congress even repeals the judicially created tests, as it did in expressly abrogating the judicially created “flash of genius” test in § 103 of the 1952 Patent Act.  All of this goes to show that, just as it’s wrong to say that property rights in land are based solely in custom and common law court decision, it’s equally wrong to say that IP rights are based solely in legislation.

Admittedly, the modern copyright statutes are far more specific and complex than the patent statutes, at least before Congress passed the America Invents Act of 2011 (AIA). In comparison to the pre-AIA patent statutes, the copyright statutes appear to be excessively complicated, with industry- and work-specific regimes, such as licensing for cable (§ 111), licensing for satellite transmissions (§ 119), exemptions from liability for libraries (§ 108), and compulsory licensing of “phonorecords” (§ 115), among others. These and other provisions have been cobbled together by repeated amendments and other statutory enactments over the past century or so. This stands in stark contrast to the invention- and industry-neutral provisions that comprised much of the pre-AIA patent statutes.

So, this is a valid point of differentiation between patents and copyrights, at least as these respective IP rights have developed in the twentieth century.  And there’s certainly a valid argument that complexity in the copyright statutes arising from such attempts to legislate for very specific works and industries increases uncertainties, which in turn unnecessarily increases administration and other transaction costs in the operation of the legal system.

Yet, it bears emphasizing again that, before there arose heavy emphasis on legislation in copyright law, many primary copyright doctrines were in fact first created by courts.  This includes, for instance, fair use and exhaustion doctrines, which were later codified by Congress. Moreover, some very important copyright doctrines remain entirely in the domain of the courts, such as secondary liability. 

The judicially created doctrine of secondary liability in copyright is perhaps the most ironic, if only because it is the use of this doctrine on the Internet against P2P services, like Napster, Aimster, Grokster, and BitTorrent operators, that sends many libertarian IP skeptics and copyleft advocates into paroxysms of outrage about how rent-seeking owners of statutory entitlements are “forcing” companies out of business, shutting down technology and violating the right to liberty on the Internet. But secondary liability is a “customary” “common law” doctrine that developed out of similarly traditional “customary” doctrines in tort law, as further extended by courts to patent and copyright!

As with the historical myth about the origins of property rights in land, the actual facts about the source and nature of IP rights belie the claims by some libertarians that IP rights are congressional “welfare grants” or congressional subsidies for crony corporations. IP rights have developed in the same way as property rights in land, with both legislatures and courts creating, repealing, and extending doctrines in an important institutional and doctrinal evolution of these property rights, which secure technological innovation and creative works.

As I said in Part One, I enjoy a good policy argument about the value of securing property rights in patented innovation or copyrighted works.  I often discuss on panels and in debates how IP rights make possible the private-ordering mechanisms necessary to convert inventions and creative works into real-world innovation and creative products sold to consumers in the marketplace. Economically speaking, as Henry Manne pointed out in a comment to Part One, defining a property right in an asset is what makes possible value-maximizing transactions, and, I would add, morally speaking, it is what secures to the creator of that asset the right to the fruits of his or her productive labors. Thus, I would be happy to debate Tom Bell, Jerry Brito or any other similarly-minded libertarian on these issues in innovation policy, but before we can do so, we must first agree to abandon historical myths and base our normative arguments on actual facts.

My paper with Judge Douglas H. Ginsburg (D.C. Circuit; NYU Law), Behavioral Law & Economics: Its Origins, Fatal Flaws, and Implications for Liberty, is posted to SSRN and now published in the Northwestern Law Review.

Here is the abstract:

Behavioral economics combines economics and psychology to produce a body of evidence that individual choice behavior departs from that predicted by neoclassical economics in a number of decision-making situations. Emerging close on the heels of behavioral economics over the past thirty years has been the “behavioral law and economics” movement and its philosophical foundation — so-called “libertarian paternalism.” Even the least paternalistic version of behavioral law and economics makes two central claims about government regulation of seemingly irrational behavior: (1) the behavioral regulatory approach, by manipulating the way in which choices are framed for consumers, will increase welfare as measured by each individual’s own preferences and (2) a central planner can and will implement the behavioral law and economics policy program in a manner that respects liberty and does not limit the choices available to individuals. This Article draws attention to the second and less scrutinized of the behaviorists’ claims, viz., that behavioral law and economics poses no significant threat to liberty and individual autonomy. The behaviorists’ libertarian claims fail on their own terms. So long as behavioral law and economics continues to ignore the value to economic welfare and individual liberty of leaving individuals the freedom to choose and hence to err in making important decisions, “libertarian paternalism” will not only fail to fulfill its promise of increasing welfare while doing no harm to liberty, it will pose a significant risk of reducing both.

Download here.

 

In light of yesterday’s abysmal jobs report, Stanford economist John B. Taylor’s Wall Street Journal op-ed from the same day (Rules for America’s Road to Recovery) is a must-read.  Taylor begins by identifying what he believes is the key hindrance to economic recovery in the U.S.:

In my view, unpredictable economic policy—massive fiscal “stimulus” and ballooning debt, the Federal Reserve’s quantitative easing with multiyear near-zero interest rates, and regulatory uncertainty due to Obamacare and the Dodd-Frank financial reforms—is the main cause of persistent high unemployment and our feeble recovery from the recession.

A reform strategy built on more predictable, rules-based fiscal, monetary and regulatory policies will help restore economic prosperity.

Taylor goes on (as have I) to exhort policy makers to study F.A. Hayek, who emphasized the importance of clear rules in a free society.  Hayek explained:

Stripped of all technicalities, [the Rule of Law] means that government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to foresee with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.

Taylor observes that “[r]ules-based policies make the economy work better by providing a predictable policy framework within which consumers and businesses make decisions.”  But that’s not all: “they also protect freedom.”  Thus, “Hayek understood that a rules-based system has a dual purpose—freedom and prosperity.”

We are in a period of unprecedented regulatory uncertainty.  Consider Dodd-Frank.  That statute calls for 398 rulemakings by federal agencies.  Law firm Davis Polk reports that as of June 1, 2012, 221 rulemaking deadlines have expired.  Of those 221 passed deadlines, 73 (33%) have been met with finalized rules, and 148 (67%) have been missed.  The uncertainty, it seems, is far from over.

Taylor’s Hayek-inspired counsel mirrors that offered by President Reagan’s economic team at the beginning of his presidency, a time of economic malaise similar to that we’re currently experiencing.  In a 1980 memo reprinted in last weekend’s Wall Street Journal, Reagan’s advisers offered the following advice:

…The need for a long-term point of view is essential to allow for the time, the coherence, and the predictability so necessary for success. This long-term view is as important for day-to-day problem solving as for the making of large policy decisions. Most decisions in government are made in the process of responding to problems of the moment. The danger is that this daily fire fighting can lead the policy-maker farther and farther from his goals. A clear sense of guiding strategy makes it possible to move in the desired direction in the unending process of contending with issues of the day. Many failures of government can be traced to an attempt to solve problems piecemeal. The resulting patchwork of ad hoc solutions often makes such fundamental goals as military strength, price stability, and economic growth more difficult to achieve. …

Consistency in policy is critical to effectiveness. Individuals and business enterprises plan on a long-range basis. They need to have an environment in which they can conduct their affairs with confidence. …

With these fundamentals in place, the American people will respond. As the conviction grows that the policies will be sustained in a consistent manner over an extended period, the response will quicken.

If you haven’t done so, read both pieces (Taylor’s op-ed and the Reagan memo) in their entirety.

New York Times columnist Gretchen Morgenson is arguing for a “pre-clearance” approach to regulating new financial products:

The Food and Drug Administration vets new drugs before they reach the market. But imagine if there were a Wall Street version of the F.D.A. — an agency that examined new financial instruments and ensured that they were safe and benefited society, not just bankers.  How different our economy might look today, given the damage done by complex instruments during the financial crisis.

The idea Morgenson is advocating was set forth by law professor Eric Posner (one of my former profs) and economist E. Glen Weyl in this paper.  According to Morgenson,

[Posner and Weyl] contend that new instruments should be approved by a “financial products agency” that would test them for social utility. Ideally, products deemed too costly to society over all — those that serve only to increase speculation, for example — would be rejected, the two professors say.

While I have not yet read the paper, I have some concerns about the proposal, at least as described by Morgenson.

First, there’s the knowledge problem.  Even if we assume that agents of a new “Financial Products Administration” (FPA) would be completely “other-regarding” (altruistic) in performing their duties, how are they to know whether a proposed financial instrument is, on balance, beneficial or detrimental to society?  Morgenson suggests that “financial instruments could be judged by whether they help people hedge risks — which is generally beneficial — or whether they simply allow gambling, which can be costly.”  But it’s certainly not the case that speculative (“gambling”) investments produce no social value.  They generate a tremendous amount of information because they reflect the expectations of hundreds, thousands, or millions of investors who are placing bets with their own money.  Even the much-maligned credit default swaps, instruments Morgenson and the paper authors suggest “have added little to society,” provide a great deal of information about the creditworthiness of insureds.  How is a regulator in the FPA to know whether the benefits a particular financial instrument creates justify its risks? 

When regulators have engaged in merits review of investment instruments — something the federal securities laws generally eschew — they’ve often screwed up.  State securities regulators in Massachusetts, for example, once banned sales of Apple’s IPO shares, claiming that the stock was priced too high.  Oops.

In addition to the knowledge problem, the proposed FPA would be subject to the same institutional maladies as its model, the FDA.  The fact is, individuals do not cease to be rational, self-interest maximizers when they step into the public arena.  Like their counterparts in the FDA, FPA officials will take into account the personal consequences of their decisions to grant or withhold approvals of new products.  They will know that if they approve a financial product that injures some investors, they’ll likely be blamed in the press, hauled before Congress, etc.  By contrast, if they withhold approval of a financial product that would be, on balance, socially beneficial, their improvident decision will attract little attention.  In short, they will share with their counterparts in the FDA a bias toward disapproval of novel products.

In highlighting these two concerns, I’m emphasizing a point I’ve made repeatedly on TOTM:  A defect in private ordering is not a sufficient condition for a regulatory fix.  One must always ask whether the proposed regulatory regime will actually leave the world a better place.  As the Austrians taught us, we can’t assume the regulators will have the information (and information-processing abilities) required to improve upon private ordering.  As Public Choice theorists taught us, we can’t assume that even perfectly informed (but still self-interested) regulators will make socially optimal decisions.  In light of Austrian and Public Choice insights, the Posner & Weyl proposal — at least as described by Morgenson — strikes me as problematic.  [An additional concern is that the proposed pre-clearance regime might just send financial activity offshore.  To their credit, the authors acknowledge and address that concern.]

Obama’s Fatal Conceit

Thom Lambert —  21 September 2011

From the beginning of his presidency, I’ve wanted President Obama to succeed.  He was my professor in law school, and while I frequently disagreed with his take on things, I liked him very much. 

On the eve of his inauguration, I wrote on TOTM that I hoped he would spend some time meditating on Hayek’s The Use of Knowledge in Society.  That article explains that the information required to allocate resources to their highest and best ends, and thereby maximize social welfare, is never given to any one mind but is instead dispersed widely to a great many “men on the spot.”  I worried that combining Mr. Obama’s native intelligence with the celebrity status he attained during the presidential campaign would create the sort of “unwise” leader described in Plato’s Apology:

I thought that he appeared wise to many people and especially to himself, but he was not. I then tried to show him that he thought himself wise, but that he was not. As a result, he came to dislike me, and so did many of the bystanders. So I withdrew and thought to myself: “I am wiser than this man; it is likely that neither of us knows anything worthwhile, but he thinks he knows something when he does not, whereas when I do not know, neither do I think I know; so I am likely to be wiser than he to this small extent, that I do not think I know what I do not know.”

I have now become convinced that President Obama’s biggest problem is that he believes — wrongly — that he (or his people) know better how to allocate resources than do the many millions of “men and women on the spot.”  This is the thing that keeps our very smart President from being a wise President.  It is killing economic expansion in this country, and it may well render him a one-term President.  It is, quite literally, a fatal conceit.

Put aside for a minute the first stimulus, the central planning in the health care legislation and Dodd-Frank, and the many recent instances of industrial policy (e.g., Solyndra).  Focus instead on just the latest proposal from our President.  He is insisting that Congress pass legislation (“Pass this bill!”) that directs a half-trillion dollars to ends he deems most valuable (e.g., employment of public school teachers and first responders, municipal infrastructure projects).  And he proposes to take those dollars from wealthier Americans by, among other things, limiting deductions for charitable giving, taxing interest on municipal bonds, and raising tax rates on investment income (via the “Buffett rule”).

Do you see what’s happening here?  The President is proposing to penalize private investment (where the investors themselves decide which projects deserve their money) in order to fund government investment.  He proposes to penalize charitable giving (where the givers themselves get to choose their beneficiaries) in order to fund government outlays to the needy.  He calls for impairing municipalities’ funding advantage (which permits them to raise money cheaply to fund the projects they deem most worthy) in order to fund municipal projects that the federal government deems worthy of funding.  (More on that here — and note that I agree with Golub that we should ditch the deduction for muni bond interest as part of a broader tax reform.)

In short, the President has wholly disregarded Hayek’s central point:  He believes that he and his people know better than the men and women on the spot how to allocate productive resources.  That conceit renders a very smart man very unwise.  Solyndra, I fear, is just the beginning.