Archives For Hayek

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesies a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would permit the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of introducing this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole; and (ii) not in relation to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is far more intrusive, however, because R&D incentives are considered in the abstract, without any further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

With this in mind, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rely on sound theoretical and empirical infrastructure. Yet, despite efforts by the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (which I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and to screen initial intuitions of adverse merger effects on innovation through the lens of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives, such as R&D figures and patent statistics, as a preliminary screening tool for assessing the effects of a merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into commercialised products, and not all are equal in commercial value.
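To see why R&D intensity is such a crude proxy, consider a minimal numerical sketch (the figures below are hypothetical). Because intensity is conventionally measured as the ratio of R&D expenditure to sales, a fall in the denominator (say, from the commodity-price declines noted above) raises measured intensity even though the R&D budget, and hence the underlying innovation effort, is unchanged.

```python
# A minimal sketch with hypothetical figures: R&D intensity is a ratio,
# so it can rise without any change in underlying innovation effort.

def rd_intensity(rd_spend: float, revenue: float) -> float:
    """R&D intensity, conventionally R&D expenditure divided by sales."""
    return rd_spend / revenue

# Year 1: stable commodity prices.
print(rd_intensity(rd_spend=1.0, revenue=10.0))  # 0.1   -> 10% intensity

# Year 2: identical R&D budget, but revenue falls with commodity prices.
print(rd_intensity(rd_spend=1.0, revenue=8.0))   # 0.125 -> 12.5% intensity
```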

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in the markets being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not discount for the maturity of patents and, in particular, say little about whether a patent is far from or close to its expiry date.
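As a purely illustrative sketch (not a method advanced in the paper), one could discount each patent by the fraction of its term still to run. With the hypothetical ages below, a raw count of five patents shrinks to a weighted count of about 1.8 once near-expiry patents are discounted.

```python
# A hypothetical sketch: a maturity-weighted patent count, assuming a
# standard 20-year patent term. A raw count treats a patent filed 19
# years ago the same as one filed two years ago; this weighting does not.
PATENT_TERM_YEARS = 20.0

def remaining_life_weight(age_years: float) -> float:
    """Fraction of the patent term still to run (zero once expired)."""
    return max(0.0, (PATENT_TERM_YEARS - age_years) / PATENT_TERM_YEARS)

portfolio_ages = [2, 5, 19, 19, 19]  # hypothetical patent ages, in years
raw_count = len(portfolio_ages)
weighted_count = sum(remaining_life_weight(a) for a in portfolio_ages)
print(raw_count, round(weighted_count, 2))  # prints: 5 1.8
```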

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement the empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may be only one among many factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely, that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture the fact that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and on market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Last, antitrust agencies are well placed to understand that, beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title says it all: “The Pretense of Knowledge.”

On Debating Imaginary Felds

Gus Hurwitz —  18 September 2013

Harold Feld, in response to a recent Washington Post interview with AEI’s Jeff Eisenach about AEI’s new Center for Internet, Communications, and Technology Policy, accused “neo-conservative economists (or, as [Feld] might generalize, the ‘Right’)” of having “stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.”

(Full disclosure: The Center for Internet, Communications, and Technology Policy includes TechPolicyDaily.com, to which I am a contributor.)

Perhaps to the surprise of many, I’m going to agree with Feld. But in so doing, I’m going to expand upon his point: The problem with anti-economics social activists (or, as we might generalize, the ‘Left’)[*] is that they have stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.

I don’t mean this to be snarky. Rather, it is a very real problem throughout modern political discourse, and one that we participants in telecom and media debates frequently contribute to. One of the reasons that I love – and sometimes hate – researching and teaching in this area is that fundamental tensions between government and market regulation lie at its core. These tensions present challenging and engaging questions, making work in this field exciting, but are sometimes intractable and often evoke passion instead of analysis, making work in this field seem Sisyphean.

One of these tensions is how to secure for consumers those things which the market does not (appear to) do a good job of providing. For instance, those of us on both the left and right are almost universally agreed that universal service is a desirable goal. The question – for both sides – is how to provide it. Feld reminds us that “real world economics is painfully complicated.” I would respond to him that “real world regulation is painfully complicated.”

I would point at Feld, while jumping up and down shouting “J’accuse! Nirvana Fallacy!” – but I’m certain that Feld is aware of this fallacy, just as I hope he’s aware that those of us who have spent much of our lives studying economics are bitterly aware that economics and markets are complicated things. Indeed, I think those of us who study economics are even more aware of this than is Feld – it is, after all, one of our mantras that “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” This mantra is particularly apt in telecommunications, where one of the most consistent and important lessons of the past century has been that the market tends to outperform regulation.

This isn’t because the market is perfect; it’s because regulation is less perfect. Geoff recently posted a salient excerpt from Tom Hazlett’s 1997 Reason interview of Ronald Coase, in which Coase recounted that “When I was editor of The Journal of Law and Economics, we published a whole series of studies of regulation and its effects. Almost all the studies – perhaps all the studies – suggested that the results of regulation had been bad, that the prices were higher, that the product was worse adapted to the needs of consumers, than it otherwise would have been.”

I don’t want to get into a tit-for-tat over individual points that Feld makes. But I will look at one as an example: his citation to The Market for Lemons. This is a classic paper, in which Akerlof shows that information asymmetries can cause rational markets to unravel. But does it, as Feld says, show “market failure in the presence of robust competition”? That is a hotly debated point in the economics literature. One view – the dominant view, I believe – is that it does not. See, e.g., the EconLib discussion (“Akerlof did not conclude that the lemon problem necessarily implies a role for government”). Rather, the market has responded through the formation of firms that service and certify used cars, document car maintenance, repairs and accidents, warranty cars, and suffer reputational harms for selling lemons. Of course, folks argue, and have long argued, both sides. As Feld says, economics is painfully complicated – it’s a shame he draws a simple and reductionist conclusion from one of the seminal articles in modern economics, and a further shame he uses that conclusion to buttress his policy position. J’accuse!

I hope that this is in no way taken as an attack on Feld – and I wish his piece were less of an attack on Jeff. Fundamentally, he raises a very important point: there is a real disconnect between the arguments used by the “left” and the “right” and how those arguments are understood by the other side. Indeed, some of my current work explores this very disconnect and how it affects telecom debates. I’m really quite thankful to Feld for highlighting his concern that at least one side is blind to the views of the other – I hope that he’ll be receptive to the idea that his side is subject to the same criticism.

[*] I do want to respond specifically to what I think is an important confusion in Feld’s piece, which motivated my admittedly snarky labelling of the “left.” I think that he means “neoclassical economics,” not “neo-conservative economics” (which he goes on to dub “Neocon economics”). Neoconservatism is a political and intellectual movement, focused primarily on US foreign policy – it is rarely thought of as a particular branch of economics. To the extent that it does hold a view of economics, it is actually somewhat skeptical of free markets, especially of their lack of moral grounding and their propensity to forgo traditional values in favor of short-run, hedonistic gains.

In Part One, I addressed the argument by some libertarians that so-called “traditional property rights in land” are based in inductive, ground-up “common law court decisions,” but that intellectual property (IP) rights are top-down, artificial statutory entitlements.  Thus, for instance, libertarian law professor Tom Bell has written in the University of Illinois Journal of Law, Technology & Policy: “With regard to our tangible rights to person and property, they’re customary and based in common law. Where do the copyrights and patents come from? From the legislative process.” 2006 Univ. Ill. J. L. Tech. & Pol’y 92, 110 (sorry, no link).

I like Tom, but, as I detailed in Part One, he’s just wrong in the contrast he draws between the “customary” “common law” court decisions creating property and the “legislative process” creating IP rights. This is myth masquerading as history. As all first-year property students learn, the foundation of Anglo-American property law lies in a statute, and many property rights in land were created by statutes enacted by Parliament or by early American state legislatures.  In fact, the first such statute — the Statute Quia Emptores of 1290 — was enacted by Parliament to overrule feudal “custom” enforced by the “common law” decisions of that time, creating by statutory fiat the basic foundational rule of Anglo-American property law that property rights are alienable.

As an aside, Geoff Manne asked an excellent question in the comments to Part One: Who cares? My response is that, in part, it’s important to call out the use of a descriptive historical claim to bootstrap a normative argument. The question is not who cares, but rather why Tom, Jerry Brito, and other libertarians care so much about creating this historical myth and repeatedly asserting it in their writings and presentations. The reason is that it triggers a normative context for many libertarians steeped in Hayek’s theories about the virtues of disaggregated decision-making given dispersed or localized knowledge, as contrasted with the vices of centralized, top-down planning. Thus, by expressly contrasting, as an alleged historical fact, property arising from “customary” “common law” court decisions with the top-down “legislative processes” creating IP, they gain normative traction against IP rights without having to do the heavy lifting of actually proving the normative conclusion. Such is the rhetorical value of historical myths generally — they provide normative framings in the guise of a neutral, objective statement of historical fact — and this is why they are a common feature of policy debates, especially in patent law.

What’s even more interesting is that this is not just a historical myth about the source of property rights in land, which were created by both statutes and court decisions, but it’s also an historical myth about IP rights, which are also created by both statutes and court decisions. The institutional and doctrinal interplay between Parliament’s statutes and the application and extension of these statutes by English courts in creating and enforcing property rights in land was repeated in the creation and extension of the modern Anglo-American IP system.  Who would have thunk?

Although there are lots of historical nuances to the actual legal developments, a blog posting is ideal for pointing out the general institutional and systemic development that occurred with IP rights. It’s often remarked, for instance, that the birth of Anglo-American patent law lies in Parliament’s Statute of Monopolies (1624).  Although this is true (at least in a generalized sense), the actual development of modern patent law — the legal regime that secures a property right in a novel and useful invention — occurred entirely at the hands of the English common law courts in the eighteenth century, which (re)interpreted this statute and extended it far beyond its original text.  (I have extensively detailed this historical development here.)  Albeit with some differences, a similar institutional pattern occurred with copyright: Parliament enacted the first modern copyright statute in 1709, the Statute of Anne, which was then interpreted, applied and extended by the English common law courts.

This institutional and doctrinal pattern repeated itself in America. From the first enactments of copyright and patent statutes by the states under the Articles of Confederation, through Congress’s enactment of the first federal patent and copyright statutes in 1790, courts have interpreted, applied and extended these statutes in common law fashion.  In fact, it is a cliché in patent law that many patent doctrines today were created, not by Congress, but by two judges — Justice Joseph Story and Judge Learned Hand.  The famous patent law historian Frank Prager writes that it is “often said that Story was one of the architects of American patent law.”  There’s an entire book published of Judge Learned Hand’s decisions in patent law. That’s how important these two judges have been in creating patent law doctrines.

So, the pattern has been that Congress passes broadly framed statutes, and the federal courts then create doctrines within these statutory frameworks.  In patent law, for instance, courts created the exhaustion doctrine, secondary liability, the experimental use defense, the doctrine of equivalents in infringement, and many others.  Beyond this “common law” creation of patent doctrines, courts have further defined the requirements set forth in the patent statutes for utility, written description, enablement, etc., creating legal phrases and tests that one would search for in vain in the text of the actual patent statutes. Interestingly, Congress has sometimes subsequently codified these judicially created doctrines and sometimes left them alone.  Sometimes Congress has even repealed the judicially created tests, as it did in expressly abrogating the judicially created “flash of genius” test in § 103 of the 1952 Patent Act.  All of this goes to show that, just as it’s wrong to say that property rights in land are based solely in custom and common law court decisions, it’s equally wrong to say that IP rights are based solely in legislation.

Admittedly, the modern copyright statutes are far more specific and complex than the patent statutes, at least as the patent statutes stood before Congress passed the America Invents Act of 2011 (AIA).  In comparison to the pre-AIA patent statutes, the copyright statutes appear excessively complicated, with industry- and work-specific regimes, such as licensing for cable (§ 111), licensing for satellite transmissions (§ 119), exemptions from liability for libraries (§ 108), and licensing of “phonorecords” (§ 115), among others.  These and other provisions have been cobbled together through repeated amendments and other statutory enactments over the past century or so.  This stands in stark contrast to the invention- and industry-neutral provisions that comprised much of the pre-AIA patent statutes.

So, this is a valid point of differentiation between patents and copyrights, at least as these respective IP rights have developed in the twentieth century.  And there’s certainly a valid argument that complexity in the copyright statutes arising from such attempts to legislate for very specific works and industries increases uncertainties, which in turn unnecessarily increases administration and other transaction costs in the operation of the legal system.

Yet, it bears emphasizing again that, before copyright law came to rely so heavily on legislation, many primary copyright doctrines were in fact first created by courts.  These include, for instance, the fair use and exhaustion doctrines, which were later codified by Congress. Moreover, some very important copyright doctrines, such as secondary liability, remain entirely in the domain of the courts.

The judicially created doctrine of secondary liability in copyright is perhaps the most ironic, if only because it is the use of this doctrine on the Internet against P2P services, like Napster, Aimster, Grokster, and BitTorrent operators, that sends many libertarian IP skeptics and copyleft advocates into paroxysms of outrage about how rent-seeking owners of statutory entitlements are “forcing” companies out of business, shutting down technology and violating the right to liberty on the Internet. But secondary liability is a “customary” “common law” doctrine that developed out of similarly traditional “customary” doctrines in tort law, as further extended by courts to patent and copyright!

As with the historical myth about the origins of property rights in land, the actual facts about the source and nature of IP rights belie the claims by some libertarians that IP rights are congressional “welfare grants” or congressional subsidies for crony corporations. IP rights have developed in the same way as property rights in land, with both legislatures and courts creating, repealing, and extending doctrines in an important institutional and doctrinal evolution of these property rights securing technological innovation and creative works.

As I said in Part One, I enjoy a good policy argument about the value of securing property rights in patented innovation or copyrighted works.  I often discuss on panels and in debates how IP rights make possible the private-ordering mechanisms necessary to convert inventions and creative works into real-world innovation and creative products sold to consumers in the marketplace. Economically speaking, as Henry Manne pointed out in a comment to Part One, defining a property right in an asset is what makes possible value-maximizing transactions, and, I would add, morally speaking, it is what secures to the creator of that asset the right to the fruits of his or her productive labors. Thus, I would be happy to debate Tom Bell, Jerry Brito or any other similarly minded libertarian on these issues in innovation policy, but before we can do so, we must first agree to abandon historical myths and base our normative arguments on actual facts.

My paper with Judge Douglas H. Ginsburg (D.C. Circuit; NYU Law), Behavioral Law & Economics: Its Origins, Fatal Flaws, and Implications for Liberty, is posted to SSRN and now published in the Northwestern Law Review.

Here is the abstract:

Behavioral economics combines economics and psychology to produce a body of evidence that individual choice behavior departs from that predicted by neoclassical economics in a number of decision-making situations. Emerging close on the heels of behavioral economics over the past thirty years has been the “behavioral law and economics” movement and its philosophical foundation — so-called “libertarian paternalism.” Even the least paternalistic version of behavioral law and economics makes two central claims about government regulation of seemingly irrational behavior: (1) the behavioral regulatory approach, by manipulating the way in which choices are framed for consumers, will increase welfare as measured by each individual’s own preferences and (2) a central planner can and will implement the behavioral law and economics policy program in a manner that respects liberty and does not limit the choices available to individuals. This Article draws attention to the second and less scrutinized of the behaviorists’ claims, viz., that behavioral law and economics poses no significant threat to liberty and individual autonomy. The behaviorists’ libertarian claims fail on their own terms. So long as behavioral law and economics continues to ignore the value to economic welfare and individual liberty of leaving individuals the freedom to choose and hence to err in making important decisions, “libertarian paternalism” will not only fail to fulfill its promise of increasing welfare while doing no harm to liberty, it will pose a significant risk of reducing both.

Download here.

 

In light of yesterday’s abysmal jobs report, the Wall Street Journal op-ed published the same day by Stanford economist John B. Taylor (Rules for America’s Road to Recovery) is a must-read.  Taylor begins by identifying what he believes is the key hindrance to economic recovery in the U.S.:

In my view, unpredictable economic policy—massive fiscal “stimulus” and ballooning debt, the Federal Reserve’s quantitative easing with multiyear near-zero interest rates, and regulatory uncertainty due to Obamacare and the Dodd-Frank financial reforms—is the main cause of persistent high unemployment and our feeble recovery from the recession.

A reform strategy built on more predictable, rules-based fiscal, monetary and regulatory policies will help restore economic prosperity.

Taylor goes on (as have I) to exhort policy makers to study F.A. Hayek, who emphasized the importance of clear rules in a free society.  Hayek explained:

Stripped of all technicalities, [the Rule of Law] means that government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to foresee with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.

Taylor observes that “[r]ules-based policies make the economy work better by providing a predictable policy framework within which consumers and businesses make decisions.”  But that’s not all: “they also protect freedom.”  Thus, “Hayek understood that a rules-based system has a dual purpose—freedom and prosperity.”

We are in a period of unprecedented regulatory uncertainty.  Consider Dodd-Frank.  That statute calls for 398 rulemakings by federal agencies.  Law firm Davis Polk reports that as of June 1, 2012, 221 rulemaking deadlines have expired.  Of those 221 passed deadlines, 73 (33%) have been met with finalized rules, and 148 (67%) have been missed.  The uncertainty, it seems, is far from over.

Taylor’s Hayek-inspired counsel mirrors that offered by President Reagan’s economic team at the beginning of his presidency, a time of economic malaise similar to the one we’re currently experiencing.  In a 1980 memo reprinted in last weekend’s Wall Street Journal, Reagan’s advisers offered the following advice:

…The need for a long-term point of view is essential to allow for the time, the coherence, and the predictability so necessary for success. This long-term view is as important for day-to-day problem solving as for the making of large policy decisions. Most decisions in government are made in the process of responding to problems of the moment. The danger is that this daily fire fighting can lead the policy-maker farther and farther from his goals. A clear sense of guiding strategy makes it possible to move in the desired direction in the unending process of contending with issues of the day. Many failures of government can be traced to an attempt to solve problems piecemeal. The resulting patchwork of ad hoc solutions often makes such fundamental goals as military strength, price stability, and economic growth more difficult to achieve. …

Consistency in policy is critical to effectiveness. Individuals and business enterprises plan on a long-range basis. They need to have an environment in which they can conduct their affairs with confidence. …

With these fundamentals in place, the American people will respond. As the conviction grows that the policies will be sustained in a consistent manner over an extended period, the response will quicken.

If you haven’t done so, read both pieces (Taylor’s op-ed and the Reagan memo) in their entirety.

New York Times columnist Gretchen Morgenson is arguing for a “pre-clearance”  approach to regulating new financial products:

The Food and Drug Administration vets new drugs before they reach the market. But imagine if there were a Wall Street version of the F.D.A. — an agency that examined new financial instruments and ensured that they were safe and benefited society, not just bankers.  How different our economy might look today, given the damage done by complex instruments during the financial crisis.

The idea Morgenson is advocating was set forth by law professor Eric Posner (one of my former profs) and economist E. Glen Weyl in this paper.  According to Morgenson,

[Posner and Weyl] contend that new instruments should be approved by a “financial products agency” that would test them for social utility. Ideally, products deemed too costly to society over all — those that serve only to increase speculation, for example — would be rejected, the two professors say.

While I have not yet read the paper, I have some concerns about the proposal, at least as described by Morgenson.

First, there’s the knowledge problem.  Even if we assume that agents of a new “Financial Products Administration” (FPA) would be completely “other-regarding” (altruistic) in performing their duties, how are they to know whether a proposed financial instrument is, on balance, beneficial or detrimental to society?  Morgenson suggests that “financial instruments could be judged by whether they help people hedge risks — which is generally beneficial — or whether they simply allow gambling, which can be costly.”  But it’s certainly not the case that speculative (“gambling”) investments produce no social value.  They generate a tremendous amount of information because they reflect the expectations of hundreds, thousands, or millions of investors who are placing bets with their own money.  Even the much-maligned credit default swaps, instruments Morgenson and the paper authors suggest “have added little to society,” provide a great deal of information about the creditworthiness of insureds.  How is a regulator in the FPA to know whether the benefits a particular financial instrument creates justify its risks? 

When regulators have engaged in merits review of investment instruments — something the federal securities laws generally eschew — they’ve often screwed up.  State securities regulators in Massachusetts, for example, once banned sales of Apple’s IPO shares, claiming that the stock was priced too high.  Oops.

In addition to the knowledge problem, the proposed FPA would be subject to the same institutional maladies as its model, the FDA.  The fact is, individuals do not cease to be rational self-interest maximizers when they step into the public arena.  Like their counterparts in the FDA, FPA officials will take into account the personal consequences of their decisions to grant or withhold approvals of new products.  They will know that if they approve a financial product that injures some investors, they’ll likely be blamed in the press, hauled before Congress, etc.  By contrast, if they withhold approval of a financial product that would be, on balance, socially beneficial, their improvident decision will attract little attention.  In short, they will share with their counterparts in the FDA a bias toward disapproval of novel products.

In highlighting these two concerns, I’m emphasizing a point I’ve made repeatedly on TOTM:  A defect in private ordering is not a sufficient condition for a regulatory fix.  One must always ask whether the proposed regulatory regime will actually leave the world a better place.  As the Austrians taught us, we can’t assume the regulators will have the information (and information-processing abilities) required to improve upon private ordering.  As Public Choice theorists taught us, we can’t assume that even perfectly informed (but still self-interested) regulators will make socially optimal decisions.  In light of Austrian and Public Choice insights, the Posner & Weyl proposal — at least as described by Morgenson — strikes me as problematic.  [An additional concern is that the proposed pre-clearance regime might just send financial activity offshore.  To their credit, the authors acknowledge and address that concern.]

Obama’s Fatal Conceit

Thom Lambert —  21 September 2011

From the beginning of his presidency, I’ve wanted President Obama to succeed.  He was my professor in law school, and while I frequently disagreed with his take on things, I liked him very much. 

On the eve of his inauguration, I wrote on TOTM that I hoped he would spend some time meditating on Hayek’s The Use of Knowledge in Society.  That article explains that the information required to allocate resources to their highest and best ends, and thereby maximize social welfare, is never given to any one mind but is instead dispersed widely to a great many “men on the spot.”  I worried that combining Mr. Obama’s native intelligence with the celebrity status he attained during the presidential campaign would create the sort of “unwise” leader described in Plato’s Apology:

I thought that he appeared wise to many people and especially to himself, but he was not. I then tried to show him that he thought himself wise, but that he was not. As a result, he came to dislike me, and so did many of the bystanders. So I withdrew and thought to myself: “I am wiser than this man; it is likely that neither of us knows anything worthwhile, but he thinks he knows something when he does not, whereas when I do not know, neither do I think I know; so I am likely to be wiser than he to this small extent, that I do not think I know what I do not know.”

I have now become convinced that President Obama’s biggest problem is that he believes — wrongly — that he (or his people) know better how to allocate resources than do the many millions of “men and women on the spot.”  This is the thing that keeps our very smart President from being a wise President.  It is killing economic expansion in this country, and it may well render him a one-term President.  It is, quite literally, a fatal conceit.

Put aside for a minute the first stimulus, the central planning in the health care legislation and Dodd-Frank, and the many recent instances of industrial policy (e.g., Solyndra).  Focus instead on just the latest proposal from our President.  He is insisting that Congress pass legislation (“Pass this bill!”) that directs a half-trillion dollars to ends he deems most valuable (e.g., employment of public school teachers and first responders, municipal infrastructure projects).  And he proposes to take those dollars from wealthier Americans by, among other things, limiting deductions for charitable giving, taxing interest on municipal bonds, and raising tax rates on investment income (via the “Buffett rule”).

Do you see what’s happening here?  The President is proposing to penalize private investment (where the investors themselves decide which projects deserve their money) in order to fund government investment.  He proposes to penalize charitable giving (where the givers themselves get to choose their beneficiaries) in order to fund government outlays to the needy.  He calls for impairing municipalities’ funding advantage (which permits them to raise money cheaply to fund the projects they deem most worthy) in order to fund municipal projects that the federal government deems worthy of funding.  (More on that here — and note that I agree with Golub that we should ditch the deduction for muni bond interest as part of a broader tax reform.)

In short, the President has wholly disregarded Hayek’s central point:  He believes that he and his people know better than the men and women on the spot how to allocate productive resources.  That conceit renders a very smart man very unwise.  Solyndra, I fear, is just the beginning.