Archives For Armen Alchian

Thomson-Reuters has listed its “Citation Laureates,” its predictions for particular scholars winning a Nobel prize sometime in the future (not necessarily this year).  Of particular interest to readers of this blog is that George Mason Law Professor Emeritus Gordon Tullock (long mentioned as a favorite of those predicting the Economics prize on this blog) is now included in the list:

* Douglas Diamond at the Graduate School of Business, University of Chicago, for his analysis of financial intermediation and monitoring.

* Jerry A. Hausman of Massachusetts Institute of Technology and Halbert White of University of California San Diego for their contributions to econometrics.

* Anne Krueger of Johns Hopkins University and Gordon Tullock of George Mason University School of Law, Arlington for their description of rent-seeking behavior and its implications.

Unfortunately, Thomson’s prediction rate has not been altogether impressive (see chart); but then again, predicting Nobel prizes isn’t so easy.

Here’s hoping they get this one on the first try.  Gordon is an incredibly well-deserving candidate.

Of course, Thomson-Reuters listed Armen Alchian & Harold Demsetz as Citation Laureates back in 2008 — and they remain my absolute favorites for the prize (along with fellow UCLA economist Benjamin Klein).

In a thorough and convincing paper, “The FTC’s Proposal for Regulating IP through SSOs Would Replace Private Coordination with Government Hold-Up,” Richard Epstein, Scott Kieff and Dan Spulber assess and then decimate the FTC’s proposal on patent notice and remedies, “The Evolving IP Marketplace: Aligning Patent Notice and Remedies with Competition.”  Note Epstein, Kieff and Spulber:

In its recent report entitled “The Evolving IP Marketplace,” the Federal Trade Commission (FTC) advances a far-reaching regulatory approach (Proposal) whose likely effect would be to distort the operation of the intellectual property (IP) marketplace in ways that will hamper the innovation and commercialization of new technologies. The gist of the FTC Proposal is to rely on highly non-standard and misguided definitions of economic terms of art such as “ex ante” and “hold-up,” while urging new inefficient rules for calculating damages for patent infringement. Stripped of the technicalities, the FTC Proposal would so reduce the costs of infringement by downstream users that the rate of infringement would unduly increase, as potential infringers find it in their interest to abandon the voluntary market in favor of a more attractive system of judicial pricing. As the number of nonmarket transactions increases, the courts will play an ever larger role in deciding the terms on which the patents of one party may be used by another party. The adverse effects of this new trend will do more than reduce the incentives for innovation; it will upset the current set of well-functioning private coordination activities in the IP marketplace that are needed to accomplish the commercialization of new technologies. Such a trend would seriously undermine capital formation, job growth, competition, and the consumer welfare the FTC seeks to promote.

Focusing in particular on SSOs, the trio homes in on the potential incentive problem created by the FTC’s proposal:

The central problem with the FTC’s approach is that it would interfere seriously with the helpful incentives all parties in the IP marketplace presently have to contract with each other. The FTC’s approach ignores the powerful incentives that it creates in putative licensees to spurn the voluntary market in order to obtain a strategic advantage over the licensor. In any voluntary market, the low rates that go to initial licensees reflect the uncertainty of the value of the patented technology at the time the license is issued. Once that technology has proven its worth, there is no sound reason to allow any potential licensee who instead held out from the originally offered deal to get bargain rates down the road. Allowing such an option would make the holdout better off than the contracting party. Such holdouts would not need to take licenses for technologies with low value, while resting assured they would still get technologies with high value at below market rates. The FTC seems to overlook that a well-functioning patent damage system should do more than merely calibrate damages after the fact. An efficient approach to damages is one that also reduces the number of infringements overall by making sure that the infringer cannot improve his economic position by his own wrong.

The FTC Proposal rests on the misguided conviction that the law should not allow a licensor to “demand and obtain royalty payments based on the infringer’s switching costs” once the manufacturer has “sunk costs into using the technology;” and it labels any such payments as the result of “hold-up.”

As Epstein, et al. discuss, current private ordering (reciprocal dealing, repeat play, RAND terms, etc.) works perfectly well to address real hold-up problems, and the FTC seems to be both defining the problem oddly and, thus, creating a problem that doesn’t really exist.

Although not discussed directly, the paper owes a great deal to the great Ben Klein and especially his paper, Why Hold-Ups Occur: The Self-Enforcing Range of Contractual Relationships (to say nothing of Klein, Crawford & Alchian, of course).  Likewise, although not discussed in the paper, Josh and Bruce Kobayashi’s excellent paper, Federalism, Substantive Preemption and Limits on Antitrust: An Application to Patent Holdup is an essential precursor to this paper, addressing the comparative merits of antitrust and contract-based evaluation of claimed patent holdups in SSOs.

Highly-recommended and an important addition to the ever-interesting antitrust/IP discussion.

Daniel Kahneman and co-authors discuss, in the most recent issue of the Harvard Business Review (HT: Brian McCann), various strategies for debiasing individual decisions that impact firm performance.  Much of the advice boils down to more conscious deliberation about decisions, incorporating awareness that individuals can be biased into firm-level decisions, and subjecting decisions to more rigorous cost-benefit analysis.  The authors discuss a handful of examples with executives contemplating this or that decision (a pricing change, a large capital outlay, and a major acquisition) and walk through how the biases of the individuals responsible for these decisions or recommendations might be identified and nipped in the bud before a costly error occurs.

Luckily for our HBR heroes, they are able to catch these potential decision-making errors in time and correct them:

But in the end, Bob, Lisa, and Devesh all did, and averted serious problems as a result. Bob resisted the temptation to implement the price cut his team was clamoring for at the risk of destroying profitability and triggering a price war. Instead, he challenged the team to propose an alternative, and eventually successful, marketing plan. Lisa refused to approve an investment that, as she discovered, aimed to justify and prop up earlier sunk-cost investments in the same business. Her team later proposed an investment in a new technology that would leapfrog the competition. Finally, Devesh signed off on the deal his team was proposing, but not before additional due diligence had uncovered issues that led to a significant reduction in the acquisition price.

The real challenge for executives who want to implement decision quality control is not time or cost. It is the need to build awareness that even highly experienced, superbly competent, and well intentioned managers are fallible. Organizations need to realize that a disciplined decision-making process, not individual genius, is the key to a sound strategy. And they will have to create a culture of open debate in which such processes can flourish.

But what if they didn’t?  Of course, the result would be a costly mistake.  The sanction from the marketplace would provide a significant incentive for firms to act “as-if” rational over time.  As Judd Stone and I have written (forthcoming in the Cardozo Law Review), the firm itself can be expected to play a critical role in this debiasing:

Economic theory provides another reason for skepticism concerning predictable firm irrationality. As Armen Alchian, Ronald Coase, Harold Demsetz, Benjamin Klein, and Oliver Williamson (amongst others) have reiterated for decades, the firm is not merely a heterogeneous hodgepodge of individuals, but an institution constructed to lower transaction costs relative to making use of the price system (the make or buy decision). Firms thereby facilitate specialization, production, and exchange. Firms must react to the full panoply of economic forces and pressures, responding through innovation and competition. To the extent that cognitive biases operate to deprive individuals of the ability to choose rationally, the firm and the market provide effective mechanisms to at least mitigate these biases when they reduce profits.

A critical battleground for behaviorally-based regulatory intervention, including antitrust but not limited to it, is the question of whether agencies and courts on the one hand, or firms on the other, are the least cost avoiders of social costs associated with cognitive bias.  Stone & Wright argue in the antitrust context — contrary to the claims of Commissioner Rosch and other proponents of the behavioral approach — that the claim that individuals are behaviorally biased, and that because firms are made up of individuals, they too must be biased, simply does not provide intellectual support for behavioral regulation.  The most obvious failure is that it lacks the comparative institutional perspective described above.  Most accounts favoring greater implementation of behavioral regulation at the agency level glide over this question.  Not all, of course.

For example, Commissioner Rosch has offered the following response to the “regulators are irrational-too” critique:

My problem with this criticism is that it ignores the fact that, unlike human beings who make decisions in a vacuum, government regulators have the ability to study over time how individuals behave in certain settings (i.e., whether certain default rules provide adequate disclosure to help them make the most informed decision). Thus, if and to the extent that government regulators are mindful of the human failings discussed above, and their rules are preceded by rigorous and objective tests, it is arguable that they are less likely to get things wrong than one would predict. Of course, it may be the case that the concern with behavioral economics is less that regulators are imperfect and more that they are subject to political biases and that behavioral economics is simply liberalism masquerading as economic thinking. My response to that is that political capture is everywhere in Washington and that to the extent behavioral economics supports “hands on” regulation it is no more political than neoclassical economics which generally supports “hands off” regulation. On a more serious note, perhaps the best way behavioral economics could counter this critique over the long run would be to identify ways in which the insights from behavioral economics suggest regulation that one would not expect from a “left-wing” legal theory.

For my money, I find this reply altogether unconvincing.  It amounts to the claim that government agencies can be expected to have a comparative advantage over firms in ameliorating the social costs of errors.  The fact that government regulators might “get things wrong” less often than one might predict is beside the point.  The question is, again, comparing the two relevant institutions: firms in the marketplace and government agencies.  “We’re the government and we’re here to help” isn’t much of an answer to the appropriate question here.  There are further problems with this answer.  As I’ve written in response to the Commissioner’s claims:

But seriously, human beings making decisions “in a vacuum?”  It is individuals and firms who are making decisions insulated from market forces that create profit-motive and other incentives to learn about irrationality and get decisions right — not regulators?   The response to the argument that behavioral economics is simply liberalism masquerading as economic thinking (by the way, the argument is not that, it is that antitrust policy based on behavioral economics has not yet proven to be any more than simply interventionism masquerading as economic thinking — but I quibble) is weak.

As calls for behavioral regulation become more common, as administrative agencies are built upon its teachings, and as even more aggressive claims are made that behavioral law and economics has won an intellectual victory over rational choice approaches, it is critical to keep the right question in mind so that we do not fall victim to the Nirvana Fallacy.  The right comparative institutional question is whether courts and agencies or the market is better suited to mitigate the social costs of errors.   The external discipline imposed by the market in mitigating decision-making errors is well documented in the economic literature.  The claim that such discipline can be replicated, or exceeded, in agencies is an assertion that remains, thus far, in search of empirical support.

Russell Korobkin (UCLA) provocatively declares the ultimate victory of behavioral law and economics over neoclassical economics:

I am declaring victory in the battle for the methodological soul of the law and economics discipline. There is no need to continue to pursue the debate between behavioralists (that is, proponents of incorporating insights previously limited to the discipline of psychology into the economic analysis of legal rules and institutions) and the defenders of the traditional faith in individual optimization as a core analytical assumption of legal analysis.

Behavioral law and economics wins.  And it’s not close.  Korobkin continues:

[T]he battle to separate the economic analysis of legal rules and institutions from the straightjacket of strict rational choice assumptions has been won, at least by and large.  The fundamental methodological assumption of rational-choice economics, that individual behavior necessarily maximizes subjective expected utility, given constraints, has been largely discredited as an unyielding postulate for the analysis of legal policy.  Yes, such an assumption, even if inaccurate, simplifies the world, but it does so in an unhelpful way, much in the way that it is unhelpful for a drunk who has lost his car keys in the bushes to search under the streetlamp because that is where the light is.

The paper is remarkable on many levels, few of them positive.   I understand Professor Korobkin is trying to be provocative; in this he succeeds.  I — for one — am provoked.  But one problem with claims designed to provoke is that they may sacrifice other virtues in exchange for achieving the intended effect.  In this case, humility and accuracy are the first — but not the last — to go.   Indeed, Korobkin begins by acknowledging (and marginalizing) those who would deny victory to the behaviorists while magnanimously offering terms of surrender:

Not everyone has been won over, of course, but enough have to justify granting amnesty to the captured and politely ignoring the unreconstructed.

Unreconstructed.  I guess I’ll have to take that one.  Given the skepticism I’ve expressed (with Douglas Ginsburg) concerning behavioral law and economics, and in particular, the abuse of the behavioral economics literature by legal scholars, it appears capture is unlikely.   Indeed, Judge Ginsburg and I are publishing a critique of the behavioral law and economics movement — Behavioral Law and Economics: Its Origins, Fatal Flaws, and Implications for Liberty — in the Northwestern Law Review in January 2012.   A fuller development of the case for skepticism about behavioral law and economics can wait for the article; it suffices for now to lay out a few of the most incredible aspects of Korobkin’s claims.

Perhaps the most incendiary aspect of Korobkin’s paper is not a statement, but an omission.  Korobkin claims that rational choice economics has been “largely discredited as an unyielding postulate for the analysis of legal policy” — and then provides no citation for this proposition.  None.  Not “scant support,” not “conflicting evidence” — Korobkin dismisses rational choice economics quite literally by fiat.  We are left to infer from the fact that legal scholars have frequently cited two important articles in the behavioral law and economics canon (the 1998 article A Behavioral Approach to Law and Economics by Christine Jolls, Cass Sunstein and Richard Thaler and Law and Behavioral Science: Removing the Rationality Assumption from Law and Economics by Korobkin and Tom Ulen) that the behavioral approach has not only claimed victory in the marketplace for ideas but so decimated rational choice economics as to leave it discredited and “unhelpful.”  One shudders to consider the legion of thinkers chagrinned by Korobkin’s conclusive declaration.

Oh, wait.  The citations prove that behavioral law and economics is popular among legal scholars — and that’s about it.  I’ve no doubt that much is true.  If Korobkin’s claim were merely that behavioral law and economics has become very popular, I suppose that would be a boring paper, but the evidence would at least support the claim.  But the question is about relative quality of insight and analysis, not popularity.  Korobkin acknowledges as much, observing in passing that “Citation counts do not necessarily reflect academic quality, of course, but they do provide insight into what trends are popular within the legal academy.”   Undaunted, Korobkin moves seamlessly from popularity to the comparative claim that behavioral law and economics has “won” the battle over rational choice economics.   There is no attempt to engage intellectually on the merits concerning relative quality; truth, much less empirical validation, is not a mere matter of a headcount.

Even ceding the validity of citations as a metric to prove Korobkin’s underlying claim — the comparative predictive power of two rival economic assumptions — what is the relative fraction of citations using rational choice economics to provide insights into legal institutions?  How many cites has Posner’s Economic Analysis of Law received?  Where is the forthcoming comparison of articles in the Journal of Law and Economics, Journal of Legal Studies, Journal of Political Economy, Journal of Law, Economics, and Organization, American Economic Review, etc.?  One might find all sorts of interesting things by analyzing what is going on in the law and economics literature.  No doubt one would find that the behaviorists have made significant gains; but one expecting to find that rational choice economics has been discredited is sure to be disappointed by the facts.

Second, notice that the declaration of victory comes upon the foundation of citations to papers written in 1998 and 2000.  The debate over the law and economics of minimum resale price maintenance took nearly a century to settle in antitrust law, but behavioral law and economics has displaced and discredited all of rational choice economics in just over a decade?  The behavioral economics literature itself is, in scientific terms, very young.  The literature understandably continues to develop.  The theoretical and empirical project of identifying the conditions under which various biases are observed (and when they are not) is still underway and at a relatively early point in its development.  The over-reaching in Korobkin’s claim is magnified when one considers the relevant time horizon: impatience combined with wishful thinking is not a virtue in scientific discourse.

Third, it is fascinating that it is consistently the lawyers, and mostly law professors, rather than the behavioral economists, who wish to “discredit” rational choice economics.  Similarly, rational choice economists generally do not speak in such broad terms about discrediting behavioral economics as a whole.  Indeed, behavioral economists have observed that “it’s becoming clear that behavioral economics is being asked to solve problems it wasn’t meant to address.  Indeed, it seems in some cases that behavioral economics is being used as a political expedient, allowing policymakers to avoid painful but more effective solutions rooted in traditional economics.”  There are, of course, significant debates among theorists concerning the welfare implications of models, and among empiricists interpreting experiments and field evidence.  But it is the law professors without economic training who want to discredit a branch of economics.  It is important to distinguish here between behavioral economics and behavioral law and economics, and between rational choice economics and its application to law.  No doubt there are applications of rational choice economics to law that overreach and warrant deserved criticism; equally, there are abuses of behavioral economics in the behavioral law and economics literature.  It is a very productive exercise, and one in which law professors might have a comparative advantage, to identify and criticize these examples of overreaching in application to law.   But with all due respect to Professor Korobkin, if rational choice economics is going to be discredited — a prospect I doubt given its success in so many areas of the law — some economists are going to have to be involved.

Fourth, in the midst of declaring victory over rational choice economics, Korobkin doesn’t even bother to define rational choice economics correctly.  Korobkin writes:

To the extent that legal scholars wish to premise their conclusions on the assumption that the relevant actors are perfect optimizers of their material self-interest, they bear the burden of persuasion that this assumption is realistic in the particular context that interests them.

Elsewhere, Korobkin writes:

My central thesis, which runs through the three parts of the article to follow, is that now that law and economics has discarded the “revealed preferences” assumption of neoclassical economics – that individual behavior necessarily maximizes subjective expected utility . . .

This isn’t the rational choice argument; this barely suffices as a caricature of the rational choice assumption underlying conventional microeconomic analysis.  Korobkin falls victim to the all-too-common misunderstanding that the rational choice assumption is a descriptive assumption about each individual’s behavior.  That is obviously incorrect, and I suspect Korobkin knows it; anyone with even a passing familiarity with the rational choice literature realizes that a host of economists — Friedman, Becker, Stigler, and Alchian, to name a few — have long been interested in, understood, and incorporated irrational economic behavior into microeconomics.  The rational choice assumption has never been about describing the individual decision-making processes of economic agents.  Perhaps a model with a different assumption, e.g., that all individuals exhibit loss aversion or optimism bias (or half of them, or a quarter, or whatever), will offer greater predictive power.  Perhaps not.  Economists all agree that predictive power is the criterion for model selection.   That is the right debate to have (see, e.g., here), not whether law professors find uses for the behavioral approach to argue for various forms of paternalistic intervention — and, of note, it is still the case that this literature is used nearly uniformly for such purposes by law professors.  Korobkin’s method of declaring methodological victory on behalf of behavioral law and economics while failing to accurately describe rational choice economics is a little bit like challenging your rival to “take it outside,” and then remaining inside and gloating about your victory while he waits for the fight outside.

Korobkin defends his provocative declaration of victory with the argument that it allows him to “avoid an extended discussion” of a number of claims he has already deemed appropriate to dismiss (mostly through conventional strawman approaches) in favor of focusing on new and exciting challenges for the behaviorists.  I offer two observations on the so-called benefits of declaring victory while the battle is still being waged.  The first is that avoiding evidence-based debate is a bug rather than a feature from the perspective of scientific method.  The second is a much more practical exhortation against premature celebration: you can lose while you admire the scoreboard.  Anyone who has ever played sports knows it is best to “play the whistle.”

One final observation.  I recall from Professor Korobkin’s website bio that he is a Stanford guy.  You’d think he’d be a little bit more sensitive to the risk of losing the game while the band prematurely celebrates victory.

Pioneers of Law and Economics (with Lloyd Cohen) is now available in paperback. 

You can get it for 20% off the cover price at the link above (discounted price = $36).

There are essays focusing on: Ronald Coase, Aaron Director, George Stigler, Armen Alchian, Harold Demsetz, Benjamin Klein, James Buchanan, Gordon Tullock, Henry Manne, Richard Posner, Gary Becker, William Landes, Richard Epstein, Guido Calabresi, Frank Easterbrook, Daniel Fischel, Steven Shavell and A. Mitchell Polinsky.

Contributors are: Harold Demsetz, Nuno Garoupa, Fernando Gómez-Pomar, Mark Grady, Tom Hazlett, Keith Hylton, Kate Litvak, Andrew Morriss, Sam Peltzman, John Pfaff, Larry Ribstein, Stephen Stigler, Robert Tollison, Tom Ulen, Susan Woodward, and Joshua Wright.

Search Bias and Antitrust

Josh Wright —  24 March 2011

There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that preference the content of the search provider.  For example, Google might list Google Maps prominently if one searches “maps” or Microsoft’s Bing might prominently place Microsoft affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand some of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework.  The logic for this effort is simple.  The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition through favoring its own content and discriminating against rivals.  Economic theory suggests that such arrangements may be anticompetitive in some instances, but also provides a plethora of pro-competitive explanations.  Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance. Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, like advocates of net neutrality before them, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with the San Francisco Gate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based upon search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.


The paper is here (HT: Steve Salop).  The AER’s Top 20 Committee, consisting of Kenneth J. Arrow, B. Douglas Bernheim, Martin S. Feldstein, Daniel L. McFadden, James M. Poterba, and Robert M. Solow, made the selections.  The list is alphabetical, of course, but TOTM readers will observe that it starts off particularly well (see here and here).   A few interesting things jump out, e.g. the multiple appearances from Peter Diamond (and James Mirrlees).  Any big errors or omissions?

Here’s the list:

Alchian, Armen A., and Harold Demsetz. 1972. “Production, Information Costs, and Economic Organization.” American Economic Review, 62(5): 777–95.

Arrow, Kenneth J. 1963. “Uncertainty and the Welfare Economics of Medical Care.” American Economic Review, 53(5): 941–73.

Cobb, Charles W., and Paul H. Douglas. 1928. “A Theory of Production.” American Economic Review, 18(1): 139–65.

Deaton, Angus S., and John Muellbauer. 1980. “An Almost Ideal Demand System.” American Economic Review, 70(3): 312–26.

Diamond, Peter A. 1965. “National Debt in a Neoclassical Growth Model.” American Economic Review, 55(5): 1126–50.

Diamond, Peter A., and James A. Mirrlees. 1971. “Optimal Taxation and Public Production I: Production Efficiency.” American Economic Review, 61(1): 8–27.

Diamond, Peter A., and James A. Mirrlees. 1971. “Optimal Taxation and Public Production II: Tax Rules.” American Economic Review, 61(3): 261–78.

Dixit, Avinash K., and Joseph E. Stiglitz. 1977. “Monopolistic Competition and Optimum Product Diversity.” American Economic Review, 67(3): 297–308.

Friedman, Milton. 1968. “The Role of Monetary Policy.” American Economic Review, 58(1): 1–17.

Grossman, Sanford J., and Joseph E. Stiglitz. 1980. “On the Impossibility of Informationally Efficient Markets.” American Economic Review, 70(3): 393–408.

Harris, John R., and Michael P. Todaro. 1970. “Migration, Unemployment and Development: A Two-Sector Analysis.” American Economic Review, 60(1): 126–42.

Hayek, F. A. 1945. “The Use of Knowledge in Society.” American Economic Review, 35(4): 519–30.

Jorgenson, Dale W. 1963. “Capital Theory and Investment Behavior.” American Economic Review, 53(2): 247–59.

Krueger, Anne O. 1974. “The Political Economy of the Rent-Seeking Society.” American Economic Review, 64(3): 291–303.

Krugman, Paul. 1980. “Scale Economies, Product Differentiation, and the Pattern of Trade.” American Economic Review, 70(5): 950–59.

Kuznets, Simon. 1955. “Economic Growth and Income Inequality.” American Economic Review, 45(1): 1–28.

Lucas, Robert E., Jr. 1973. “Some International Evidence on Output-Inflation Tradeoffs.” American Economic Review, 63(3): 326–34.

Modigliani, Franco, and Merton H. Miller. 1958. “The Cost of Capital, Corporation Finance and the Theory of Investment.” American Economic Review, 48(3): 261–97.

Mundell, Robert A. 1961. “A Theory of Optimum Currency Areas.” American Economic Review, 51(4): 657–65.

Ross, Stephen A. 1973. “The Economic Theory of Agency: The Principal’s Problem.” American Economic Review, 63(2): 134–39.

Shiller, Robert J. 1981. “Do Stock Prices Move Too Much to Be Justified by Subsequent Changes in Dividends?” American Economic Review, 71(3): 421–36.

One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow up discussing Microsoft’s response here.  The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”)  and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google.  The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well.  And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand wringing about Microsoft’s “copying” Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend” deployed against Microsoft itself when it was the target of this sort of scrutiny were bogus.  But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.

Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results.  As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the “additional data” that Microsoft has access to here is, to a large extent, the same data that Google has.  Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: Bing does not, it seems, copy Google search results wholesale, nor does it use user behavior as extensively as it could (by, for example, seeing searches in Google and then logging the next page visited, which would give Bing a pretty good idea which sites in Google’s results users found most relevant).  But none of that changes the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google’s impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).

Perhaps Google is “better equipped to figure out what users favor.”  But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale rather than to its engineering and innovation.  The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network-effect arguments.

Editor’s Note: I invited Professor Thaler to respond to the TOTM Free to Choose Symposium, and he graciously accepted and offered the following response.

Richard Thaler is the Ralph and Dorothy Keller Distinguished Service Professor of Behavioral Science and Economics at the University of Chicago Booth School of Business.

I have now had a chance to read through the contributions to this event and have a few thoughts to share.  I cannot, of course, reply to everything that has been said here, and in any case, most of what I would say already appears in print.  Before getting into specifics let me say one thing up front:  take a deep breath!  These posts have a lot of emotion.  I am not sure why.

On to specifics:

Continue Reading…

Douglas Ginsburg is Circuit Judge, U.S. Court of Appeals for the District of Columbia.

Joshua Wright is Associate Professor, George Mason University School of Law.

The behavioral economics research agenda is an ambitious one for several reasons.  The first reason is that behavioral economics requires a theory of “true” preferences apart from, and in opposition to, the “revealed” preferences of the decision maker.  A second reason is that while collecting and documenting individual biases in an ad hoc fashion can generate interesting results, policy relevance requires an integrative theory of errors that can predict the necessary and sufficient conditions under which cognitive biases will hamper the decision-making of economic agents.  A third is not unique to behavioral economics but is nonetheless significant: demonstrating that behavioral economics improves predictive power.  The core methodological commitment of the behavioral economics enterprise, as with economics generally at least since Friedman (1953), is an empirical one: predictive power.  Indeed, no less than Christine Jolls, Cass Sunstein and Richard Thaler have described the behavioralist research program as the economic analysis of law “with a higher R-squared,” that is, “a greater power to explain the observed data.”

As I’ve observed previously, there are some good reasons to believe that behavioral law and economics (BLE) scholars do not share these methodological commitments.  I’ve discussed previously the example of the failure of BLE scholars to even cite, much less grapple with, the work of Zeiler & Plott (or here) regarding the endowment effect.  Zeiler & Plott present and support the provocative claim that current evidence supporting the endowment effect is better explained by experimental procedures than by cognitive biases.  Proponents of regulation based on the endowment effect, in my view, need not agree with this interpretation of these findings, but they ought to respond to them if they want to be taken seriously.  Unfortunately, of the 342 articles in JLR discussing the “endowment effect” from 2006 to present, only 35 cite either of the Zeiler and Plott articles.  I find that ratio discouraging for the discipline of behavioral law and economics generally and for the prevailing level of discourse.

Indeed, while David Levine is not referring to the BLE literature, he might as well have been when he writes:

Behavioral economics: love it or hate it – there seems to be no middle ground. Lovers take the obvious fact people are not frictionless maximizing machines together with the false premise that economists assume that they are to conclude that all of economics must be wrong. The haters take the equally obvious fact that laboratories are not the real world to dismiss all laboratory evidence that conflicts with their pet theories as irrelevant. In the end they seem primarily to talk past each other.

How can we improve the discourse and get discussion focused on predictive power and consequences of actual behavioral policies proposed or implemented?  The burden here lies with the skeptics.  As Richard Epstein points out, the behavioralists’ message has been clear and effective; indeed, Bar-Gill and Warren’s article generated the Consumer Financial Protection Bureau.  Behavioral skepticism has proven less effective.

Skeptics, including myself, have been decidedly less effective in convincing their respective audiences that specific behavioral proposals should be rejected and that conventional economic approaches should (at least for now) prevail in the market for ideas, in the academy and in the policy world.  It is true that the skeptics have a number of forces working against them.  One is that BLE is new and exciting.  Arguing that the “conventional” approach outperforms the newest tool in the toolkit is always an uphill battle.  I’ve alluded to a second reason: the failure of at least some of the BLE literature to engage with opposing ideas.  But perhaps most important is the failure of the skeptics to present a comprehensive and convincing case that the conventional economic approach can systematically be expected to outperform BLE when the full social benefits and costs of the various approaches and institutions are accounted for.  I’ve long been of the opinion that two primary reasons for this failure are that different strands of the skeptical literature have talked past one another, and that this has led to a failure to present the “full” case against BLE on the record to be evaluated.

Consistent with this view, the goal of this post is not to present any new ideas about behavioral economics or behavioral law and economics, but to catalog the various objections that have been raised in the literature, discuss their interactions, and link to some of the leading scholarship in the area calling into question the assumed superiority of the BLE approach on a variety of grounds.

Continue Reading…

Henry G. Manne is Dean Emeritus at George Mason University School of Law

Behavioral Economics, like so many efforts previously to upend the hegemony of the neo-classical market model, will leave some footprints on the intellectual sands of time.  However, there is no way that it can accomplish what many of its disciples seem, subliminally at least, to believe:  that we should abandon the traditional model with (because of?) all its implications about private property, competitive markets and individual freedom.  That dream is, of course, ridiculous for one obvious and frequently mentioned reason: Behavioral Economics does not even attempt to offer an integrated theory of resource allocation, the ultimate and necessary mission of any economic theory.  All it does is putter around some select edges of the traditional theory (mainly the very weakly held – and, as we shall see, unnecessary – rationality assumption) and, again like its predecessors in the intellectual history of economics, it claims far more damage to the received model than it actually delivers.

My first observation about the present state of affairs in Behavioral Economic theory is that it builds too ambitiously on the findings of psychology and does not pay enough attention to what economists already well understand.  I don’t know of any respectable economist of the last 80 years or more who has pushed a fundamentalist notion of the rationality assumption in descriptive or analytical economics.  To the extent that some of the more dedicated Behavioralists do accuse devotees of the market model of something like this, they are clearly misunderstanding the heuristic nature of the perfect competition model.  I won’t take the opportunity to try to reeducate them in what economics is all about and what economists do.  Suffice it to say that the model is valuable enough if it does nothing more than paint an idealistic picture of a perfect market – else what’s a heaven for?

What I would like to point out, however, is the irrelevance of much of the substance of Behavioral Economics for “doing” economics.  My principal (as a matter of fact my sole) authority for this proposition (though some of Gary Becker’s work also comes to mind) is the magnificent classic article by Armen Alchian, Uncertainty, Evolution, and Economic Theory, 58 JPE 211 (1950). Continue Reading…

Nobel Speculation Time

Josh Wright —  8 October 2010

Every year around this time, I repeat my prediction that Armen Alchian, Harold Demsetz, and Ben Klein will win the Nobel Prize for contributions to the theory of the firm, property rights, and transaction cost economics.  I understand that last year’s prize makes this combination less likely, but I see no reason to deviate.  I make the case for that combination, one that I think compares quite favorably to the more frequently discussed trio of Hart-Holmstrom-Tirole, in the linked post.  One can also imagine an Alchian / Demsetz prize grounded more narrowly in their work on property rights.

As Armen’s long-time collaborator William Allen put it in his own letter to the Nobel Committee on Armen’s behalf:

Economics is a broad discipline in methodology, as the Committee is fully aware, ranging from detailed historical, institutional, legalistic description to totally abstract, arcane theory. All such approaches, techniques, and emphases are appropriate. But there is much specialization among the members of the fraternity. And, increasingly, the profession has dealt in rigorous, elegant manipulation, even when the work is purportedly empirical—and even when the substantive results hardly warranted such virtuoso flair. Professor Alchian is a splendid technician, and he has contributed significantly and conspicuously to general “theory.” But, in contrast to many, he has always appreciated that the final payoff of Economics is elucidation of the real workings and phenomena of the world. I know of no one at any time who has had a finer sense of how to use economic analytics to explain the world. Sometimes the explanation requires involved, complex analysis, and Professor Alchian does not fear to use the tools which are required; what is uncommon is his lack of fear in using the MINIMUM tools which are required. In large part, his peculiar genius (the word is used advisedly) is to make extraordinarily effective use of elemental, and often elementary, techniques of analysis. And a host of people—many of whom are now in strategic positions in universities, in government, in the legal system, in the world of business and finance—have enormously benefited from the tutelage of Professor Alchian. … I present Armen Alchian as a giant—a giant who, because of his lack of pretension, is easily overlooked by laymen and even by some supposed professionals—who has greatly honored his profession and uniquely contributed to its usefulness. He would grace the distinguished fraternity of Nobel Laureates.

Indeed.  Fred McChesney’s entry on Alchian’s pathbreaking contributions to economic science is available here, and David Henderson’s entry on Alchian in the Concise Encyclopedia of Economics is here.

Thomson-Reuters also adds Kevin M. Murphy (another Bruin, at least as an undergraduate!) to their “Watch list” this year, which is also a splendid idea.

Gordon Tullock remains, along with Armen, at the very top of the list of the most deserving.

I’m seeing a lot of predictions for Thaler / Shiller.  That would be the prize that would surprise me least.