
Professor Adam Levitin is not impressed by our prediction of the effect of the CFPA on consumer credit.  Readers might recall that, using estimates from the literature on the effect of regulatory shocks on interest rates and on the long-term elasticity of debt, we offered what we described as a “rough calculation” of the “lower bound” of the effect of the CFPA Act on consumer credit: 2.1%.  Professor Levitin says that we just “make up the numbers” and that they do not pass the “straight-faced test.”  In his paper (and second blog post) Professor Levitin offers more of the same formula: a combination of assertions unsupported by evidence, ad hominem attacks, and insistence on his prior assumption that the CFPA will reduce the cost of credit without imposing serious regulatory costs (again, without substantiation).  He writes that his real problem with our analysis is that “The key point here, however, is the impact of the legislation is speculative and certainly not susceptible to precise statistical predictions.”
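For readers who want to see the shape of that arithmetic, here is a minimal sketch of the kind of back-of-the-envelope calculation at issue.  The two inputs are the ones named above: a regulation-induced increase in interest rates (the 160 basis point lower bound we estimate in the paper, discussed below) and an elasticity of long-term debt.  The semi-elasticity of roughly 1.3 shown here is an illustrative assumption chosen so that the figures tie out, not the paper’s exact parameterization:

\[
\frac{\Delta Q}{Q} \;\approx\; \varepsilon \times \Delta r \;\approx\; 1.3 \times 1.6 \text{ percentage points} \;\approx\; 2.1\%,
\]

where $Q$ is the quantity of consumer credit, $\Delta r$ is the regulation-induced rise in interest rates, and $\varepsilon$ is the semi-elasticity of borrowing with respect to the interest rate (percent change in credit per percentage-point change in rates).  The “lower bound” language matters: conservative choices for both inputs imply a reduction of at least this size, not a point prediction.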

We continue to be puzzled as to how a carefully qualified estimate of a lower bound could be confused with a precise statistical prediction.  Levitin’s response, unfortunately, does not address our lengthy explanation of how the CFPA will increase lender costs and continues to assert without evidence that failures of consumer protection caused the financial crisis.

More importantly, we’re puzzled by the apparent denial, by supporters of the CFPA, of any need to provide evidence that its benefits exceed its costs.  In his blog post, Professor Levitin notes that he “didn’t set out to prove a positive case in the critique and don’t need to do so to make [his] central point” in critiquing our analysis.  That’s fine.  But the central point of the discussion ought to be the costs and benefits of the proposed regulation.  The burden lies with the proponents of this sweeping change in the consumer protection regulatory landscape to present some evidence in favor of their proposals.  Professor Levitin further dismisses as “ridiculous in this context” the suggestion that the absence of empirical support in favor of the CFPA amounts to a concession.  Astute readers may perceive a developing theme.  Professor Levitin and supporters of the CFPA Act have expressed indifference at best toward providing empirical evidence concerning the potential benefits of the CFPA.  Instead, their general strategy has been to nakedly assert, as Professor Levitin does in his analysis, that failures of consumer protection led to the financial crisis, and then to offer regulatory proposals on the assumption that they will have benefits and that their implementation will be costless.  For example, Professor Levitin predicts that issues involving inconsistent regulations between the states will be resolved out of court, that states will likely adopt consistent regulations, and that industry will (apparently costlessly) “conform with the strictest level of regulation.”  As we discuss in the paper, there is simply no evidence to suggest that failures of consumer protection caused the financial crisis.  None.  To reiterate: there is no evidence whatsoever that failures in consumer protection precipitated the current financial downturn.

So to the supporters of the CFPA Act, and particularly Professor Levitin, who has criticized our analysis for providing at least some evidence of the legislation’s costs: where is the evidence – any evidence – that the CFPA Act’s benefits will outweigh its costs?  Do you still support the “plain vanilla” provision despite the fact that having a government agency design financial products, and requiring banks to offer them, will necessarily involve significant costs?  What are the benefits that outweigh those costs?  What is the proof of those benefits?  Or, even without “plain vanilla,” where is the evidence of any benefits that outweigh the obvious increase in lending costs (and, in turn, interest rates) associated with the CFPA’s new regulatory scheme that we lay out in the paper?  We’d like to see two things: a coherent economic explanation of these benefits and some empirical evidence suggesting they will actually materialize.  Do the CFPA’s supporters really think that the CFPA and its new regulatory landscape will not increase costs to lenders and reduce consumer credit?  We concede now, as we did in the original paper, that it is quite possible that some of the new proposals could produce benefits.  But as we said then, and repeat now, that discussion should be based on a careful analysis of the costs and benefits of the various proposals.

On to the wager proposal foreshadowed in the title to the post.

The CFPA Act’s supporters have fought vigorously for this piece of legislation.  Professor Levitin appears quite confident that our analysis represents a “scare statistic” meant to sidestep rigorous cost-benefit analysis while feigning precision.  Of course, we find this line of attack ironic in light of the complete absence of empirical evidence in favor of the CFPA Act mustered by its supporters.  So we’d like to offer Professor Levitin the opportunity to prove that he means what he says, both about our overestimate of the lower bound of the impact of the CFPA Act on consumer credit and about the beneficial effects of the CFPA Act more generally.  We are economists, and so we also believe in the power of revealed preferences.  We stand by our estimate of the lower bound at 2.1 percent.  If Professor Levitin is correct that this is a “scare statistic” we have inflated above the true number, we would like to give him an opportunity to profit from our misguided approach, and to test whether he really believes that the effect on consumer credit will be smaller than our estimate.

We propose the following wager to Professor Levitin:

If the effect on consumer credit is less than 2.1 percent, you win and we lose.

If and when the CFPA Act is passed, there will be ample data to test the impact of the CFPA on consumer credit directly.  We’re happy to negotiate what methods should be used to calculate the number to our mutual satisfaction.  We’re also happy to let you name the stakes.  But let’s make it interesting.  If it’s good enough for Mankiw and Krugman, it’s good enough for us.  What do you say?

Here’s the abstract:

The Consumer Financial Protection Agency Act (“CFPA Act”), introduced by the U.S. Department of the Treasury in June 2009, proposes sweeping regulation of consumer lending and borrowing. As we showed in “The Effect of the CFPA on Consumer Credit” (hereinafter “Evans and Wright (2009)”):

The CFPA Act creates massive litigation exposure for lenders facing (a) potential lawsuits from state and municipal governments for violating more stringent financial protection regulations that those entities can adopt pursuant to the CFPA Act; and (b) litigation under the CFPA Act’s new and undefined standards for engaging in unfair, deceptive, abusive, or unreasonable practices.

The new Agency would impose significant costs on lenders, who would be required to: (a) offer consumers, on a preferred basis, plain-vanilla products designed by the Agency, either before offering their own products or at the same time; (b) seek prior regulatory approval for new lending products, which could be defined to include even minor variations on existing products; (c) face the risk of having lending products banned altogether; and (d) comply with various other rules and regulations.

This note responds to a recent paper by Professor Adam Levitin offered in response to Evans and Wright (2009). As a prefatory matter, his paper is filled with various ad hominem attacks, which we will ignore. Instead, we focus on the substance of the issues in contention. Professor Levitin’s basic substantive objection is that he disagrees with our estimates that the Treasury Department’s bill would, under plausible assumptions, increase interest rates by at least 160 basis points and reduce net job creation by 4.3 percent. Professor Levitin’s criticisms are misguided, and we stand by those numbers as lower bounds on the effect of the Treasury’s CFPA Act on the economy. We also note that Professor Levitin has disputed virtually none of our findings that the CFPA Act would impose high costs on lenders and ultimately deny borrowers choice.

We think it is impossible to read the CFPA Act without concluding that lenders will face higher costs as a result of, among other things, dealing with the new Agency, being forced to offer products designed by a governmental body rather than themselves, coordinating the sale and distribution of financial products across regulatory regimes varying across the fifty states, and facing the increased possibility of fines and litigation under a novel and ambiguous “abusive” practices standard. While we believe there is a debate to be had on the costs and benefits of the CFPA Act, it is difficult to fathom a claim that this particular Act will not impose significant costs on lenders and that those costs will not be passed on to borrowers. Sound public policy should be based on a careful analysis of the costs and benefits of the various proposals. We do not believe Professor Levitin has made a constructive contribution to that deliberation but encourage him and others to do so as Congress considers the CFPA Act of 2009.

We encourage interested readers to take a look at our papers for themselves:

David S. Evans and Joshua D. Wright, The Effect of the Consumer Financial Protection Agency Act of 2009 on Consumer Credit

David S. Evans and Joshua D. Wright, A Response to Professor Levitin on the Effect of the Consumer Financial Protection Agency Act of 2009 on Consumer Credit

In a recent post, Josh jokingly offered a mathematical “proof” to demonstrate that the Neo-Chicago approach to antitrust was simply an extension of the basic Chicago School approach:

Dan identifies the “Neo-Chicago School”, a term coined by David Evans and Jorge Padilla, as the optimal “third way.”  Basically, the Neo-Chicago School combines price theory, empiricism, and the error-cost framework to inform the design of antitrust liability rules.  What is new in the Neo-Chicago label is the error-cost framework.  As I’ve written elsewhere, while I consider myself a subscriber to the Neo-Chicago approach, I’m not too convinced there is anything “Neo” about it.  Here’s my mathematical proof of this proposition:

Neo-Chicago = Chicago + Error Cost Framework

Neo-Chicago = Chicago + Intellectual creation of Frank Easterbrook

Neo-Chicago = Chicago + Chicago

Neo-Chicago = 2*Chicago

It’s trivial to demonstrate then that Neo-Chicago is really just a double dose of the Chicago School.  QED.

Like many great jokes, this one has dubious premises. Here’s why the theorem fails.

  • Neo-Chicago begins with Chicago. The single-monopoly-profit theorem, and many other concepts associated with the Chicago School, are widely accepted by antitrust practitioners.
  • Neo-Chicago, however, also agrees that modern industrial organization theory (what is sometimes called post-Chicago) is useful. Modern IO theory identifies necessary conditions for firms with significant market power to engage in anticompetitive behavior. Those necessary conditions are useful for fashioning screens: if the necessary conditions for a practice to be anticompetitive fail, we can stop the analysis.
  • Neo-Chicago recognizes that there are both false positives and false negatives and that the frequency and costs of each are an empirical matter that may vary over time and across jurisdictions (a minimal sketch of this comparison follows this list).
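To make the error-cost point concrete, here is a minimal decision-theoretic sketch of the comparison that last bullet describes; the notation is ours, offered for illustration rather than as Easterbrook’s own formulation. A liability rule is chosen to minimize expected error costs plus administrative costs:

\[
\min_{\text{rule}} \;\; p_{FP}(\text{rule}) \cdot C_{FP} \;+\; p_{FN}(\text{rule}) \cdot C_{FN} \;+\; C_{A}(\text{rule}),
\]

where $p_{FP}$ and $p_{FN}$ are the probabilities, under a given rule, of condemning benign conduct and of excusing anticompetitive conduct, $C_{FP}$ and $C_{FN}$ are the social costs of those errors, and $C_{A}$ is the cost of administering the rule. Because every term is an empirical magnitude, the minimizing rule can legitimately differ across jurisdictions and over time, which is the point of the two examples given further below.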

Neo-Chicago can be distinguished from Post-Chicago and Chicago-Squared, both of which we take to be largely ideological approaches to antitrust:

  • Post-Chicago too often reasons from “it could be anticompetitive” to “it is anticompetitive.” It sweeps up many business practices based on assumption-driven possibility theorems.
  • In the hands of some, Chicago-Squared says that Chicago shows firms do not have the incentive to engage in anticompetitive unilateral conduct in many circumstances, and then invokes false positives to eliminate all exceptions. The bad version of Chicago-Squared that we too often see does not take error costs seriously; it merely invokes them, often with little analysis or evidence, to reject interventions. In fact, we would limit the term Chicago-Squared to this ideologically driven version of Chicago, to distinguish it from the careful analysis of early error-cost advocates such as Easterbrook and Posner.

Neo-Chicago is a non-ideological, evidence-based approach to antitrust that can be used globally to fashion competition policy.  It recognizes that optimal rules vary geographically and over time depending on the facts.  Two examples illustrate:

  • There are likely to be valid differences in the likelihood of false positives and false negatives, and in their costs, across jurisdictions, based on business culture, economic history, and legal regime. Neo-Chicago implies that optimal rules can vary across jurisdictions.
  • Changes in a legal regime could require changes in rules. As the US tightens class-certification standards and adopts heightened pleading standards, there is at least an argument that the rate of false positives will decline and that liability rules should therefore be stricter.

We believe the Neo-Chicago approach can lay the basis for a professional antitrust discipline that can provide guidance for the application of antitrust in diverse jurisdictions and circumstances. It also provides a basis for engaging in constructive discourse about antitrust policy.

David Evans is Head, Global Competition Policy Practice, LECG; Executive Director, Jevons Institute for Competition Law and Economics, and Visiting Professor, University College London; and Lecturer, University of Chicago.

I’d like to propose a contest for the greatest intellectual embarrassment of antitrust. Let me name the first contestant: tying, which some of you know has been one of my favorites for years. Here’s why. First, there is no persuasive theoretical or empirical evidence that tying is a business practice that is likely to harm consumers.  (This is not the blog to deal with Professor Elhauge’s provocative paper, except to say that it does not alter this view.)  There is work that says it could be, under stringent conditions, and one can point to cases where maybe the practice has been used in a harmful way.  Yet the courts have put tying in the same antitrust category as price fixing when done by a firm with some market power.  Second, the courts, lacking any analytical framework for detecting bad behavior, have developed a mechanical test for tying that has no connection whatsoever to any of the plausible theories of when and why tying might be bad.  The test leads to false positives almost by design.  Third, tying has led to one of the most ridiculous antitrust remedies of all time: the European Commission’s insistence that Microsoft expend effort creating and offering a product that no one wants, namely a version of Windows that didn’t include Microsoft’s media player technology. Now, I understand that others will have their own candidates. But to beat mine, you must show a complete lack of theoretical or empirical support; a really bad legal test; and a remedy that better demonstrates the bankruptcy of the law.  The challenge is on.

David Evans is Head, Global Competition Policy Practice, LECG; Executive Director, Jevons Institute for Competition Law and Economics, and Visiting Professor, University College London; and Lecturer, University of Chicago.

The treatment of unilateral conduct remains an intellectual and policy mess as we finish out the first decade of the 21st century. There were signs of hope a few years ago. The European Commission embarked on an effort to adopt an effects-based approach to unilateral conduct and to move away from the analytically empty, object-based approach developed by the European courts.  Meanwhile, the Federal Trade Commission and the U.S. Department of Justice embarked on a series of hearings on unilateral conduct that brought the best thinkers together in the hope of achieving some consensus.  Hopes were dashed in 2008.  The Justice Department and the FTC splintered. The DOJ issued a lengthy report that, for all intents and purposes, argued for significantly limiting the circumstances under which a business practice could be found to constitute anticompetitive unilateral conduct. Three of the four sitting Federal Trade Commissioners quickly asserted their fundamental disagreement. Toward the end of the year the European Commission finally issued a document that adopted an effects-based approach, sort of, but only as guidance for the prosecutorial discretion it exercises in choosing which cases to pursue.  I say “sort of” because, although much of the framework it adopts is quite sensible, the Commission places virtually insurmountable obstacles in the way of considering efficiencies. (For a comparative review of the EC and DOJ reports see my article here.)

The incoherence and discord in this area of antitrust law fundamentally result from a failure on the part of economists and other antitrust scholars to roll up their sleeves and do significant empirical work. Instead, two kinds of flags get waved to urge the respective followers on. The first involves the famous economic possibility theorem: “it could, therefore it will.”  That is a statement based on a seemingly scientific economic model that says something might happen under some conditions.  Oftentimes those conditions are unstated, buried in footnotes, or hidden in mathematics that only the careful reader uncovers.  (See my article with Jorge Padilla here.) Many are guilty of waving possibility theorems around. But in my experience the pro-intervention crowd, including the competition authorities, is the most enamored of untested, assumption-driven economic theory.