Archives For cost-benefit analysis

The world of economics and public policy has lost yet another giant.  Joining Ronald Coase, James Buchanan, Armen Alchian, and Robert Bork is a man whose name may be less familiar to TOTM readers but whose ideas have been hugely influential, particularly on me.

As the first chairman of President Reagan’s Council of Economic Advisers, Murray Weidenbaum laid much of the blame for the anemic economy President Reagan “inherited” (my, how I’ve come to hate that word!) on the then-existing regulatory state.  Command-and-control regulation dominated in those days, and there was virtually no consideration of such mundane matters as the costs and benefits of regulatory interventions and the degree to which regulations were tailored to fit the market failures they purported to correct.  Murray understood that such an unmoored regulatory state strangled innovation and would inevitably become co-opted by regulatees, who would use the machinery of the state to squelch competition and gain other advantages.  He counseled the President to do something about it.

The result was Executive Order 12291, which subjected major federal regulations to cost-benefit analysis and stated that “[r]egulatory action shall not be undertaken unless the potential benefits to society from the regulation outweigh the potential costs to society.”  Such basic cost-benefit balancing seems like nothing more than common sense these days, but when Murray was pushing the idea at Washington University back in the late 1970s, it was considered pretty radical.  Many of the Nixon-era environmental statutes, for example, proudly eschewed consideration of costs.  Murray helped us see how silly that was.

I distinctly remember a conversation we had in 1993.  I had just been hired as a research fellow at Wash U’s Center for the Study of American Business, and Murray, the Center director, was taking me and the other research fellow to lunch.  The faculty dining club at Wash U is across a busy-ish street from the main campus.  There’s a tunnel a block or so west of the dining club, but hardly anybody would use it when walking to lunch.  As we waited for an opening in traffic and crossed the street, Murray remarked, “See fellows, this is what I’m talking about.  Crossing this busy street is risky.  All these lunch-goers could eliminate the risk of an accident by walking two blocks out of their way.  But nobody ever does that.  The risk reduction just isn’t worth the cost.”
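Murray’s lunchtime observation is, at bottom, an expected-cost comparison, and it can be sketched in a few lines. Every number below is invented purely for illustration; the point is the structure of the comparison, not the figures:

```python
# Back-of-the-envelope version of the street-crossing comparison.
# All numbers here are invented for illustration only.

p_accident = 1e-7            # assumed risk of an accident on one street crossing
cost_accident = 5_000_000.0  # assumed dollar value placed on an accident
expected_accident_cost = p_accident * cost_accident  # ~$0.50 per crossing

detour_minutes = 10          # assumed extra round-trip time to use the tunnel
value_per_minute = 0.50      # assumed dollar value of a pedestrian's time
detour_cost = detour_minutes * value_per_minute      # $5.00 per crossing

# Take the detour only if its cost is less than the expected accident cost.
take_detour = detour_cost < expected_accident_cost   # False: cross the street
```

Under these made-up numbers the tunnel costs ten times more than the risk it eliminates, which is exactly the intuition behind the remark: eliminating a risk is only worth doing when the risk reduction is worth its price.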

That was classic Murray.  He was a plain-talking purveyor of common sense.  He was firm in his beliefs but always kind and never doctrinaire.  By presenting his ideas calmly and rationally, he earned the respect of differently minded folks, like Democratic Senator Thomas Eagleton, with whom he co-taught a popular course at Wash U.  Our country is a better place because of Murray’s service, and I am where I am because he took me under his wing.

Rest in peace, Murray.

Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But we can’t and shouldn’t pretend like there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something,” without evincing any nuanced understanding of the issues involved.

Frankly, I see this as of a piece with much of the privacy debate that points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think it’s important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies — is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.

Last week, over Commissioner Wright’s dissent, the FTC approved amendments to its HSR rules (final text here) that, as Josh summarizes in his dissent,

establish, among other things, a procedure for the automatic withdrawal of an HSR filing upon the submission of a filing to the U.S. Securities and Exchange Commission announcing that the notified transaction has been terminated.

I discussed the proposed amendments and Josh’s concurring statement on their publication in February.

At the time, Josh pointed out that:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction. Indeed, it would be surprising to see firms incurring the costs and devoting the time and effort associated with antitrust review in the absence of a good faith intent to proceed with their transaction.

The proposed rules, if adopted, could increase the costs of corporate takeovers and thus distort the market for corporate control. Some companies that had complied with or were attempting to comply with a Second Request, for example, could be forced to restart their antitrust review, leading to significant delays and added expenses. The proposed rules could also create incentives for firms to structure their transactions less efficiently and discourage the use of tender offers. Finally, the proposed new rules will disproportionately burden U.S. public companies; the Federal Register notice acknowledges that the new rules will not apply to tender offers for many non-public and foreign companies.

Given these concerns, I hope that interested parties will avail themselves of the opportunity to submit public comments so that the Commission can make an informed decision at the conclusion of this process.

Apparently none of the other commissioners shared his concerns. But they remain valid. Most importantly, the amendments were adopted without a shred of evidence to suggest they were needed or would be helpful in any way. As Josh says in his dissent:

It has long been accepted as a principle of good governance that federal agencies should issue new regulations only if their benefits exceed their costs….However, I have not seen evidence that any of the over 68,000 transactions that have been notified under the HSR Rules has resulted in the allocation of resources to a truly hypothetical transaction.

In the absence of evidence that the automatic withdrawal rule would remedy a problem that exists under the current HSR regime, and thus benefit the public, I believe we should refrain from creating new regulations.

For what it’s worth, the single comment received by the Commission on the proposed rule supported Josh’s views:

Although the rule may prevent such inefficiency in the future, it would also require companies to incur substantial costs in premerger negotiations and resource allocation while waiting for FTC approval during the HSR period. Currently, firms can avoid such costs by temporarily withdrawing offers or agreements until they are assured of FTC approval. Under the proposed rule, however, doing so would automatically withdraw a company’s HSR filing, subjecting it to another HSR filing and filing fee.

Presumably the absence of other comments means the business community isn’t too concerned about the amendments. But that doesn’t mean they should have been adopted without any evidence to support the claim that they were needed. I commend Josh for sticking to his principles and going down swinging.

Earlier this month, Representatives Peter DeFazio and Jason Chaffetz picked up the gauntlet from President Obama’s comments on February 14 at a Google-sponsored Internet Q&A on Google+ that “our efforts at patent reform only went about halfway to where we need to go” and that he would like “to see if we can build some additional consensus on smarter patent laws.” So, Reps. DeFazio and Chaffetz introduced on March 1 the Saving High-tech Innovators from Egregious Legal Disputes (SHIELD) Act, which creates a “losing plaintiff patent-owner pays” litigation system for a single type of patent owner—patent licensing companies that purchase and license patents in the marketplace (and who sue infringers when infringers refuse their requests to license). To Google, to Representative DeFazio, and to others, these patent licensing companies are “patent trolls” who are destroyers of all things good—and the SHIELD Act will save us all from these dastardly “trolls” (is a troll anything but dastardly?).

As I and other scholars have pointed out, the “patent troll” moniker is really just a rhetorical epithet that lacks even an agreed-upon definition.  The term is used loosely enough that it sometimes covers and sometimes excludes universities, Thomas Edison, Elias Howe (the inventor of the lockstitch in 1843), Charles Goodyear (the inventor of vulcanized rubber in 1839), and even companies like IBM.  How can we be expected to have a reasonable discussion about patent policy when our basic terms of public discourse shift in meaning from blog to blog, article to article, speaker to speaker?  The same is true of the new term, “Patent Assertion Entities,” which sounds more neutral, but has the same problem in that it also lacks any objective definition or usage.

Setting aside this basic problem of terminology for the moment, the SHIELD Act is anything but a “smarter patent law” (to quote President Obama). Some patent scholars, like Michael Risch, have begun to point out some of the serious problems with the SHIELD Act, such as its selectively discriminatory treatment of certain types of patent-owners.  Moreover, as Professor Risch ably identifies, this legislation was so cleverly drafted to cover only a limited set of a specific type of patent-owner that it ended up being too clever. Unlike the previous version introduced last year, the 2013 SHIELD Act does not even apply to the flavor-of-the-day outrage over patent licensing companies—the owner of the podcast patent. (Although you wouldn’t know this if you read the supporters of the SHIELD Act like the EFF who falsely claim that this law will stop patent-owners like the podcast patent-owning company.)

There are many things wrong with the SHIELD Act, but one thing that I want to highlight here is that it is based on a falsehood: the oft-repeated claim that two Boston University researchers have proven in a study that “patent troll suits cost American technology companies over $29 billion in 2011 alone.”  This is what Rep. DeFazio said when he introduced the SHIELD Act on March 1. This claim was repeated yesterday by House Members during a hearing on “Abusive Patent Litigation.” The claim that patent licensing companies cost American tech companies $29 billion in a single year (2011) has become gospel since this study, The Direct Costs from NPE Disputes, was released last summer on the Internet. (Another name for patent licensing companies is “Non Practicing Entity” or “NPE.”)  A Google search of “patent troll 29 billion” produces 191,000 hits. A Google search of “NPE 29 billion” produces 605,000 hits. Such is the making of conventional wisdom.

The problem with conventional wisdom is that it is usually incorrect, and the study that produced the claim of “$29 billion imposed by patent trolls” is no different. The $29 billion cost study is deeply and fundamentally flawed, as explained by two noted professors, David Schwartz and Jay Kesan, who are also highly regarded for their empirical and economic work in patent law.  In their essay, Analyzing the Role of Non-Practicing Entities in the Patent System, also released late last summer, they detailed at great length serious methodological and substantive flaws in The Direct Costs from NPE Disputes. Unfortunately, the Schwartz and Kesan essay has gone virtually unnoticed in the patent policy debates, while the $29 billion cost claim has through repetition become truth.

In the hope that at least a few more people might discover the Schwartz and Kesan essay, I will briefly summarize some of their concerns about the study that produced the $29 billion cost figure.  This is not merely an academic exercise.  Since Rep. DeFazio explicitly relied on the $29 billion cost claim to justify the SHIELD Act, and he and others keep repeating it, it’s important to know if it is true, because it’s being used to drive proposed legislation in the real world.  If patent legislation is supposed to secure innovation, then it behooves us to know if this legislation is based on actual facts. Yet, as Schwartz and Kesan explain in their essay, the $29 billion cost claim is based on a study that is fundamentally flawed in both substance and methodology.

In terms of its methodological flaws, the study supporting the $29 billion cost claim employs an incredibly broad definition of “patent troll” that covers almost every person, corporation, or university that sues someone for infringing a patent that is not, at that moment, being used to manufacture a product.  While the meaning of the “patent troll” epithet shifts depending on the commentator, reporter, blogger, or scholar who is using it, one would be extremely hard pressed to find anyone embracing this expansive usage in patent scholarship or similar commentary today.

There are several reasons why the extremely broad definition of “NPE” or “patent troll” in the study is unusual even compared to uses of this term in other commentary or studies. First, and most absurdly, this definition, by necessity, includes every university in the world that sues someone for infringing one of its patents, as universities don’t manufacture goods.  Second, it includes every individual and start-up company who plans to manufacture a patented invention, but is forced to sue an infringer-competitor who thwarted these business plans by its infringing sales in the marketplace.  Third, it includes commercial firms throughout the wide-ranging innovation industries—from high tech to biotech to traditional manufacturing—that have at least one patent among a portfolio of thousands that is not being used at the moment to manufacture a product because it may be “well outside the area in which they make products” and yet they sue infringers of this patent (the quoted language is from the study). So, according to this study, every manufacturer becomes an “NPE” or “patent troll” if it strays too far from what somebody subjectively defines as its rightful “area” of manufacturing. What company is not branded an “NPE” or “patent troll” under this definition, or will necessarily become one in the future given inevitable changes in one’s business plans or commercial activities? This is particularly true for every person or company whose only current opportunity to reap the benefit of their patented invention is to license the technology or to litigate against the infringers who refuse license offers.

So, when almost every possible patent-owning person, university, or corporation is defined as an “NPE” or “patent troll,” why are we surprised that a study that employs this virtually boundless definition concludes that they create $29 billion in litigation costs per year?  The only thing surprising is that the number isn’t even higher!

There are many other methodological flaws in the $29 billion cost study, such as its explicit assumption that patent litigation costs are “too high” without providing any comparative baseline for this conclusion.  What are the costs in other areas of litigation, such as standard commercial litigation, tort claims, or disputes over complex regulations?  We are not told.  What are the historical costs of patent litigation?  We are not told.  On what basis then can we conclude that $29 billion is “too high” or even “too low”?  We’re supposed to be impressed by a number that exists in a vacuum and that lacks any empirical context by which to evaluate it.

The $29 billion cost study also assumes that all litigation transaction costs are deadweight losses, which would mean that the entire U.S. court system is a deadweight loss according to the terms of this study.  Every lawsuit, whether a contract, tort, property, regulatory or constitutional dispute is, according to the assumption of the $29 billion cost study, a deadweight loss.  The entire U.S. court system is an inefficient cost imposed on everyone who uses it.  Really?  That’s an assumption that reduces itself to absurdity—it’s a self-imposed reductio ad absurdum!

In addition to the methodological problems, there are also serious concerns about the trustworthiness and quality of the actual data used to reach the $29 billion claim in the study.  All studies rely on data, and in this case, the $29 billion study used data from a secret survey done by RPX of its customers.  For those who don’t know, RPX’s business model is to defend companies against these so-called “patent trolls.”  So, a company whose business model is predicated on hyping the threat of “patent trolls” does a secret survey of its paying customers, and it is now known that RPX informed its customers in the survey that their answers would be used to lobby for changes in the patent laws.

As every reputable economist or statistician will tell you, such conditions encourage exaggeration and bias in a data sample by motivating participation among those who support changes to the patent law.  Such a problem even has a formal name in economic studies: self-selection bias.  But one doesn’t need to be an economist or statistician to be able to see the problems in relying on the RPX data to conclude that NPEs cost $29 billion per year. As the classic adage goes, “Something is rotten in the state of Denmark.”
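The self-selection problem is easy to see in a toy simulation. Every number below is invented for illustration; the setup simply assumes, as the survey conditions would encourage, that firms with higher patent-litigation costs are likelier to respond:

```python
# Toy illustration of self-selection bias; all numbers are invented.
import random

random.seed(0)

# A population of firms with annual patent-litigation costs (in $M):
# exponentially distributed, so most costs are small and a few are large.
population = [random.expovariate(1 / 2.0) for _ in range(100_000)]

# Assume firms with higher costs are likelier to answer the survey:
# response probability grows with cost, capped at 90%.
respondents = [c for c in population if random.random() < min(0.9, c / 20.0)]

population_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)

# The self-selected survey mean substantially overstates the true mean.
print(f"population mean: {population_mean:.2f}  survey mean: {survey_mean:.2f}")
```

With these assumptions the survey mean comes out roughly double the population mean, even though every respondent answered truthfully. Extrapolating a total cost figure from such a sample bakes the bias directly into the headline number.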

Even worse, as I noted above, the RPX survey was confidential.  RPX has continued to invoke “client confidences” in refusing to disclose its actual customer survey or the resulting data, which means that the data underlying the $29 billion claim is completely unknown and unverifiable for anyone who reads the study.  Don’t worry, the researchers have told us in a footnote in the study, they looked at the data and confirmed it is good.  Again, it doesn’t take economic or statistical training to know that something is not right here. Another classic cliché comes to mind at this point: “it’s not the crime, it’s the cover-up.”

In fact, keeping data secret in a published study violates well-established and longstanding norms in all scientific research that data should always be made available for testing and verification by third parties.  No peer-reviewed medical or scientific journal would publish a study based on a secret data set in which the researchers have told us that we should simply trust them that the data is accurate.  Its use of secret data probably explains why the $29 billion study has not yet appeared in a peer-reviewed journal, and, if economics has any claim to being an actual science, this study never will.  If a study does not meet basic scientific standards for verifying data, then why are Reps. DeFazio and Chaffetz relying on it to propose national legislation that directly impacts the patent system and future innovation?  If heads-in-the-clouds academics would know to reject such a study as based on unverifiable, likely biased claptrap, then why are our elected officials embracing it to create real-world legal rules?

And, to continue our running theme of classic clichés, there’s the rub. The more one looks at the actual legal requirements of the SHIELD Act, the more, in the words of Professor Risch, one is left “scratching one’s head” in bewilderment.  The more one looks at the supporting studies and arguments in favor of the SHIELD Act, the more one is left, in the words of Professor Risch, “scratching one’s head.”  The more and more one thinks about the SHIELD Act, the more one realizes what it is—legislation that has been crafted at the behest of the politically powerful (such as an Internet company who can get the President to do a special appearance on its own social media website) to have the government eliminate a smaller, publicly reviled, and less politically-connected group.

In short, people may have legitimate complaints about the ways in which the court system in the U.S. generally has problems.  Commentators and Congresspersons could even consider revising the general legal rules governing patent litigation for all plaintiffs and defendants to make the litigation system work better or more efficiently (by some established metric).  Professor Risch has done exactly this in a recent Wired op-ed.  But it’s time to call a spade a spade: the SHIELD Act is a classic example of rent-seeking, discriminatory legislation.

Although it probably flew under almost everyone’s radar, last week Josh issued his first Concurring Statement as an FTC Commissioner.  The statement came in response to a seemingly arcane Notice of Proposed Rulemaking relating to Hart-Scott-Rodino Premerger Notification Rules:

The proposed rules also establish a procedure for the automatic withdrawal of an HSR filing when filings are made with the U.S. Securities and Exchange Commission (SEC) announcing that a transaction has been terminated.

The proposed rulemaking itself isn’t enormously significant, but Josh’s statement lays down a marker that indicates (as anyone could have predicted) that he intends to do everything he can to improve the agency and its process.

The rule, as suggested above, would automatically withdraw an HSR filing whenever transacting parties filed certain notices with the SEC announcing the termination of a deal.  You may recall that the Hertz/Dollar Thrifty deal had been in the works for at least five years when it finally closed.  When Hertz withdrew its tender offer in October 2011, it did not withdraw its HSR filing.  As reported at the time, Hertz withdrew its bid over difficulty securing FTC approval, which had plagued other offers for Thrifty:

In a sign of frustration, Mr. Thompson said that the company had spent some $30 million over the last few years dealing with the barrage of takeover offers.

Obviously, given the difficulty of securing FTC approval and the costs imposed by the uncertainty it created, there was real benefit to Hertz (and perhaps Thrifty, for that matter) from receiving a decision from the FTC without meanwhile tying up the company’s resources, restraining its decision- and deal-making abilities, complicating negotiations and weakening its credit by maintaining a stalled-but-pending merger.  So the deal was withdrawn, but the HSR filing was not.

In August 2012 the parties re-initiated the merger following ongoing consultations by Hertz with the FTC, and, in November 2012 — a full year after the deal was withdrawn (and a year and a half after the HSR filing) — the FTC approved the deal.

But, understandably, FTC staff don’t want to be wasting resources reviewing hypothetical transactions, and so, following on the heels of the Hertz/Dollar Thrifty deal, wrote the proposed rule to ensure that it never happens again.

Except it didn’t happen in Hertz’s case because, after all, the deal was eventually made. According to Josh, in fact, the situation the rule is intended to avoid has never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction. Indeed, it would be surprising to see firms incurring the costs and devoting the time and effort associated with antitrust review in the absence of a good faith intent to proceed with their transaction.

This isn’t to say (and Josh doesn’t say) that the proposed rule is a bad idea, just that, given the apparently negligible benefits of the rule, the costs could easily outweigh the benefits.

Which is why Josh’s Statement is important. What Josh is asking for is not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits. And as Josh points out, there could indeed be some costs:

The proposed rules, if adopted, could increase the costs of corporate takeovers and thus distort the market for corporate control. Some companies that had complied with or were attempting to comply with a Second Request, for example, could be forced to restart their antitrust review, leading to significant delays and added expenses. The proposed rules could also create incentives for firms to structure their transactions less efficiently and discourage the use of tender offers. Finally, the proposed new rules will disproportionately burden U.S. public companies; the Federal Register notice acknowledges that the new rules will not apply to tender offers for many non-public and foreign companies.

Given these concerns, I hope that interested parties will avail themselves of the opportunity to submit public comments so that the Commission can make an informed decision at the conclusion of this process.

What is surprising is not that Josh suggested that there might be unanticipated costs to such a rule, nor that cost-benefit analysis be applied. Rather, what’s surprising is that the rest of the Commission didn’t sign on. Why is that surprising? Well, because cost-benefit analysis is not only sensible, it’s consistent with the Obama Administration’s stated regulatory approach. Executive Order 13563 requires that:

Each agency must, among other things:  (1) propose or adopt a regulation only upon a reasoned determination that its benefits justify its costs (recognizing that some benefits and costs are difficult to quantify) . . . In applying these principles, each agency is directed to use the best available techniques to quantify anticipated present and future benefits and costs as accurately as possible.

Unfortunately, as Berin Szoka has pointed out,

The FCC, FTC and many other regulatory agencies aren’t required to do cost-benefit analysis at all.  Because these are “independent agencies”—creatures of Congress rather than part of the Executive Branch (like the Department of Justice)—only Congress can impose cost-benefit analysis on agencies.  A bipartisan bill, the Independent Agency Regulatory Analysis Act (S. 3486), would have allowed the President to impose the same kind of cost-benefit analysis on independent regulatory agencies as on Executive Branch agencies, including review by the Office of Information and Regulatory Affairs (OIRA) for “significant” rulemakings (those with $100 million or more in economic impact, that adversely affect sectors of the economy in a material way, or that create “serious inconsistency” with other agencies’ actions). . . . yet the bill has apparently died . . . .

Legislation or not, it is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

It may have happened before, but I can’t recall an FTC Commissioner laying down the cost-benefit-analysis gauntlet and publicly calling for consistent cost-benefit review at the Commission, even of seemingly innocuous (but often not actually innocuous), technical rules.

This is exactly the sort of thing that those of us who extolled Josh’s appointment hoped for, and I’m delighted to see him pushing this kind of approach right out of the gate.  No doubt he rocked some boats and took some heat for it. Good. That means he’s on the right track.

The semester is off with a bang.  I arrived at Stanford Monday to start teaching in the Law School and begin a research fellowship at the Hoover Institution.  Yesterday I hiked in the mountains overlooking the SF Bay.  Today I am flying back to DC (and blogging in flight, how cool is that) to testify Thursday before the House Committee on Financial Services alongside SEC Chairman Schapiro, former Chairman Pitt, and former Commissioner Paul Atkins on proposed legislation from Congressman Scott Garrett and Chairman Spencer Bachus to reform and reshape the SEC.

Part of the hearing, titled “Fixing the Watchdog: Legislative Proposals to Improve and Enhance the Securities and Exchange Commission,” will deal with the study on SEC organizational reform mandated by the Dodd-Frank Act and conducted by the Boston Consulting Group.  Frankly, I found it full of quotes from the consultant’s desk manual, with references to “no-regrets implementation,” “business process optimization” and “multi-faceted transformation.”  I believe the technical term is gobbledygook.

The remainder of the hearing will involve a discussion of the SEC Organizational Reform Act (or “Bachus Bill”) and the SEC Regulatory Accountability Act (or “Garrett Bill”).  The Bachus Bill proposes a number of organizational reforms, like breaking up the new Division of Risk, Strategy, and Financial Innovation to embed the economists there back into the various functional divisions.  The Garrett Bill seeks to strengthen the guiding principles originally formulated in the NSMIA amendments by elaborating on how the agency can meet its economic analysis burden in rule-making.

I thought I would give TOTM readers a sneak peek at my testimony.  I aim to make two key points.  First, sincere economic analysis is important.  SEC rules have consistently done a poor job of meeting the mandate of the NSMIA to consider the effect of new rules on efficiency, competition, and capital formation, and they will continue to do a poor job until the agency hires more economists and gives them increased authority in the enforcement and rule-making processes.  Second, the SEC’s mission should include an explicit requirement that it consider the effect of new rules on the state-based system of business entity formation.

Here it is:

Chairman Bachus, Ranking Member Frank, and distinguished members of the Committee, it is a privilege to testify today.  My name is J.W. Verret.  I am an Assistant Professor of Law at Stanford Law School where I teach corporate and securities law.  I also serve as a Fellow at the Hoover Institution and as a Senior Scholar at the Mercatus Center at George Mason University.  I am currently on leave from the George Mason Law School.

My testimony today will focus on two important and necessary reforms.

First, I will argue that clarifying the SEC’s legislative mandate to conduct economic analysis and a commitment of authority to economists on staff at the SEC are both vital to ensure that new rules work for investors rather than against them.  Second, I will urge that the SEC be required to consider the impact of new rules on the state-based system of business incorporation.

Every President since Ronald Reagan has requested that independent agencies like the SEC commit to sincere economic cost-benefit analysis of new rules.  Further, unlike many other independent agencies, the SEC is subject to a legislative mandate that it consider the effect of most new rules on investor protection, efficiency, competition, and capital formation.

The latter three principles have been interpreted as requiring a form of cost-benefit economic analysis using empirical evidence, economic theory, and compliance cost data.  These tools help determine a rule’s impact on stock prices and stock exchange competitiveness, and measure compliance costs that are passed on to investors.

Three times in the last ten years, private parties have successfully challenged SEC rules for failure to meet these requirements.  Across the three cases, no fewer than five distinguished jurists on the DC Circuit, appointed during administrations of both Republican and Democratic Presidents, found the SEC’s economic analysis wanting.  One failure might have been an aberration; three failures out of three challenges is a dangerous pattern.

Many SEC rules have treated the economic analysis requirements as an afterthought. This is in part a consequence of the low priority the Commission places on economic analysis, evidenced by the fact that economists have no significant authority in the rule-making process or the enforcement process.

As an example of the level of analysis typically given to significant rule-making, consider the SEC’s final release implementing Section 404(b) of the Sarbanes-Oxley Act.  The SEC estimated that the rule would impose an annual cost of $91,000 per publicly traded company.  In fact, a subsequent SEC study five years later found average implementation costs for 404(b) of $2.87 million per company.

That error in judgment applies only to estimates of direct costs.  The SEC gave no consideration whatsoever to the more important category of indirect costs, like the impact of the rule on the volume of new offerings or IPOs on US exchanges.

In Business Roundtable v. SEC alone the SEC estimates it dedicated over $2.5 million in staff hours to a rule that was struck down.  An honest commitment by the SEC to empower economists in the rule-making process will be a vital first step to ensure the mistakes of the proxy access rule are not replicated in future rules.

I also support the goal in H.R. 2308 to further elaborate on the economic analysis requirements.  I would suggest, in light of the importance and pervasiveness of the state-based system of corporate governance, that the bill include a provision requiring the SEC to consider the impact of new rules on the states when rule-making touches on issues of corporate governance.

The U.S. Supreme Court has noted that “No principle of corporation law and practice is more firmly established than a state’s authority to regulate domestic corporations.”

Delaware is one prominent example, serving as the state of incorporation for half of all publicly traded companies.  Its corporate code is so highly valued among shareholders that the mere fact of Delaware incorporation typically earns a publicly traded company a 2-8% increase in value.  Many other states also compete for incorporations, particularly New York, Massachusetts, California and Texas.

In order to fully appreciate this fundamental characteristic of our system, I would urge adding the following language to H.R. 2308:

“The Commission shall consider the impact of new rules on the traditional role of states in governing the internal affairs of business entities and whether it can achieve its stated objective without preempting state law.”

The SEC can comply by taking into account commentary from state governors and state secretaries of state during the open comment period.  It can minimize the preemptive effect of new rules by including references to state law, where appropriate, similar to the one already found in Rule 14a-8.  It can also commit to a process for seeking guidance on state corporate law by creating a mandatory state court certification procedure similar to that used by the SEC in the AFSCME v. AIG case in 2008.

I thank you again for the opportunity to testify and I look forward to answering your questions.

[Cross-posted at PYMNTS.COM]

Richard Cordray’s nomination hearing provided an opportunity to learn something new about the substantive policies of the new Consumer Financial Protection Bureau.  Unfortunately, that opportunity came and went without answering many of the key questions that remain concerning the impact of the CFPB’s enforcement and regulatory agenda on the availability of consumer credit, economic growth, and jobs.

The Consumer Financial Protection Bureau’s critics, including myself, [1] have expressed concerns that the CFPB—through enforcement and regulation—could harm consumers and small businesses by reducing the availability of credit.  The intellectual blueprint for the CFPB is founded on the insight, from behavioral economics, that “[m]any consumers are uninformed and irrational,” that “consumers make systematic mistakes in their choice of credit products,” and that the CFPB should play a central role in determining which products are used, and to what extent. [2] The CFPB’s recent appointment of Sendhil Mullainathan as its Assistant Director for Research confirms its commitment to the behaviorist approach to regulation of consumer credit.  Mullainathan, in work co-authored with Professor Michael S. Barr, provided the intellectual basis for the much debated “plain vanilla” provision in the original legislation and advocated a whole host of new consumer credit regulations ranging from improved disclosures to “harder” forms of paternalism.  The concern, in short, is that the CFPB is hard-wired to take a myopic view of the tried-and-true benefits of consumer credit markets and runs the risk of harming many (especially the socially and economically disadvantaged groups in the greatest need of access to consumer credit) in the name of protecting the few.

To be sure, there is absolutely no doubt that there are unscrupulous and unsavory characters in lending markets engaging in bad acts ranging from fraud to preying upon vulnerable borrowers.  Nonetheless, it is critical to recognize the positive role that lending markets and the availability of consumer credit have played in the American economy, especially in facilitating entrepreneurial activity and small business growth.  Taking into account these important benefits is fundamental to developing sound consumer credit policy.  I had hoped that the hearings might focus upon Mr. Cordray’s underlying philosophical approach to weighing the costs and benefits of credit regulation and how that balance might be struck at his CFPB.  They did not, instead focusing largely upon another important issue: the precise contours of CFPB authority and oversight.

Currently, the unemployment rate is over 9 percent and all of the available evidence suggests the CFPB’s approach will run a significant risk of overregulation that will reduce the availability of consumer credit to small businesses and thus further depress the economy.  Therefore, getting hard answers concerning how the CFPB views and will account for these risks in its enforcement and regulatory decisions is critical.  Certainly, the nomination hearing offered small hints toward this end.  We learned that under Mr. Cordray’s watch, CFPB enforcement will involve not only lawsuits but also a “more flexible toolbox” that includes “research reports, rulemaking guidance, consumer education and empowerment, and the ability to supervise and examine both large banks and many nonbank institutions.”

The job of protecting consumers in financial products markets—the domain of the new CFPB—extends to all such consumers.  Healthy markets and competition in consumer credit products have generated tremendous economic benefits for the most disadvantaged as well as for small businesses.  If the CFPB agenda were limited to educating consumers about the costs and benefits of various products and improving disclosures, there would be far less need for concern that it will be a drag on consumers, entrepreneurial activity, and economic growth.  However, the CFPB’s intellectual blueprint suggests a more aggressive and dangerous agenda, and the authority it has been granted renders that agenda feasible.  The CFPB must account for the benefits from lending markets and balance them against its laudable objective of preventing deceptive practices when crafting its enforcement and regulatory agenda.  Unfortunately, after Tuesday’s nomination hearing, the CFPB’s approach to this complex and delicate balance remains an open question.

—–

[1] David S. Evans & Joshua D. Wright, The Effect of the Consumer Financial Protection Agency Act of 2009 on Consumer Credit, 22(3) Loyola Consumer L. Rev. 277 (2010).

[2] Oren Bar-Gill & Elizabeth Warren, Making Credit Safer, 157 U. Pa. L. Rev. 1, 39 (2008).

There has been plenty of Hurricane Irene blogging, and some posts linking natural disasters to various aspects of law and policy (see, e.g., my colleague Ilya Somin discussing property rights and falling trees).  Often, post-natural disaster economic discussion at TOTM turns to the perverse consequences of price gouging laws.  This time around, the damage from the hurricane got me thinking about the availability of credit.  In policy debates in and around the new CFPB and its likely agenda — which is often reported to include restrictions on payday lending — I often take up the unpopular (at least in the rooms in which these debates often occur) position that while payday lenders can abuse consumers, one should think very carefully about incentives before going about restricting access to any form of consumer credit.  In the case of payday lending, for example, proponents of restrictions or outright bans generally have in mind a counterfactual world in which consumers who are choosing payday loans are simply “missing out” on other forms of credit with superior terms.  Often, proponents of this position rely upon a theory involving particular behavioral biases of at least some substantial fraction of borrowers who, for example, overestimate their future ability to pay off the loan.  Skeptics of government-imposed restrictions on access to consumer credit (whether it be credit cards or payday lending) often argue that such restrictions do not change the underlying demand for consumer credit.  Consumer demand for credit — whether for consumption smoothing purposes, in response to a natural disaster or personal income “shock,” or for another reason — is an important lubricant for economic growth.  Restrictions do not reduce this demand at all — in fact, critics of these restrictions point out, consumers are likely to switch to the closest substitute forms of credit available to them if access to one source is foreclosed.
Of course, these stories are not necessarily mutually exclusive: that is, some payday loan customers might irrationally use payday lending while better options are available while at the same time, it is the best source of credit available to other customers.

In any event, one important testable implication of the economic theories of payday lending relied upon by critics of such restrictions (including myself) is that restrictions on their use will have a negative impact on access to credit for payday lending customers (i.e., they will not be able to simply turn to better sources of credit).  While most critics of government restrictions on access to consumer credit appear to recognize the potential for abuse and favor disclosure regimes and significant efforts to police and punish fraud, the idea that payday loans might generate serious economic benefits for society often appears repugnant to supporters of restrictions.  All of this takes me to an excellent paper that lies at the intersection of these two issues: natural disasters and the economic effects of restrictions on payday lending.  The paper is Adair Morse’s Payday Lenders: Heroes or Villains?  From the abstract:

I ask whether access to high-interest credit (payday loans) exacerbates or mitigates individual financial distress. Using natural disasters as an exogenous shock, I apply a propensity score matched, triple difference specification to identify a causal relationship between access-to-credit and welfare. I find that California foreclosures increase by 4.5 units per 1,000 homes in the year after a natural disaster, but the existence of payday lenders mitigates 1.0-1.3 of these foreclosures. In a placebo test for natural disasters covered by homeowner insurance, I find no payday lending mitigation effect. Lenders also mitigate larcenies, but have no effect on burglaries or vehicle thefts. My methodology demonstrates that my results apply to ordinary personal emergencies, with the caveat that not all payday loan customers borrow for emergencies.

To be sure, there are other papers with different designs that identify economic benefits from payday lending and other otherwise “disfavored” credit products.  Similarly, there are papers out there that use different data and a variety of research designs and identify social harms from payday lending (see here for links to a handful, and here for a recent attempt).  A literature survey is available here.  Nonetheless, Morse’s results remind me that consumer credit institutions — even non-traditional ones — can generate serious economic benefits in times of need, and policy analysts must be careful in evaluating and weighing those benefits against potential costs when thinking about and designing restrictions that will change incentives in consumer credit markets.

Medical Devices

Paul H. Rubin —  18 April 2011

The GAO has recently issued a report on medical devices.  The thrust of the report is that “high-risk” medical devices do not receive enough scrutiny from the FDA and that recalls are not handled well.  This report and other evidence indicate that the FDA is likely to require more testing of devices.  As of now, most medical devices are approved on a fast track that requires significantly less testing than that required for new drugs.  (As I have discussed in a forthcoming Cato Journal article, medical devices are also subject to more immunity from state product liability lawsuits.)

The GAO report is remarkable.  The GAO defines its mission as

“Our Mission is to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. We provide Congress with timely information that is objective, fact-based, nonpartisan, nonideological, fair, and balanced.”

But the report on medical devices is entirely unbalanced.  It deals only with procedures for approval and the recall process (both of which are judged inadequate).  There is no discussion of either costs or benefits.  That is, no evidence is presented that there is any actual harm from the “flawed” approval and recall processes.  Even more importantly, no evidence is presented about the benefits to consumers from easy and rapid approval of medical devices.

As is well known, virtually all economists who have studied the FDA drug approval process have concluded that it causes serious harm by delaying drugs.  The import of the GAO Report is that we should duplicate that harm with medical devices.  This is an odd and perverse way of providing a “benefit” to the American people.

Does anyone really still believe that the threat of antitrust enforcement doesn’t lead to undesirable caution on the part of potential defendants?

Whatever you may think of the merits of the Google/ITA merger (and obviously I suspect the merits cut in favor of the merger), there can be no doubt that restraining Google’s (and other large companies’) ability to acquire other firms will hurt those other firms (in ITA’s case, for example, they stand to lose $700 million).  There should also be no doubt that this restraint will exceed whatever efficient level is supposed by supporters of aggressive antitrust enforcement.  And the follow-on effect from that will be less venture funding and thus less innovation.  Perhaps we have too much innovation in the economy right now?

Reuters fleshes out the point in an article titled, “Google’s M&A Machine Stuck in Antitrust Limbo.”  That about sums it up.

Here are the most salient bits:

Not long ago, selling to Google offered one of the best alternatives to an initial public offering for up-and-coming technology startups. . . . But Google’s M&A machine looks to be gumming up.

* * *

The problem is antitrust limbo.

* * *

Ironically that may make it less appealing to sell to Google. The company has announced just $200 million of acquisitions in 2011 — the smallest sum since the panic of 2008.

* * *

The ITA acquisition has sent a warning signal to the venture capital and startup communities. Patents may still be available. But no fast-moving entrepreneur wants to get stuck the way ITA has since agreeing to be sold last July 1.

* * *

For a small, growing business the risks are huge.

* * *

That doesn’t exclude Google as an exit option. But the regulatory risk needs to be hedged with a huge breakup fee. . . . With Google’s rising antitrust issues, however, the fee needs to be as big as the purchase price.