
Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

          b.  Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.
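The selection principle sketched in the paragraph above can be expressed as a simple comparison: for each candidate intervention, sum its expected error costs (welfare lost when the policy misfires, e.g. from the knowledge problem or public choice pressures) and its decision costs (the expense of formulating and administering it), then prefer the policy with the lowest total.  A minimal illustration, with entirely hypothetical policy names and cost figures of my own invention:

```python
# Toy sketch of the "minimize error costs plus decision costs" framework.
# Every policy label and every number below is hypothetical, chosen only
# to show the shape of the comparison, not drawn from the book.

CANDIDATES = {
    # policy: (expected error cost, expected decision cost)
    "do nothing":                  (9.0, 0.0),
    "facilitate private ordering": (4.0, 1.0),
    "mandatory disclosure":        (3.5, 2.5),
    "command-and-control rule":    (2.0, 6.0),
}

def total_cost(policy: str) -> float:
    """Sum of expected error costs and decision costs for a policy."""
    error_cost, decision_cost = CANDIDATES[policy]
    return error_cost + decision_cost

# Pick the candidate minimizing the combined cost.
best = min(CANDIDATES, key=total_cost)
print(best, total_cost(best))  # prints: facilitate private ordering 5.0
```

Note how the most aggressive intervention can have the lowest error cost yet still lose once its decision costs are counted, while doing nothing loses because it leaves the full market failure in place; the framework formalizes that trade-off rather than assuming either extreme.
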

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different from the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

There are many days that I wish Larry Ribstein were still here, and today is definitely one of those days.  He would have had a lot to say about the tenth anniversary of SOX today.  He and Henry Butler noted in their book “The Sarbanes-Oxley Debacle: What We’ve Learned; How to Fix It” that:

“while the direct costs are substantial, they are only the tip of the iceberg…. An important aspect of SOX’s indirect costs is the Act’s impact on litigation. SOX gives litigators the benefit of 20/20 hindsight to identify minor or technical reporting mistakes as the basis for lawsuits against corporations, officers, and directors. While the first major market correction will be painful for investors, SOX will surely turn it into a festival for trial lawyers. Litigation on this scale should not be confused with shareholder protection. SOX has created a ticking litigation time bomb.”

Kevin Lacroix discusses a 2011 study from Cornerstone Research that demonstrated just how prescient Larry and Henry were on this aspect of SOX.  The study shows that cases involving accounting allegations became increasingly common in nearly all of the years after SOX, that cases involving internal control weakness allegations are much more likely to settle, and that cases involving accounting allegations dominate settlements, making up 70–90% of total settlement dollars.  So it turns out, as Larry and Henry predicted, SOX has become quite the precocious child at ten years old (which corresponds, I am told, with fourth grade).  Education.com offers advice for the precocious fourth grader that might prove useful to the parents of little SOX:

“With increased participation in school and extracurricular activities, in addition to her growing sense of self, and ability to process the world around her, fourth grade can be a heady time.  It’s important to help your child learn to budget and manage her time effectively, making sure, especially, that she always gets a good night’s rest.  If your fourth grader becomes withdrawn or seems stressed, try helping her pare down some of her activities until she has a schedule that allows for unscheduled play and quiet time.”

Randomizing Regulation

Josh Wright —  10 February 2012

An interesting post on the University of Pennsylvania Reg Blog from Michael Abramowicz, Ian Ayres, and Yair Listokin (AAY) on “Randomizing Regulation,” based upon their piece in the U Penn L. Rev.

If legislators disagree about the efficacy of a proposed policy, why not resolve the disagreement with a bet?  One approach would be to impose one policy approach randomly on some members of the population, but not on others, to determine whether the policy meets its goals. This solution would overcome the measurement problems of conventional regression analysis and would provide a useful way to compare regulations and promote bipartisan agreement. Legislators might agree that once such a test is complete, the winning approach would apply to everyone.

For example, regulators could test the Sarbanes-Oxley Act’s most controversial provisions, such as those requiring public companies to institute internal controls and then to have their CEOs and CFOs certify their financial statements, by randomly repealing one or more of those provisions for some corporations for some period of time. Randomization would enable analysts to determine which regulatory regime is optimal by assessing which test-group of corporations has the highest level of success, whether measured by stock price, investor confidence in financial reporting, lack of fraud, or other yardsticks.

Conventional statistical and econometric analytical techniques are often used to measure the efficacy of statutes and regulations, but they face problems that randomized trials would not. Researchers may purposefully or mistakenly omit variables from their regression analyses, leading to incorrect results. Publishers are more likely to feature work that provides statistically significant results, even if those results are not correct, a phenomenon known as publication bias.

No doubt many economists and empiricists are nodding their heads in agreement and drooling at the opportunity to more accurately identify and measure the effects of regulation.  Randomization would allow application of techniques far superior to what is typically used.  AAY discuss some of the common critiques of randomization in the blog post, and at greater length in the paper.  The longer version is worth reading, but here is the short version from the blog post:

Ethical concerns are important, but may not present a significant barrier to using randomized tests. While legal randomized tests would lack the informed consent provided in medical experiments, the government regularly imposes regulations on the public – within constitutional and other legal bounds. Also, randomization sometimes makes the imposition more equal than regulation imposed using predetermined criteria. We tend to think it is worse to impose rules on people because the selected people are unpopular rather than simply because they were selected randomly.
How should randomized trials work? The experiments should be large enough to produce meaningful results. The test groups, meanwhile, should be the smallest possible without changing the results outside those test groups. For example, driving speed limits cannot be randomized at the individual level because such a test group size would significantly increase the risk of accidents. However, the test group could be at the county level.
Experiments should also be of sufficiently long durations to prevent test subjects from changing their behavior temporarily for the duration of the experiment. For example, if different income tax levels are imposed on different people to see if imposing a higher income tax reduces work output, an experiment of short duration would be more likely to be biased. Workers could wait out a temporary increase in income tax level by temporarily working less, and plan to work more once their income tax level decreases.
There is no problem, under current standards of judicial review, with administrative agencies testing out different regulations on their own. Agencies could put their proposed experimental regulations through the regular notice and comment process. After running the experiment, the agencies could provide a randomization impact statement explaining why the agency decided to test regulations through that process, describing the experiment, and providing its results. Because randomization provides for more objective analysis of policy results, courts should be more deferential in conducting hard look review to agencies that have selected policies through this approach.
Interesting stuff.
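The omitted-variable problem AAY highlight can be made concrete with a small simulation (a hypothetical sketch, not drawn from their paper): when firms self-select into a policy based on an unobserved trait, a naive treated-versus-untreated comparison badly overstates the policy's effect, while random assignment recovers it.

```python
import random

random.seed(0)
N = 20_000
TRUE_EFFECT = 2.0


def estimate(randomized):
    """Difference in mean outcomes between treated and control firms."""
    treated_out, control_out = [], []
    for _ in range(N):
        quality = random.gauss(0, 1)          # unobserved confounder
        if randomized:
            treated = random.random() < 0.5   # coin-flip assignment
        else:
            treated = quality > 0             # self-selection on quality
        outcome = TRUE_EFFECT * treated + 3.0 * quality + random.gauss(0, 1)
        (treated_out if treated else control_out).append(outcome)
    return sum(treated_out) / len(treated_out) - sum(control_out) / len(control_out)


print(f"true effect:        {TRUE_EFFECT}")
print(f"naive comparison:   {estimate(randomized=False):.2f}")  # badly inflated
print(f"randomized trial:   {estimate(randomized=True):.2f}")   # close to 2.0
```

The naive comparison conflates the policy's effect with the effect of the omitted "quality" variable; randomization breaks that correlation by construction, which is exactly the advantage AAY claim over conventional regression analysis.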

Roberta Romano has just posted her paper, Regulating in the Dark. Here’s the abstract:

Foundational financial legislation is typically adopted in the midst or aftermath of financial crises, when an informed understanding of the causes of the crisis is not yet available. Moreover, financial institutions operate in a dynamic environment of considerable uncertainty, such that legislation enacted even under the best of circumstances can have perverse unintended consequences, and regulatory requirements correct for an initial set of conditions can become inappropriate as economic and technological circumstances change. Furthermore, the stickiness of the status quo in the U.S. political system renders it difficult to revise legislation, even though there may be a consensus to do so. This essay contends that the best means of responding to this dismal state of affairs is to include, as a matter of course, in crisis-driven financial legislation and its implementing regulation two key procedural mechanisms: (1) a requirement of automatic subsequent review and reconsideration of the legislative and regulatory decisions at some future point in time; and (2) regulatory exemptive or waiver powers, that encourage, where feasible, small scale experimentation, as well as flexibility in implementation. Both procedural devices will better inform and calibrate the regulatory apparatus, and could thereby mitigate, at least on the margin, the unintended errors which will invariably accompany financial legislation and rulemaking originating in a crisis. Given the centrality of financial institutions and markets to economic growth and societal well-being, it is exceedingly important for legislators acting in a financial crisis with the best of intentions, to not make matters worse.

It’s worth noting that Henry Butler and I, in our book about SOX (at 96-97, footnotes omitted), also suggested “sunset” provisions as an antidote to crisis-driven regulation:

[S]ignificant new financial and governance regulation like SOX that displaces and supplements prior regulatory approaches should be subject to periodic review and sunset provisions. Although Congress, of course, can always undertake such reviews, prior experience indicates that it will not. Legislation is a one-way regulatory ratchet. It arises when the conditions for reform are ripe for a regulatory panic. The conditions for a “deregulatory panic” are less likely to develop. Firms learn to live with the extra costs and may not be willing or able to bear the costs of lobbying for repeal, at least in the absence of a regulatory cataclysm. Thus, it is not surprising that SOX sponsor Michael Oxley, despite recognizing that SOX was “excessive” in some respects, and admitting that it had been rushed through Congress, suggested that Congress would not be revisiting the issue, even as to the seriously affected small companies. He said, “If I had another crack at it I would have provided a bit more flexibility for small- and medium-sized companies.” In other words, Congress normally does not have “another crack” at regulation. A sunset or review mechanism would change that.

Perhaps Congress can learn some lessons from itself. The USA Patriot Act was passed less than one year before SOX and, like SOX, was passed by an overwhelming majority. Unlike SOX, the USA Patriot Act includes sunset provisions for some of its most controversial provisions. The Patriot Act’s sunset provision forced Congress and the president to reevaluate and debate those provisions, in an atmosphere far removed from the immediate post-9/11 panic. American investors would benefit from a sober reevaluation of SOX. Perhaps the courts will provide that opportunity. For future regulatory panics, Congress would do well to remember the lessons of the Patriot Act.

One footnote in Romano’s article particularly grabbed my attention.  Referring to Jack Coffee’s criticism of sunset provisions in a not-yet-public manuscript (“The Political Economy of Dodd-Frank: Why Financial Reform Tends to be Frustrated and Systemic Risk Perpetuated”), Romano notes:

Coffee (2011: 4, 6, 9) sweepingly seeks to dismiss the scholarship with which he disagrees by engaging in serial name calling, referring to the authors, Steve Bainbridge, Larry Ribstein and me, as “the ‘Tea Party Caucus’ of corporate and securities law professors” (a claim that would have been humorous had it not been said earnestly), “conservative critics of securities regulation,” (a claim, at least in my case, that would be accurate if he had dropped the adjective), and further referring to Bainbridge and Ribstein, as “[my] loyal adherents.”

 She also observes in this footnote:  

[I]n the American political tradition and academic literature, advocacy of sunsetting has historically cut across political party lines. It has had a distinguished liberal pedigree, having been advocated by, among others, President Jimmy Carter, Senator Edward Kennedy, political scientist Theodore Lowi, and Common Cause (Breyer 1982; Kysar 2005).”

Like Butler and me, she cites the Patriot Act precedent.

Well, I’m proud to be included in Romano’s and Bainbridge’s “tea party,” and surprised to be there, given that I advocated an idea also endorsed by Carter, Kennedy, Lowi, and Common Cause.  It’s sad that a scholar of Coffee’s stature sees a need to resort to such rhetoric, though almost understandable since Romano’s devastating critique doesn’t leave him much of a ledge to sit on.

 As for Romano’s article, definitely do read the whole thing.  Rather than simply condemning Dodd-Frank, she argues persuasively for a way to avoid future financial over-regulation.

Update:  Matt Bodie confuses blogs and scholarly articles, statutes and people. Bainbridge sets him straight, and Leiter agrees.  But do read Bodie’s post anyway because he links to some great Gretchen posts which even I had forgotten.

We have heard much about the costs of internal controls reporting under SOX 404. Proponents argue that the fraud reduction is worth the costs.  One might question this in light of anecdotes like all the missing cash at MF Global (and many other post-SOX securities fraud suits where auditors and executives had signed off on internal controls).  But more comprehensive evidence would be helpful.

The latest word on the evidence front is Rice and Weber, How Effective is Internal Control Reporting Under SOX 404? Determinants of the Non-Disclosure of Existing Material Weaknesses.  Here’s the abstract:

We study determinants of internal control reporting decisions during the SOX 404 era using a sample of restating firms whose original misstatements are linked to underlying control weaknesses. We find that only a minority of these firms acknowledge their existing control weaknesses during their misstatement periods, and that this proportion has declined over time. Further, the probability of reporting existing weaknesses is negatively associated with external capital needs, firm size, non-audit fees, and the presence of a large audit firm; it is positively associated with financial distress, auditor effort, previously reported control weaknesses and restatements, and recent auditor and management changes. These results provide evidence that detection and disclosure incentives play a role in whether existing material weaknesses are reported, which has implications for the effectiveness of SOX 404 in providing investors with advance warning of potential accounting problems.

There are three remarkable things here. 

  • The authors study only firms that had internal controls weaknesses to see which reported, reducing the problem of confounding the existence of problems with the weakness of reporting. 
  • “Only a minority” (32.4%) of these weak-controls firms actually report their weaknesses, despite SOX. 
  • These firms are least likely to report weaknesses when they most need money.  This shouldn’t seem too surprising, because this is when the firm has most incentive to misreport.  But if you hoped SOX would be effective in counteracting those incentives, forget about it.
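Rice and Weber's finding that the probability of disclosure falls with external capital needs comes out of regressions like the following toy version (the simulated firms and the assumed -1.5 coefficient are invented for illustration; this is not the authors' model or sample):

```python
import math
import random

random.seed(1)

# Simulate restating firms (hypothetical data): firms that need outside
# capital are assumed less likely to disclose an existing weakness.
def simulate_firm():
    needs_capital = 1 if random.random() < 0.5 else 0
    logit = -0.2 - 1.5 * needs_capital            # assumed "true" model
    p_disclose = 1 / (1 + math.exp(-logit))
    return needs_capital, 1 if random.random() < p_disclose else 0

data = [simulate_firm() for _ in range(5000)]

# Fit a one-variable logistic regression by gradient ascent on the
# average log-likelihood.
b0 = b1 = 0.0
for _ in range(500):
    g0 = g1 = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += 2.0 * g0 / len(data)
    b1 += 2.0 * g1 / len(data)

overall_rate = sum(y for _, y in data) / len(data)
print(f"share disclosing: {overall_rate:.1%}")
print(f"coefficient on capital need: {b1:.2f}")  # negative, near the assumed -1.5
```

The sign of the fitted coefficient, not its exact size, is the point: when disclosure is voluntary in practice, the incentive to hide weaknesses while raising money shows up as a negative coefficient on capital needs, which is what the paper reports.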

The authors explain on the Harvard blog:

The usefulness of internal control reports in providing advance warning on the likelihood of misstatements in the financial reports is reduced if control weaknesses are not disclosed until after the misstatements themselves are later revealed. * * *

The results of this study make several contributions to the literature. By documenting that SOX 404 reports are not always effective in identifying existing control weaknesses and, further, that the effectiveness has not improved over time, our results lend some support to criticisms of internal control reporting in practice and suggest that recent declines in reported material weaknesses may not be reflective of improvements in underlying control practices, consistent with concerns voiced by the SEC. These results also inform recent debates over the value of requiring control reports to be audited. Despite the audit requirement of SOX 404, our evidence indicates that the majority of restating firms provided no advance warning of the control problems that led to their misstatements. Finally, our results also have implications for future academic research. We document considerable variation in whether existing weaknesses are actually reported and our evidence on the determinants of that reporting should be considered by future research using public disclosures to study internal control practices.

The authors note the caveat that “the generalizability of our results to firms with control weaknesses that do not lead to restatements is unclear. This is particularly true of our results for Big 4 vs. non-Big 4 auditors because of the direct role that auditors play in certifying the reliability of financial statements (and thus in the likelihood of restatement).” They offer the following explanation of the curious negative correlation with Big 4 accountants:

Given previous evidence that larger auditors tend to provide higher quality financial statement audits (see Francis [2004] for a review), larger auditors may be better able to “audit around” control weaknesses and avoid the misstatements that would lead to inclusion in our sample.

The bottom line is that even if internal controls reporting is generally a good idea, this evidence indicates the current approach is failing: it’s not only imposing high costs, but it’s getting low results.  One might hope that in light of these results SOX would at least be revised to target mandates where they are most needed. This could happen in a more dynamic regulatory system.  But in 2002 Congress locked internal controls reporting in a vault impervious to post-2002 data.  The PCAOB can tweak auditors’ obligations, but it can’t change the basic regulatory framework.

Ritter, Gao and Zhu ask, Where have all the IPOs Gone?  Well, not to young men everywhere, but to the older men and women who run the big companies that have replaced public markets as the key venture capital exit. Here’s the abstract:

During 1980-2000, an average of 311 companies per year went public in the U.S. Since the technology bubble burst in 2000, the average has been only 102 initial public offerings (IPOs) per year, with the drop especially precipitous among small firms. Many have blamed the Sarbanes-Oxley Act of 2002 and the 2003 Global Settlement’s effects on analyst coverage for the decline in U.S. IPO activity. We offer an alternative explanation. We posit that the advantages of selling out to a larger organization, which can speed a product to market and realize economies of scope, have increased relative to the benefits of remaining as an independent firm. Consistent with this hypothesis, we document that there has been a decline in the profitability of small company IPOs, and that small company IPOs have provided public market investors with low returns throughout the last three decades. Venture capitalists have been increasingly exiting their investments with trade sales rather than IPOs, and an increasing fraction of firms that have gone public have been involved in acquisitions. Our analysis suggests that IPO volume will not return to the levels of the 1980s and 1990s even with regulatory changes.

Why has this happened?  The authors suggest that “there has been a structural change over time that has increased the profitability of large firms that can realize economies of scope, speed products to market, and realize economies of scale in information technology.” For example, Apple could “assign a veritable army of engineers to rapidly implement new technologies into its consumer electronics products, such as the iPad, iPod, and iPhone. * * * No small independent company could implement new technology so rapidly and sell tens of millions of units to consumers in a matter of months.”

On the demand side, “the internet has made comparison shopping easier for consumers * * * Increased speed of communication thus leads to both a greater advantage from implementing new technology quickly, and a greater opportunity cost of waiting.”

If the authors are right, the future of innovation is mainly in large firms. (Not entirely:  the data doesn’t exclude the possibility of more Facebooks, Zyngas and Groupons.) Small firms might come up with the initial ideas, but big firms will become more influential in determining which ideas succeed. 

The question then becomes whether the structure of big firms will adapt to their new role by staying entrepreneurial and receptive to innovation. Big firms tend to get bureaucratized and complacent.  We have needed new blood to shake up the incumbents.  Big firms will have to offer decentralized decision-making and innovative compensation policies (insider trading?). 

Here is where deregulation could be useful:  not in determining the number of IPOs, but in letting today’s big firms offer the incentive structures and flexibility that enabled them to grow up to be what they are today.

I don’t share this to offer an opinion on the underlying action, but I thought it would be an item of interest to our readers.  Much has been written on this blog about challenges in the SEC’s FCPA enforcement process.

I am surprised the news media hasn’t touched Herman Cain’s relationship with AGCO Corp. during the FCPA Oil-for-Food bribery settlement, given the recent focus on his campaign.  I thought I would share the facts with limited editorializing.  Maybe the story is that the episode calls Herman Cain’s business judgment into question; then again, maybe the point is that the FCPA is so harsh that even upstanding business leaders can get caught up in it.

Herman Cain was a member of the Board of Directors of AGCO Corp. from December 2004 until 2011.  He also joined the Audit Committee of AGCO in December 2004, which had enhanced obligations for overseeing company internal controls after passage of Sarbanes-Oxley in 2002.  Like many companies, including General Electric, AGCO was implicated in the Oil-for-Food scandal in which Saddam Hussein’s regime coerced companies selling food or other items permitted under exemptions to the UN embargo to give kickbacks to the regime.

The SEC filed a complaint in 2009 charging AGCO with violations of the Foreign Corrupt Practices Act in connection with the Oil-for-Food program.  The SEC complaint alleges that “AGCO failed to accurately record in its books and records the kickbacks that were authorized for payment to Iraq.  AGCO also failed to devise and maintain systems of internal accounting controls to detect and prevent such illicit payments.” The complaint describes an internal procedure whereby “sales and marketing personnel were able to enter into contracts without review from the legal or finance departments.”   AGCO earned nearly $14 million on the contracts secured through the kickbacks.

AGCO settled with the SEC without admitting or denying the allegations in the complaint and paid nearly $20 million to settle the SEC and companion actions by the DOJ and Danish authorities.  AGCO also entered into a deferred prosecution agreement with the DOJ over the companion criminal charges.  The deferred prosecution agreement called on AGCO to fix problems in its internal controls to prevent future FCPA violations.

The SEC complaint describes illicit activity taking place between 2000 and 2003 (ending in March 2003 with the US invasion), all of which occurred prior to Herman Cain joining the Board of AGCO in 2004.  The SEC complaint also notes that AGCO senior managers received a red flag in 2004 indicating the prior bribery, in the form of questions from a news reporter.  AGCO’s 2004 fiscal year ended in March, and the SEC complaint doesn’t indicate whether the 2004 red flag came before or after that date, and therefore whether AGCO’s Audit Committee, had it known about the red flag, would have discussed it in the lead-up to filing the fiscal 2004 annual financial statements in March of 2005.

Now, it’s clear that there is no evidence to indicate that Herman Cain had anything to do with the bribery.  And it’s also clear that almost all, if not in fact all, of management’s failure to catch the bribery occurred before he joined the Board or the Audit Committee.  If the internal audit procedures were as fundamentally flawed as the SEC alleged, however, I do think it may be legitimate to ask what Herman Cain did to fix them during his tenure on the Audit Committee.  That’s not to say it makes him complicit or liable, but it’s at least more relevant than some of the stories we’ve seen in the silly season over the last six months.

A Macro Conference

Paul H. Rubin — 14 October 2011

I was invited to attend the Financial Times Global Conference “The View From the Top: The Future of America” and since I was in New York anyway I thought it would be fun.  I don’t hang around with macro types much, and even less with liberal macro types.  I will not summarize the entire conference, but a few observations:

  1. Reinhart-Rogoff was a hit, mentioned several times.  Aside from the merits of the book, I think people were trying to give Obama cover for the lack of a recovery.  R-R apparently says it takes an average of 7 years to get out of a financial crisis.
  2. The first speaker (Gene Sperling) was late, and Gillian Tett of the FT, the moderator, took some informal polls of the audience (mainly business journalists).  The results were pretty pessimistic: most thought there would be a double-dip, that the EU would lose at least one member, and that yields would not increase.
  3. Sperling (Director of the National Economic Council) spent a lot of time talking about how bad unemployment is and arguing for the President’s Jobs plan (which the Senate has already rejected).  Not much new to propose.
  4. Peter Orszag (former OMB Director, now with Citi) made a few interesting points.  He said that the Administration got the original forecast wrong, and did not realize that the recession was “L” shaped rather than “V” shaped.  He also predicted that middle class incomes will not return to their original level and that policy should not fool people into thinking they would.
  5. Several speakers (Laura Tyson of Berkeley and former CEA Chair; Steve Case, AOL founder) argued for better immigration laws (no quarrel there: the Republicans have got themselves into a terrible position on immigration).  Tyson in particular argued for more STEM (science, technology, engineering, mathematics) education.  I asked her if she thought the increasing gender imbalance in colleges (now about 2 women per man) was responsible for the STEM problem and she indicated that it might be part of the problem.  Really something worth further examination and some policy analysis.  Of course the immigration mess makes this problem worse since it is harder to import engineers from abroad.
  6. Someone (I think Steve Rattner, former Auto Czar) made the point that while the American economy is doing badly and unemployment is a real problem, American companies are doing very well, in part because of foreign earnings.  There were also several inconclusive discussions of a tax holiday for repatriation of foreign earnings.  Some said that this would be “unfair,” but others understood that future effects, not past fairness, were what mattered.  Not clear what the effects would be, however.
  7. A few mentions of Sarbanes-Oxley and Dodd-Frank, but mostly the role of regulation was ignored.  Health care was mentioned but not, I believe, Obamacare.  Everyone agreed that businesses were “afraid” to spend money but little discussion of the source of the fear.
  8. Most were not worried about conflict with China.  I asked about Chinese demographics (aging population, gender imbalance with too many males).  Whenever I hear discussions of China I raise this issue since people seem to ignore it and it is a serious issue.  Michael Spence (Nobel Laureate, now at NYU) said that China was in a position to establish a viable retirement program (no details) but that the gender issue was not one that was being dealt with.  There seemed to be almost envy of the ability of the Chinese to do what they wanted independent of the desires of the people.
  9. Laurence Fink of BlackRock made the interesting point that the current situation seems a lot like the 1970s, including the widespread pessimism.  Martin Wolf, Chief Economics Commentator of the FT, agreed.  But the lesson he drew was that we need more and wiser regulation.  I spoke with him briefly and indicated that I was in the Reagan Administration, and that the last time we got into a pessimistic mess like this, deregulation à la Reagan was the solution.  He rejected this approach.  But I am hopeful.

Yesterday at the Illinois Corporate Colloquium Steve Choi presented his paper (with Pritchard and Weichman), Scandal Enforcement at the SEC: Salience and the Arc of the Option Backdating Investigations.  Here’s the abstract:

We study the impact of scandal-driven media scrutiny on the SEC’s allocation of enforcement resources. We focus on the SEC’s investigations of option backdating in the wake of numerous media articles on the practice of backdating. We find that as the level of media scrutiny of option backdating increased, the SEC shifted its mix of investigations significantly toward backdating investigations and away from investigations involving other accounting issues. We test the hypothesis that the SEC pursued more marginal investigations into backdating as the media frenzy surrounding the practice persisted at the expense of pursuing more egregious accounting issues that did not involve backdating. Our event study of stock market reactions to the initial disclosure of backdating investigations shows that those reactions declined over our sample period. We also find that later backdating investigations are less likely to target individuals and less likely to be accompanied by a parallel criminal investigation. Looking at the consequences of the SEC’s backdating investigations, later investigations were more likely to be terminated or produce no monetary penalties. We find that the magnitude of the option backdating accounting errors diminished over time relative to other accounting errors that attracted SEC investigations.

As readers of this blog, and Ideoblog before it, will appreciate, this paper particularly resonated with me.  As I wrote in a large number of posts (e.g.), backdating was a molehill the media blew up into a mountain.  Now come Choi et al. with evidence that while the SEC was spending its scarce resources on this overblown molehill, it was ignoring real mountains (e.g., Madoff).

I found the paper overall quite persuasive.  I wasn’t entirely convinced by the evidence that the backdating cases were getting weaker.  In particular, stock price reactions may just indicate that the market was learning which companies were involved before the investigations were brought, and was gradually figuring out that backdating was not such a big deal.  But I was convinced by the evidence of the opportunity costs of the SEC’s backdating obsession: the otherwise inexplicable decline in investigations of serious non-backdating accounting problems.

As we discussed in the Colloquium, the paper reveals that there are agency costs not just in the backdating companies that were investigated but also in the agency that was doing the investigating.  Although it’s not clear exactly what moved the SEC to follow the media, there is at least some doubt about whether the SEC’s resource allocation decisions were in the public interest.

This calls attention to another set of agents — the ones in the media.  Why did the media love backdating so much?  As discussed in my Public Face of Scholarship, there are “demand” and “supply” explanations:  the public demands stories about cheating executives and/or journalists like to supply these stories.  David Baron, Persistent Media Bias, presents a supply theory emphasizing journalists’ anti-market bias.

Whatever the cause of media bias, when the media is influential its bias can result in bad public policy. SEC enforcement isn’t the only example. As I discuss in my article (at 1210-11, footnotes omitted):

Where interest groups are closely divided, the outcome of political battles may depend on how much voter support each side can enlist. This may depend on how journalists have portrayed the issue to the public. For example, the press is an important influence on corporate governance. One factor in the rapid passage of the Sarbanes-Oxley Act, the strongest federal financial regulation in seventy years, may have been the overwhelmingly negative coverage of business in the first half of 2002: seventy-seven percent of the 613 major network evening news stories on business concerned corporate scandals.

It’s not clear what can be done to better align SEC enforcement policy with the public interest.  Incentive compensation for SEC investigators?  Perhaps the only thing we can do (as with corporate crime) is to try to keep in mind when creating regulation that even if corporate agents may sometimes do the wrong thing, people don’t stop being people when they go to work for the government.

The semester is off to a bang.  I arrived at Stanford Monday to start teaching in the Law School and begin a research fellowship at the Hoover Institution.  Yesterday I hiked in the mountains overlooking the SF Bay.  Today I am flying back to DC (and blogging in flight, how cool is that) to testify Thursday before the House Committee on Financial Services alongside SEC Chairman Schapiro, former Chairman Pitt, and former Commissioner Paul Atkins on proposed legislation from Congressman Scott Garrett and Chairman Spencer Bachus to reform and reshape the SEC.

Part of the hearing, titled “Fixing the Watchdog: Legislative Proposals to Improve and Enhance the Securities and Exchange Commission,” will deal with the study on SEC organizational reform mandated by the Dodd-Frank Act and conducted by the Boston Consulting Group.  Frankly, I found it full of quotes from the consultant’s desk manual, with references to “no-regrets implementation,” “business process optimization” and “multi-faceted transformation.”  I believe the technical term is gobbledygook.

The remainder of the hearing will involve a discussion of the SEC Organizational Reform Act (or “Bachus Bill”) and the SEC Regulatory Accountability Act (or “Garrett Bill”).  The Bachus Bill proposes a number of organizational reforms, like breaking up the new Division of Risk, Strategy, and Financial Innovation to embed the economists there back into the various functional divisions.  The Garrett Bill seeks to strengthen the guiding principles originally formulated in the NSMIA amendments by elaborating on how the agency can meet its economic analysis burden in rule-making.

I thought I would give TOTM readers a sneak peek at my testimony.  I aim to make two key points.  First, sincere economic analysis is important.  SEC rules have consistently done a poor job of meeting the NSMIA’s mandate to consider the effect of new rules on efficiency, competition, and capital formation, and they will continue to do a poor job until the SEC hires more economists and gives them increased authority in the enforcement and rule-making processes.  Second, the SEC’s mission should include an explicit requirement that it consider the effect of new rules on the state-based system of business entity formation.

Here’s a sneak peek at my testimony for TOTM readers:

Chairman Bachus, Ranking Member Frank, and distinguished members of the Committee, it is a privilege to testify today.  My name is J.W. Verret.  I am an Assistant Professor of Law at Stanford Law School, where I teach corporate and securities law.  I also serve as a Fellow at the Hoover Institution and as a Senior Scholar at the Mercatus Center at George Mason University.  I am currently on leave from the George Mason Law School.

My testimony today will focus on two important and necessary reforms.

First, I will argue that clarifying the SEC’s legislative mandate to conduct economic analysis and a commitment of authority to economists on staff at the SEC are both vital to ensure that new rules work for investors rather than against them.  Second, I will urge that the SEC be required to consider the impact of new rules on the state-based system of business incorporation.

Every President since Ronald Reagan has requested that independent agencies like the SEC commit to sincere economic cost-benefit analysis of new rules.  Further, unlike many other independent agencies, the SEC is subject to a legislative mandate that it consider the effect of most new rules on investor protection, efficiency, competition, and capital formation.

The latter three principles have been interpreted as requiring a form of cost-benefit economic analysis that draws on empirical evidence, economic theory, and compliance cost data.  These tools help determine a rule’s impact on stock prices and stock-exchange competitiveness and measure the compliance costs that are passed on to investors.

Three times in the last ten years, private parties have successfully challenged SEC rules for failure to meet these requirements.  Across the three cases, no fewer than five distinguished jurists on the DC Circuit, appointed during administrations of both Republican and Democratic Presidents, found the SEC’s economic analysis wanting.  One failure might have been an aberration; three failures out of three challenges is a dangerous pattern.

Many SEC rules have treated the economic analysis requirements as an afterthought.  This is in part a consequence of the low priority the Commission places on economic analysis, as evidenced by the fact that economists have no significant authority in either the rule-making process or the enforcement process.

As an example of the level of analysis typically given to significant rule-making, consider the SEC’s final release implementing Section 404(b) of the Sarbanes-Oxley Act.  The SEC estimated that the rule would impose an annual cost of $91,000 per publicly traded company.  In fact, a subsequent SEC study five years later found average implementation costs for 404(b) of $2.87 million per company, more than thirty times the original estimate.

And that error concerns only estimates of direct costs.  The SEC gave no consideration whatsoever to the more important category of indirect costs, such as the rule’s impact on the volume of new offerings and IPOs on U.S. exchanges.

In Business Roundtable v. SEC alone, the SEC estimates that it dedicated over $2.5 million worth of staff time to a rule that was struck down.  An honest commitment by the SEC to empower economists in the rule-making process will be a vital first step toward ensuring that the mistakes of the proxy access rule are not replicated in future rules.

I also support the goal in H.R. 2308 to further elaborate on the economic analysis requirements.  I would suggest, in light of the importance and pervasiveness of the state-based system of corporate governance, that the bill include a provision requiring the SEC to consider the impact of new rules on the states when rule-making touches on issues of corporate governance.

The U.S. Supreme Court has noted that “No principle of corporation law and practice is more firmly established than a state’s authority to regulate domestic corporations.”

Delaware is one prominent example, serving as the state of incorporation for half of all publicly traded companies.  Its corporate code is so highly valued among shareholders that the mere fact of Delaware incorporation typically earns a publicly traded company a 2-8% increase in value.  Many other states also compete for incorporations, particularly New York, Massachusetts, California and Texas.

In order to fully appreciate this fundamental characteristic of our system, I would urge adding the following language to H.R. 2308:

“The Commission shall consider the impact of new rules on the traditional role of states in governing the internal affairs of business entities and whether it can achieve its stated objective without preempting state law.”

The SEC can comply by taking into account commentary from state governors and state secretaries of state during the open comment period.  It can minimize the preemptive effect of new rules by including, where appropriate, a reference to state law similar to the one already found in Rule 14a-8.  It can also commit to a process for seeking guidance on state corporate law by creating a mandatory state court certification procedure similar to that used by the SEC in the AFSCME v. AIG case in 2008.

I thank you again for the opportunity to testify and I look forward to answering your questions.

Six years ago, Henry Butler and I wrote about what we called the Sarbanes-Oxley Debacle.  Well, it’s still a debacle after all these years, and it is still having significant effects on business and international competition.

Yesterday’s WSJ opined, concerning the potential NYSE/Deutsche Börse merger, that

whoever ends up owning the iconic trading venue, the question is whether Washington will allow any U.S. stock exchange to be an attractive destination for young companies. It’s clear to most stock-exchange watchers that no business combination can relieve the burden that the 2002 Sarbanes-Oxley (Sarbox) law places on firms seeking to join the public markets. This is no doubt one of the reasons that Mr. Niederauer sees advantages in a merger with a foreign partner that has most of its business overseas.

The editorial suggests expanding Dodd-Frank’s exemption from SOX 404(b) from companies with less than $75 million in public float to those with less than a $250 million float. It relies on the SEC’s own 2009 study showing that for the vast majority of companies the costs of SOX compliance outweigh the benefits.

Ironically, the same issue of the WSJ reports that Chuck Schumer is “favoring the German deal as the best way to protect New York.” Schumer says: “My sole motivation here is keeping New York the No. 1 financial center in the world, and I will be guided by that criteria far above anything else in making a decision.”

Schumer is supposedly worried about Chicago’s rise in the competing derivatives market.  But Roger Altman, whose Evercore Partners is advising Nasdaq on its proposal to take over the NYSE, says “the greatest threat to New York is not Chicago. It’s Hong Kong and China, by a mile.”

All of which suggests that Schumer should have thought about all this back when he was helping to provide an entrée for these foreign competitors by backing SOX and Dodd-Frank.  There’s still time for him to read Butler’s and my book.

A couple of months ago I asked, “what happened to IPOs?”  Today’s WSJ asks almost exactly the same question and gets the same answer:

The elephant in the room is the 2002 Sarbanes-Oxley law, which triggered billions of dollars in new compliance costs for public companies.

* * * The question for companies now, as ever, is whether the benefits of going public are worth the costs. It’s indisputable that America has raised those costs in recent years. In addition to Sarbox’s Section 404, Congress has made it easier for big labor to get proxy access, increased the opportunities for lawsuits, too often turned reporting mistakes into major fines or potential felonies, and meddled into corporate pay decisions. None of these make going public more attractive.

With America still suffering close to 9% unemployment, it’s time for both parties to bring the cost/benefit calculus for IPOs back into balance.