I have long held reservations about corporate governance research that hinges on event studies. (An event study is “an analysis of whether there was a statistically significant reaction in financial markets to past occurrences of a given type of event that is hypothesized to affect public firms’ market values.” An example of the sort of study that makes me a bit nervous is the study of derivative lawsuits done by Professors Fischel and Bradley in their 1986 paper titled “The Role of Liability Rules and the Derivative Suit in Corporate Law.”) I have been leery of sharing my views regarding event studies, however, because it seems that most folks in my area of the academy have no similar reservations. Discretion being the better part of valor, I would prefer not to be viewed as standing alone off in left field. After spending some time a few afternoons ago chatting about my concerns with my mathematician friend John Armstrong, however, I am emboldened to share my thoughts here.
In a nutshell, I worry that event studies as traditionally conducted in the context of corporate governance undervalue the long-term implications and cumulative effects of various events. I worry that, relying on event studies, we might be quick to undervalue activity that does not immediately generate a market reaction but that, in the bigger picture, lays the foundation for achieving a meaningful goal.
For purposes of discussion here, let us use an event study designed as follows: A researcher wants to know what impact, if any, an institutional shareholder announcing its intention to withhold its affirmative vote on a slate of directors at an annual meeting has on the market. In the typical event study, the researcher would look at the stock price of the company at issue on the day before the institutional shareholder’s announcement, and again on the day after. If the stock price moved only minimally (in a way that was not “abnormal” for the market), the researcher would conclude that the institutional shareholder’s announcement did not matter to the market. If the researcher were being thorough, I suppose the researcher might also look at how many shareholders at the next annual meeting, a week hence, actually withheld their votes. If the number withholding was not abnormal, I imagine the researcher would believe his view of the irrelevance of the institutional investor’s pronouncement confirmed.
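The before-and-after comparison described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular study's method: it fits a simple market model over a hypothetical pre-event estimation window (all numbers below are fabricated for illustration), then asks whether the event-day return deviates from what the model predicts.

```python
# Sketch of a single-firm event study with made-up illustrative numbers.
# Returns are daily fractional returns. The "estimation window" precedes
# the announcement; the "event day" is the day after it.

def market_model(firm_returns, market_returns):
    """Estimate alpha and beta by ordinary least squares."""
    n = len(market_returns)
    mean_m = sum(market_returns) / n
    mean_f = sum(firm_returns) / n
    cov = sum((m - mean_m) * (f - mean_f)
              for m, f in zip(market_returns, firm_returns)) / (n - 1)
    var = sum((m - mean_m) ** 2 for m in market_returns) / (n - 1)
    beta = cov / var
    alpha = mean_f - beta * mean_m
    return alpha, beta

def abnormal_return(firm_ret, market_ret, alpha, beta):
    """Actual return minus the return the market model predicts."""
    return firm_ret - (alpha + beta * market_ret)

# Hypothetical estimation-window data (a real study would use ~120 days):
est_market = [0.001, -0.002, 0.0005, 0.003, -0.001, 0.002, 0.0, -0.0015]
est_firm   = [0.0012, -0.0025, 0.0004, 0.0031, -0.0008, 0.0019, 0.0002, -0.0013]

alpha, beta = market_model(est_firm, est_market)

# Event day: the firm fell 0.1% while the broad market fell 0.2%.
ar = abnormal_return(-0.001, -0.002, alpha, beta)
print(f"abnormal return on event day: {ar:.4%}")
```

A real study would then test whether the abnormal return is large relative to the spread of the estimation-window residuals; a small abnormal return is what leads the researcher to conclude the announcement “did not matter.”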
But what troubles me is that this ignores the long view. Stay with me: Assume that, 11 months after the above-mentioned institutional investor airs its concerns, an article appears in the WSJ reporting that the much-loved, long-serving CEO of the company at issue was arrested for drunk driving. Assume that, the day after the drunk driving announcement, the stock price of the company at issue dropped 20%. A researcher with an event study affinity might say that the drunk driving announcement moved the market. But what of the notion that the announcement PLUS the recall of the institutional investor’s concerns actually cumulatively moved the market? How do event studies account for time lag? How do event studies account for the accumulation of information? Surely just the announcement that the CEO was arrested for drunk driving should not, in and of itself, move the market. But I can easily imagine situations where that might just be the straw that broke the camel’s back, so to speak. Yet your typical event study would not account for that, would it? Moreover, what if, at the annual meeting several weeks later, an abnormal number of investors withheld their votes for the nominated slate of directors? Would we attribute that to the drunk driving incident? I cannot imagine we would. Yet would our scholarly memories be long enough to remember to attribute it to the institutional investor’s disavowal of faith a year prior?
To be clear: I much admire the scholars in our field who are aggressively using all research tools, event studies and otherwise. I share my unease with event studies for what it is worth, which might be nothing. (My hope, however, is that a useful exchange of ideas will occur here.)
Elizabeth,
Thanks for bringing up such a provocative topic. It sounds to me like your issue is not so much with event studies, but with the over-reliance on the immediate conclusions of those studies.
To stick with your example, it seems that the event study has produced useful information – that the shareholder action had no immediate impact on share prices. That information is true, and will remain true (and potentially valuable) even if some later event, in the context of a company where the shareholder action has already taken place, does cause share price to move.
The issue arises when prices move following some later action (the DWI in your example), and observers fail to consider the possible role of the earlier action in the price movement. It is not the case that the event study undervalues the cumulative impact of many events. It is that it is not designed to measure such an impact. As I read your post, your concern is not with the study – it is with the scholar who uses the conclusions of the study to argue that event A could not have had a role in a price movement at time B.
That seems like a legitimate concern, and one that does not fundamentally question the usefulness of event studies.
Trey
Elizabeth, isn’t that what statistical significance for multiple dependent factors is about? Here is an example from my own research; you can tell me if you think it’s responsive.
I found that companies that (a) pay their executives based on profit, (b) don’t use budget-based target setting, and (c) provide unlimited bonus opportunities tend to outperform their sector peers (at p
M.Hodak,
But what about the notion that even if we are dealing with a REGULAR phenomenon, such that we can put together a sample size of 500 large companies or some such, there is STILL always a dissimilarity between firms? No firm is ever in the exact same position as another firm. No firm ever has a context that lines up EXACTLY with another firm’s, such that you can never really tell whether one isolated thing (a CEO resignation, for example) moved the market, or whether the market was moved by the accumulation of events leading up to it, with that last event as the straw that broke the camel’s back.
I think your skepticism is justified on the grounds of sample sizes. How many public company CEOs have been arrested for ANYTHING in, say, the last 20 years, let alone in sufficient numbers to create subject and control groups likely to yield statistically significant results on any hypothesis?
Like you, no doubt, I’ve seen some extremely clever study designs that could tease out good results with limited samples. But that’s very hard to do, and I rarely see it done well in the governance studies I tend to review. That’s not a failing of the event study technique; it’s a shortcoming of stretching to apply that technique to relatively infrequent phenomena.
M. Hodak, thanks for the comment.
But I continue to hold my concern because I wonder if we could ever FIND a comparable pool to use. If we look at stock price reactions to other CEO arrests, could we control for aspects of THOSE stock market reactions that were due to other, lingering variables (such as the CEO having fired the CFO a few months earlier or something like that)? My concern is that we can never really control for inherent dissimilarities in the context of any given event for any given firm, can we? (I appreciate your feedback, so please do feel free to respond and explain further how my objection might also be meritless….)
I don’t think your objection is about event studies, per se. One could design an event study that accounts for such context with contingent variables. For example, one could look at stock price reaction to CEO arrests given a prior institutional investor action (versus the absence of such a contingent event). No example comes to mind, but I test contingent hypotheses in other kinds of studies quite often.
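The contingent design described above boils down to a comparison of conditional averages. As a minimal sketch with entirely fabricated numbers: suppose we had event-day abnormal returns for CEO-arrest announcements, split by whether an institutional investor had previously voiced concerns. The contingent effect is the difference between the two group means.

```python
# Minimal sketch (fabricated numbers) of a contingent-variable comparison:
# does the market react differently to a CEO arrest when an institutional
# investor has previously voiced concerns about the firm?

# Hypothetical event-day abnormal returns for two groups of firms:
ar_with_prior_concern    = [-0.18, -0.22, -0.15, -0.20]  # arrest after investor action
ar_without_prior_concern = [-0.03, -0.05, -0.01, -0.04]  # arrest, no prior action

def mean(xs):
    return sum(xs) / len(xs)

# The "contingent" effect is the gap between the two conditional means:
effect = mean(ar_with_prior_concern) - mean(ar_without_prior_concern)
print(f"incremental reaction given prior investor action: {effect:.2%}")
```

A real study would test whether that gap is statistically distinguishable from zero (e.g., with an interaction term in a regression); a significant gap would be evidence that the earlier, seemingly “irrelevant” investor action did matter, just not immediately.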