Gelbach, Helland and Klick on Single Firm, Single Event Studies

Cite this Article
Joshua D. Wright, Gelbach, Helland and Klick on Single Firm, Single Event Studies, Truth on the Market (November 11, 2009), https://truthonthemarket.com/2009/11/11/gelbach-helland-and-klick-on-single-firm-single-event-studies/

Larry Ribstein points to the new paper from Gelbach, Helland and Klick on Valid Inference in Single Firm, Single Event Studies.  This is an important paper with implications for finance, securities litigation and antitrust, where event studies are frequently used as economic expert evidence.  Ribstein gives a good, non-technical explanation of its contribution:

Essentially what’s happening is that single-firm event studies are determining the existence of abnormal returns against an assumption that the firm’s returns are “normally” distributed under a bell-shaped curve. “Abnormal” refers to returns located around the “bell’s” right and left sides. The problem is that returns are often not normally distributed, and you can’t determine if the observed returns are abnormal if you don’t know the shape of the curve. The paper proposes “a very simple but statistically sound alternative,” the “SQ” test, which does not present the problem of assuming a normal distribution.

The slightly more technical version is as follows.  The standard approach to the event study compares t-ratios to critical values drawn from the normal distribution.  So if we are interested in abnormal returns, and those returns really do come from a normal distribution, the standard approach will perform pretty well.  But the evidence is that abnormal returns are non-normal, and GHK present a lot of evidence on that score.  One might also justify the standard approach in a context with a large number of events, where averaging can restore approximate normality.  But in the law and economics literature, event studies with small numbers of firms and a small number of events (often one) are common, and of policy importance.  The primary contribution of GHK is to offer an alternative approach with better statistical properties for these types of studies.
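A quick numerical illustration of why this matters (the Student-t(3) example is my own, not from the paper): rescale a heavy-tailed t(3) distribution to unit variance, so only its shape differs from the standard normal, and its true 5% quantile no longer sits at the normal cutoff of -1.645.

```python
import numpy as np

# The standard test rejects at the one-sided 5% level when the
# statistic falls below -1.645, the N(0,1) 0.05-quantile. If abnormal
# returns are non-normal, that cutoff sits at the wrong quantile.
rng = np.random.default_rng(0)

# Student-t(3) has variance 3; dividing by sqrt(3) gives unit variance,
# so only the *shape* differs from the standard normal.
draws = rng.standard_t(df=3, size=1_000_000) / np.sqrt(3.0)

normal_crit = -1.645                   # N(0,1) 5% quantile
t3_crit = np.quantile(draws, 0.05)     # empirical 5% quantile of t(3)

print(f"normal 5% cutoff: {normal_crit:.3f}")
print(f"t(3)   5% cutoff: {t3_crit:.3f}")  # noticeably less extreme
```

For this distribution the true 5% cutoff is around -1.36, so judging significance against -1.645 misstates the rejection region; for skewed distributions the error can go the other way.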

The SQ test they propose is related to a Chow test.  Recall that a Chow test is aimed at detecting a structural break in the relationship between Y and X across two sets of observations, given normality of the regression residuals.  A Chow test focuses on testing whether the coefficients are the same across both periods of observations.  The logic of the GHK test is to apply a sort of structural break test aimed at whether the abnormal returns before and after the break come from the same distribution.  The SQ test supplies a test statistic and uses critical values estimated from the empirical distribution of fitted residuals from earlier in the sample.
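For concreteness, here is the classic Chow calculation on made-up data with a deliberate slope break (the data-generating process is hypothetical; this is the textbook test the SQ procedure is being contrasted with, not GHK's own method):

```python
import numpy as np

# Hypothetical two-regime data: the slope on x jumps from 2 to 6.
rng = np.random.default_rng(1)
n = 50
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y1 = 1.0 + 2.0 * x1 + rng.normal(0, 0.5, n)
y2 = 1.0 + 6.0 * x2 + rng.normal(0, 0.5, n)

def rss(x, y):
    """Residual sum of squares from an OLS fit of y on (1, x)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Chow test: compare the pooled fit against separate per-regime fits.
k = 2  # parameters per regression (intercept, slope)
rss_pooled = rss(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
rss_split = rss(x1, y1) + rss(x2, y2)
F = ((rss_pooled - rss_split) / k) / (rss_split / (2 * n - 2 * k))

# Under normal errors, F ~ F(k, 2n - 2k) when there is no break;
# a large F rejects the no-break null.
print(f"Chow F-statistic: {F:.1f}")
```

The normality assumption enters in the last step: the F distribution of the statistic, and hence its critical values, relies on normal residuals, which is exactly the dependence the SQ test's empirically estimated critical values avoid.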

GHK have a great example in the text of the paper discussing how the SQ test would apply in practice:

To illustrate using the example on which we focus below, suppose that a firm discloses that its past quarterly earnings were substantially below the level claimed in an earlier earnings statement. A class of plaintiffs file an action under SEC rule 10b-5, citing standard fraud-on-the-market arguments. In light of Dura’s requirement that plaintiffs establish loss causation pursuant to the firm’s corrective disclosure, an expert witness wants to test the null hypothesis that the corrective disclosure had no effect on a firm’s stock price; the alternative hypothesis is that the event has reduced the value of the firm’s stock.

To use our method, the witness would obtain data on the security’s daily return and the market return for both the event date and a set of, say, n = 99 pre-event observations. She would then use OLS estimation to estimate the firm’s beta and the coefficient on an event dummy, with the latter coefficient being the estimated event effect. All of these steps are taken in both the standard approach and in ours. To implement our test, the analyst would then calculate the fitted residuals from the estimated model, sort them, and find the 5th most negative value among the non-event dates. She would reject the null hypothesis if the coefficient on the event dummy were less than or equal to this value. Remarkably, the Type I error rate of this test converges to 0.05, regardless of the shape of the true distribution of abnormal returns.

The intuition for this result is simple. In a sample of 99 randomly drawn variables, the fifth most negative element is the sample 0.05-quantile. It has long been known that sample quantiles are consistent estimators of population quantiles, so that the sample 0.05-quantile of a large collection of abnormal returns is an excellent estimate of the 0.05-quantile of the true underlying probability distribution for the abnormal returns. As we discuss below, this quantile is the key estimand in assessing whether a single event on a known date significantly reduced a firm’s value.  While the details below involve some technical points, this simple example illustrates how easy our sample quantile (SQ) test is to use in practice.
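The quoted recipe is mechanical enough to sketch in a few lines of code. This is my own simulation, not the paper's: the market-model parameters and the injected 10% event-day drop are illustrative assumptions.

```python
import numpy as np

# Simulated sample: 99 pre-event days plus one event day.
rng = np.random.default_rng(2)
n_pre = 99
mkt = rng.normal(0.0005, 0.01, n_pre + 1)    # market returns
eps = rng.normal(0.0, 0.01, n_pre + 1)       # idiosyncratic noise
firm = 0.0002 + 1.2 * mkt + eps              # market model, beta = 1.2
firm[-1] += -0.10                            # assumed event-day drop of 10%

# OLS of the firm's return on the market return plus an event dummy;
# the dummy coefficient is the estimated event effect.
dummy = np.zeros(n_pre + 1)
dummy[-1] = 1.0
X = np.column_stack([np.ones(n_pre + 1), mkt, dummy])
beta, *_ = np.linalg.lstsq(X, firm, rcond=None)
event_coef = beta[2]

# SQ step: sort the fitted residuals from the 99 non-event days and
# take the 5th most negative as the critical value (the sample
# 0.05-quantile). Reject if the event coefficient is at or below it.
resid = firm - X @ beta
crit = np.sort(resid[:-1])[4]
reject = event_coef <= crit

print(f"event effect: {event_coef:.4f}  "
      f"critical value: {crit:.4f}  reject: {reject}")
```

With a 10% drop against roughly 1% daily noise, the estimated effect lands far below the sample 0.05-quantile of the pre-event residuals, so the null of no event effect is rejected; no normality assumption was used to set the cutoff.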

What interested me most in this paper was thinking about its application to antitrust event studies.  There is now a substantial literature on event studies in antitrust, which frequently focuses either on antitrust litigation or on mergers.  For example, McAfee and Williams (1988) suggest that the standard event study methodology cannot detect anticompetitive mergers, using data from a horizontal merger “known” to be anticompetitive.   Bittlingmayer and Hazlett (2000) use the event study methodology to test the reaction of financial markets to the antitrust litigation against Microsoft.  Eckbo (1983), Garbade, Silber and White (1982), Werden and Williams (1989) and Eckbo and Wier (1985) are all examples in this literature.  There is of course now also the debate on the value and appropriate use of merger retrospectives to evaluate merger policy (see, e.g., Carlton 2008).  Weinberg offers an excellent and up-to-date literature review.  There are also examples of litigated merger decisions that rely on event study evidence.

All of this raises important issues about antitrust policy generally, but also about expert evidence.  Ribstein raises the possibility of judges using their own experts to overcome the bias of hired experts tied to their own standard theories.  Oddly enough, the rate at which judges appoint their own experts in antitrust cases is infinitesimal despite the growth in the econometric and economic sophistication of the evidence over the last couple of decades, so I’m a bit skeptical of this solution in that context.   But maybe I shouldn’t be.  In the meantime, I think the GHK paper raises all sorts of interesting issues for event study analysis in antitrust and more generally.

Enjoy the paper.