
I posted this originally on my own blog, but decided to cross-post here since Thom and I have been blogging on this topic.

“The U.S. stock market is having another solid year. You wouldn’t know it by looking at the shares of companies that manage money.”

That’s the lead from Charles Stein on Bloomberg’s Markets page today. Stein goes on to offer three possible explanations: 1) a weary bull market, 2) a move toward more active stock-picking by individual investors, and 3) increasing pressure on fees.

So what has any of that to do with the common ownership issue? A few things.

First, it shows that large institutional investors must not be very good at harvesting the benefits of the non-competitive behavior they encourage among the firms they invest in–if you believe they actually do that in the first place. In other words, if you believe common ownership is a problem because CEOs are enriching institutional investors by softening competition, you must admit they’re doing a pretty lousy job of capturing that value.

Second, and more importantly–as well as more relevantly–the pressure on fees has led money managers to emphasize low-cost passive index funds. Indeed, among the firms doing well, according to the article, is BlackRock, whose index-tracking iShares exchange-traded fund business “won $20 billion.” In an aggressive move, Fidelity has introduced a total of four zero-fee index funds as a way to draw fee-conscious investors. These index-tracking funds are exactly the type of inter-industry diversified funds that negate any incentive for competition softening in any one industry.

Finally, this also illustrates the cost to the investing public of the limits on common ownership proposed by the likes of Einer Elhauge, Eric Posner, and Glen Weyl. Were these types of proposals in place, investment managers could not offer diversified index funds that include more than one firm’s stock from any industry with even a moderate level of market concentration. Given that competitive forces are pushing investment companies to increase their offerings of such low-cost index funds, any regulatory proposal that precludes those possibilities is sure to harm the investing public.

Just one more piece of real evidence that common ownership is not only not a problem, but that the proposed “fixes” are.

“Calm Down about Common Ownership” is the title of a piece Thom Lambert and I published in the Fall 2018 issue of Regulation, which just hit online. The article is a condensed version of our recent paper, “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” In short, we argue that concern about common ownership lacks a theoretically sound foundation and is built upon faulty empirical support. We also explain why proposed “fixes” would do more harm than good.

Over the past several weeks we wrote a series of blog posts here that summarize or expand upon different parts of our argument. To pull them all into one place:

At the heart of the common ownership issue in the current antitrust debate is an empirical measure, the Modified Herfindahl-Hirschman Index (MHHI), which researchers have used to correlate patterns of common ownership with measures of firm behavior and performance. In an accompanying post, Thom Lambert provides a great summary of just what the MHHI, and more specifically the MHHIΔ, is and how it can be calculated. I’m going to free-ride off Thom’s effort, so if you’re not very familiar with the measure, I suggest you start here and here.

There are multiple problems with the common ownership story and with the empirical evidence proponents of stricter antitrust enforcement point to in order to justify their calls to action. Thom and I address a number of those problems in our recent paper on “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” However, one problem we don’t take on in that paper is the nature of the MHHIΔ itself. More specifically, what is one to make of it and how should it be interpreted, especially from a policy perspective?

The Policy Benchmark

The benchmark for discussion is the original Herfindahl-Hirschman Index (HHI), which has been part of antitrust analysis for decades. The HHI is calculated by summing the squared market share of each firm. Depending on whether you express shares as whole-number percentages or as decimal fractions, the sum may need to be multiplied by 10,000. For instance, for two firms that split the market evenly, the HHI could be calculated either as:

HHI = 50² + 50² = 5,000, or
HHI = (0.50² + 0.50²) × 10,000 = 5,000
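
In code, the calculation is a one-liner. Here’s a quick sketch of my own, with shares entered as whole-number percentages:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index from market shares expressed as whole-number percentages."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([50, 50]))     # two equal firms      -> 5000
print(hhi([100]))        # pure monopoly        -> 10000
print(hhi([10] * 10))    # ten identical firms  -> 1000
print(hhi([1] * 100))    # 100 identical firms  -> 100
```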

It’s a pretty simple exercise to see that one of the useful properties of HHI is that it is naturally bounded between 0 and 10,000. In the case of a pure monopoly that commands the entire market, the value of HHI is 10,000 (100²). As the number of firms increases and market shares approach very small fractions, the value of HHI asymptotically approaches 0. For a market with 10 firms that evenly share the market, for instance, HHI is 1,000; for 100 identical firms, HHI is 100; for 1,000 identical firms, HHI is 1. As a result, we know that when HHI is close to 10,000, the industry is highly concentrated in one firm; and when the HHI is close to zero, there is no meaningful concentration at all. Indeed, the Department of Justice’s Horizontal Merger Guidelines make use of this property of the HHI:

Based on their experience, the Agencies generally classify markets into three types:

  • Unconcentrated Markets: HHI below 1500
  • Moderately Concentrated Markets: HHI between 1500 and 2500
  • Highly Concentrated Markets: HHI above 2500

The Agencies employ the following general standards for the relevant markets they have defined:

  • Small Change in Concentration: Mergers involving an increase in the HHI of less than 100 points are unlikely to have adverse competitive effects and ordinarily require no further analysis.
  • Unconcentrated Markets: Mergers resulting in unconcentrated markets are unlikely to have adverse competitive effects and ordinarily require no further analysis.
  • Moderately Concentrated Markets: Mergers resulting in moderately concentrated markets that involve an increase in the HHI of more than 100 points potentially raise significant competitive concerns and often warrant scrutiny.
  • Highly Concentrated Markets: Mergers resulting in highly concentrated markets that involve an increase in the HHI of between 100 points and 200 points potentially raise significant competitive concerns and often warrant scrutiny. Mergers resulting in highly concentrated markets that involve an increase in the HHI of more than 200 points will be presumed to be likely to enhance market power. The presumption may be rebutted by persuasive evidence showing that the merger is unlikely to enhance market power.

Just by way of reference, an HHI of 2500 could reflect four firms sharing the market equally (i.e., 25% each), or it could be one firm with roughly 49% of the market and 51 identical small firms sharing the rest evenly.
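
For concreteness, here is a rough sketch of the screening logic in the excerpt above; the thresholds are the Agencies’, but the function itself is just my own shorthand:

```python
def merger_screen(post_merger_hhi, delta_hhi):
    """Apply the 2010 Horizontal Merger Guidelines screening thresholds quoted above."""
    if delta_hhi < 100:
        return "small change in concentration: unlikely to have adverse effects"
    if post_merger_hhi < 1500:
        return "unconcentrated market: unlikely to have adverse effects"
    if post_merger_hhi < 2500:
        return "moderately concentrated: potentially significant concerns, warrants scrutiny"
    if delta_hhi <= 200:
        return "highly concentrated: potentially significant concerns, warrants scrutiny"
    return "highly concentrated, large increase: presumed likely to enhance market power"

print(merger_screen(post_merger_hhi=2600, delta_hhi=250))   # -> presumed likely to enhance market power
```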

Injecting MHHIΔ Into the Mix

MHHI is intended to account for both the product market concentration among firms captured by the HHI, and the common ownership concentration across firms in the market measured by the MHHIΔ. In short, MHHI = HHI + MHHIΔ.

As Thom explains in great detail, MHHIΔ attempts to measure the combined effects of the relative influence of shareholders that own positions across competing firms on management’s strategic decision-making and the combined market shares of the commonly-owned firms. MHHIΔ is the measure used in the various empirical studies allegedly demonstrating a causal relationship between common ownership (higher MHHIΔs) and the supposed anti-competitive behavior of choice.

Some common ownership critics, such as Einer Elhauge, have taken those results and suggested modifying antitrust rules to incorporate the MHHIΔ in the HHI guidelines above. For instance, Elhauge writes (p. 1303):

Accordingly, the federal agencies can and should challenge any stock acquisitions that have produced, or are likely to produce, anti-competitive horizontal shareholdings. Given their own guidelines and the empirical results summarized in Part I, they should investigate any horizontal stock acquisitions that have created, or would create, a ΔMHHI of over 200 in a market with an MHHI over 2500, in order to determine whether those horizontal stock acquisitions raised prices or are likely to do so.

Elhauge, like many others, couches his discussion of MHHI and MHHIΔ in the context of HHI values, as though the additive nature of MHHI means such a context makes sense. And if the examples are carefully chosen, the numbers even seem to make sense. For instance, even in our paper (page 30), we give a few examples to illustrate some of the endogeneity problems with MHHIΔ:

For example, suppose again that five institutional investors hold equal stakes (say, 3%) of each airline servicing a market and that the airlines have no other significant shareholders.  If there are two airlines servicing the market and their market shares are equivalent, HHI will be 5000, MHHI∆ will be 5000, and MHHI (HHI + MHHI∆) will be 10000.  If a third airline enters and grows so that the three airlines have equal market shares, HHI will drop to 3333, MHHI∆ will rise to 6667, and MHHI will remain constant at 10000.  If a fourth airline enters and the airlines split the market evenly, HHI will fall to 2500, MHHI∆ will rise further to 7500, and MHHI will again total 10000.
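
Those numbers are easy to verify with the standard MHHIΔ formula that Thom walks through. When every investor holds an identical stake in every airline (and there are no other large holders), the cross-ownership ratio in each MHHIΔ term equals one, so MHHIΔ collapses to the sum of the cross products of market shares–that is, 10,000 minus HHI. A quick back-of-the-envelope check of my own:

```python
# Symmetric common ownership: the cross-ownership ratio in each MHHI-delta term is 1,
# so MHHI-delta is just the sum of all cross products of market shares (10,000 - HHI).
for n in (2, 3, 4):
    shares = [100 / n] * n
    hhi = sum(s ** 2 for s in shares)
    mhhi_delta = sum(sj * sk for j, sj in enumerate(shares)
                     for k, sk in enumerate(shares) if j != k)
    print(n, round(hhi), round(mhhi_delta), round(hhi + mhhi_delta))
# -> 2 5000 5000 10000 | 3 3333 6667 10000 | 4 2500 7500 10000
```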

But do MHHI and MHHI∆ really fit so neatly into the HHI framework? Sadly–and worryingly–no, not at all.

The Policy Problem

There seems to be a significant problem with simply imposing MHHIΔ into the HHI framework. Unlike HHI, from which we can infer something about the market based on the nominal value of the measure, MHHIΔ has no established intuitive or theoretical grounding. In fact, MHHIΔ has no intuitively meaningful mathematical boundaries from which to draw inferences about “how big is big?”, a fundamental problem for antitrust policy.

This is especially true within the range of cross-shareholding values we’re talking about in the common ownership debate. To illustrate just how big a problem this is, consider a constrained optimization of MHHI based on parameters that are not at all unreasonable relative to hypothetical examples cited in the literature:

  • Four competing firms in the market, each constrained to a market share of at least 5%, with the shares summing to 1 (or 100%).
  • Five institutional investors, each of which can own no more than 5% of the outstanding shares of any individual firm, with no restrictions across firms.
  • The remaining outstanding shares are assumed to be diffusely owned (i.e., no other large shareholder in any firm).

With only these modest restrictions on market share and common ownership, what’s the maximum potential value of MHHI? A mere 26,864,516,491, with an MHHI∆ of 26,864,513,774 and HHI of 2,717.

That’s right, over 26.8 billion. To reach such an astronomical number, what are the parameter values? The four firms split the market with 33, 31.7, 18.3, and 17% shares, respectively. Investor 1 owns 2.6% of the largest firm (by market share) while Investors 2-5 each own between 4.5 and 5% of the largest firm. Investors 1 and 2 own 5% of the smallest firm, while Investors 3 and 4 own 3.9% and Investor 5 owns a minuscule (0.0006%) share. Investor 2 is the only investor with any holdings (a tiny 0.0000004% each) in the two middling firms. These are not unreasonable numbers by any means, but the MHHI∆ surely is–especially from a policy perspective.
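
To see where a number like that comes from, consider a single pairwise term of the MHHI∆ calculation under stakes of my own choosing, in the same spirit as the configuration described above (the published values are rounded, so this is an illustration of the mechanism rather than a reconstruction of the reported optimum). When a firm’s only recorded institutional holder has a vanishingly small stake, the ownership-concentration denominator for that firm is nearly zero and the term explodes:

```python
# One ordered pair is enough to see the problem. Suppose Firm B's only recorded
# institutional holder is an investor with a 0.0000008% stake (8e-9 as a fraction),
# and that same investor also holds 5% of Firm A. Market shares: A = 33%, B = 31.7%.
s_a, s_b = 33.0, 31.7
cross_ownership = 0.05 * 8e-9        # sum over investors of (stake in B) x (stake in A)
own_concentration_b = (8e-9) ** 2    # sum over investors of (stake in B) squared
term = s_b * s_a * cross_ownership / own_concentration_b
print(f"{term:,.0f}")                # roughly 6.5 billion HHI "points" from this single pair
```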

So if MHHI∆ can range from near zero to as much as 26.8 billion within reasonable ranges of market share and shareholdings, what should we make of Elhauge’s proposal that mergers be scrutinized for increasing MHHI∆ by 200 points if the MHHI is 2,500 or more? We argue that such an arbitrary policy model is not only unfounded empirically, but is completely devoid of substantive reason or relevance.

The DOJ’s Horizontal Merger Guidelines above indicate that the antitrust agencies adopted the HHI benchmarks for review “[b]ased on their experience.” In the 1982 and 1984 Guidelines, the agencies adopted HHI standards of 1,000 and 1,800, compared to the current 1,500 and 2,500 levels, in determining whether an industry is concentrated and a merger deserves additional scrutiny. These changes reflect decades of case reviews relating market structure to likely competitive behavior and consumer harm.

We simply do not know enough yet empirically about the relation between MHHI∆ and benchmarks of competitive behavior and consumer welfare to make any intelligent policies based on that metric–even if the underlying argument had any substantive theoretical basis, which we doubt. This is just one more reason we believe the best response to the common ownership problem is to do nothing, at least until we have a theoretically, and empirically, sound basis on which to make intelligent and informed policy decisions and frameworks.

As Thom previously posted, he and I have a new paper explaining The Case for Doing Nothing About Common Ownership of Small Stakes in Competing Firms. Our paper is a response to cries from the likes of Einer Elhauge and of Eric Posner, Fiona Scott Morton, and Glen Weyl, who have called for various types of antitrust action to rein in what they claim is an “economic blockbuster” and “the major new antitrust challenge of our time,” respectively. This is the first in a series of posts that will unpack some of the issues and arguments we raise in our paper.

At issue is the growth in the incidence of common ownership across firms within various industries. In particular, institutional investors with broad portfolios frequently report owning small stakes in a number of firms within a given industry. Although small, these stakes may still represent large block holdings relative to other investors. This intra-industry diversification, critics claim, changes the managerial objectives of corporate executives from aggressively competing to increase their own firm’s profits to tacitly colluding to increase industry-level profits instead. The reason for this change is that competition by one firm comes at a cost of profits from other firms in the industry. If investors own shares across firms, then any competitive gains in one firm’s stock are offset by competitive losses in the stocks of other firms in the investor’s portfolio. If one assumes corporate executives aim to maximize total value for their largest shareholders, then managers would have an incentive to soften competition against firms with which they share common ownership. Or so the story goes (more on that in a later post).

Elhauge and Posner, et al., draw their motivation for new antitrust offenses from a handful of papers that purport to establish an empirical link between the degree of common ownership among competing firms and various measures of softened competitive behavior, including airline prices, banking fees, executive compensation, and even corporate disclosure patterns. The paper of most note, by José Azar, Martin Schmalz, and Isabel Tecu and forthcoming in the Journal of Finance, claims to identify a causal link between the degree of common ownership among airlines competing on a given route and the fares charged for flights on that route.

Measuring common ownership with MHHI

Azar, et al.’s airline paper uses a metric of industry concentration called a Modified Herfindahl–Hirschman Index, or MHHI, to measure the degree of industry concentration taking into account the cross-ownership of investors’ stakes in competing firms. The original Herfindahl–Hirschman Index (HHI) has long been used as a measure of industry concentration, debuting in the Department of Justice’s Horizontal Merger Guidelines in 1982. The HHI is calculated by squaring the market share of each firm in the industry and summing the resulting numbers.

The MHHI is rather more complicated. MHHI is composed of two parts: the HHI measuring product market concentration and the MHHI_Delta measuring the additional concentration due to common ownership. We offer a step-by-step description of the calculations and their economic rationale in an appendix to our paper. For this post, I’ll try to distill that down. The MHHI_Delta essentially has three components, each of which is measured relative to every possible competitive pairing in the market as follows:

  1. A measure of the degree of common ownership between Company A and Company -A (Not A). This is calculated by multiplying the percentage of Company A shares owned by each Investor I with the percentage of shares Investor I owns in Company -A, then summing those values across all investors in Company A. As this value increases, MHHI_Delta goes up.
  2. A measure of the degree of ownership concentration in Company A, calculated by squaring the percentage of shares owned by each Investor I and summing those numbers across investors. As this value increases, MHHI_Delta goes down.
  3. A measure of the degree of product market power exerted by Company A and Company -A, calculated by multiplying the market shares of the two firms. As this value increases, MHHI_Delta goes up.

This process is repeated and aggregated first for every pairing of Company A and each competing Company -A, then repeated again for every other company in the market relative to its competitors (e.g., Companies B and -B, Companies C and -C, etc.). Mathematically, MHHI_Delta takes the form:

MHHI_Delta = Σ_A Σ_{-A ≠ A} S_A × S_-A × ( Σ_I β_I,A × β_I,-A ) / ( Σ_I β_I,A² )

where the S’s represent the market shares of, and the β’s represent the ownership shares of Investor I in, the respective companies A and -A.
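
A literal translation of that description into code may help. This is my own sketch of the calculation (assuming, as the empirical studies do, that control is proportional to ownership and that only reported institutional stakes enter the sums), not code from any of the papers:

```python
def mhhi_delta(shares, beta):
    """MHHI_Delta in HHI 'points'.

    shares: market shares in percent, one entry per firm.
    beta:   beta[i][j] = Investor i's fractional stake in firm j (reported holders only;
            each firm is assumed to have at least one reported institutional holder).
    """
    n = len(shares)
    total = 0.0
    for a in range(n):                                   # Company A
        own_conc = sum(b[a] ** 2 for b in beta)          # component 2: ownership concentration in A
        for not_a in range(n):                           # Company -A
            if not_a == a:
                continue
            cross = sum(b[a] * b[not_a] for b in beta)   # component 1: common ownership of A and -A
            total += shares[a] * shares[not_a] * cross / own_conc   # component 3: market-share product
    return total

# Two airlines splitting a market 50/50, with five investors each holding 3% of both:
print(mhhi_delta([50, 50], [[0.03, 0.03]] * 5))   # -> 5000.0, so MHHI = 5000 + 5000 = 10000
```

Note the division by the ownership-concentration term in each pairing; that denominator is what allows MHHI_Delta to behave so differently from HHI.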

As the relative concentration of cross-owning investors to all investors in Company A increases (i.e., the ratio on the right increases), managers are assumed to be more likely to soften competition with that competitor. As those two firms control more of the market, managers’ ability to tacitly collude and increase joint profits is assumed to be higher. Consequently, the empirical research assumes that as MHHI_Delta increases, we should observe less competitive behavior.

And indeed that is the “blockbuster” evidence giving rise to Elhauge’s and Posner, et al.’s arguments. For example, Azar, et al., calculate HHI and MHHI_Delta for every US airline market–defined either as city-pairs or departure-destination pairs–for each quarter of the 14-year time period in their study. They then regress ticket prices for each route against the HHI and the MHHI_Delta for that route, controlling for a number of other potential factors. They find that airfare prices are 3% to 7% higher due to common ownership. Other papers using the same or similar measures of common ownership concentration have likewise identified positive correlations between MHHI_Delta and their respective measures of anti-competitive behavior.
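
For readers who want a feel for the empirical setup, the regressions are in the spirit of the following schematic (the file, column names, and fixed-effects structure here are placeholders of my own; the paper’s actual specification includes additional controls):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per route-quarter with columns log_fare, mhhi_delta,
# hhi, route, and quarter. The file name and columns are illustrative, not the paper's data.
df = pd.read_csv("airline_routes.csv")

model = smf.ols(
    "log_fare ~ mhhi_delta + hhi + C(route) + C(quarter)",   # route and time fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["route"]})  # cluster standard errors by route
print(model.params["mhhi_delta"])                            # the coefficient of interest
```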

Problems with the problem and with the measure

We argue that both the theoretical argument underlying the empirical research and the empirical research itself suffer from some serious flaws. On the theoretical side, we have two concerns. First, we argue that there is a tremendous leap of faith (if not logic) in the idea that corporate executives would forgo their own self-interest and the interests of the vast majority of shareholders and soften competition simply because a small number of small stakeholders are intra-industry diversified. Second, we argue that even if managers were so inclined, it clearly is not the case that softening competition would necessarily be desirable for institutional investors that are both intra- and inter-industry diversified, since supra-competitive pricing to increase profits in one industry would decrease profits in related industries that may also be in the investors’ portfolios.

On the empirical side, we have concerns both with the data used to calculate the MHHI_Deltas and with the nature of the MHHI_Delta itself. First, the data on institutional investors’ holdings are taken from Schedule 13 filings, which report aggregate holdings across all of the institutional investor’s funds. Using these data masks the actual incentives of the institutional investors with respect to investments in any individual company or industry. Second, the construction of the MHHI_Delta suffers from serious endogeneity concerns, both in investors’ shareholdings and in market shares. Finally, the MHHI_Delta, while seemingly intuitive, is an empirical unknown. While HHI is theoretically bounded in a way that lends itself to interpretation of its calculated value, the same is not true for MHHI_Delta. This makes any inference or policy based on nominal values of MHHI_Delta completely arbitrary at best.

We’ll expand on each of these concerns in upcoming posts. We will then take on the problems with the policy proposals being offered in response to the common ownership ‘problem.’


Michael Sykuta is Associate Professor, Agricultural and Applied Economics, and Director, Contracting Organizations Research Institute at the University of Missouri.

The US agriculture sector has been experiencing consolidation at all levels for decades, even as the global ag economy has been growing and becoming more diverse. Much of this consolidation has been driven by technological changes that created economies of scale, both at the farm level and beyond.

Likewise, the role of technology has changed the face of agriculture, particularly in the past 20 years since the commercial introduction of the first genetically modified (GMO) crops. However, biotechnology itself comprises only a portion of the technology change. The development of global positioning systems (GPS) and GPS-enabled equipment has created new opportunities for precision agriculture, whether for the application of crop inputs, crop management, or yield monitoring. The development of unmanned and autonomous vehicles and remote sensing technologies, particularly unmanned aerial vehicles (i.e., UAVs, or “drones”), has created new opportunities for field scouting, crop monitoring, and real-time field management. And currently, the development of Big Data analytics is promising to combine all of the different types of data associated with agricultural production in ways intended to improve the application of all the various technologies and to guide production decisions.

Now, with the pending mergers of several major agricultural input and life sciences companies, regulators are faced with a challenge: How to evaluate the competitive effects of such mergers in the face of such a complex and dynamic technology environment—particularly when these technologies are not independent of one another? What is the relevant market for considering competitive effects and what are the implications for technology development? And how does the nature of the technology itself implicate the economic efficiencies underlying these mergers?

Before going too far, it is important to note that while the three cases currently under review (i.e., ChemChina/Syngenta, Dow/DuPont, and Bayer/Monsanto) are frequently lumped together in discussions, the three present rather different competitive cases—particularly within the US. For instance, ChemChina’s acquisition of Syngenta will not, in itself, meaningfully change market concentration. However, financial backing from ChemChina may allow Syngenta to buy up the discards from other deals, such as the parts of DuPont that the EU Commission is requiring to be divested or the seed assets Bayer is reportedly looking to sell to preempt regulatory concerns, as well as other smaller competitors.

Dow-DuPont is perhaps the most head-to-head of the three mergers in terms of R&D and product lines. Both firms are in the top five in the US for pesticide manufacturing and for seeds. However, the Dow-DuPont merger is about much more than combining agricultural businesses. The Dow-DuPont deal specifically aims to create and spin off three different companies specializing in agriculture, material science, and specialty products. Although agriculture may be the business line in which the companies most overlap, it represents just over 21% of the combined businesses’ annual revenues.

Bayer-Monsanto is yet a different sort of pairing. While both companies are among the top five in US pesticide manufacturing (with combined sales less than Syngenta and about equal to Dow without DuPont), Bayer is a relatively minor player in the seed industry. Likewise, Monsanto is focused almost exclusively on crop production and digital farming technologies, offering little overlap to Bayer’s human health or animal nutrition businesses.

Despite the differences in these deals, they tend to be lumped together and discussed almost exclusively in the context of pesticide manufacturing or crop protection more generally. In so doing, the discussion misses some important aspects of these deals that may mitigate traditional competitive concerns within the pesticide industry.

Mergers as the Key to Unlocking Innovation and Value

First, as the Dow-DuPont merger suggests, mergers may be the least-cost way of (re)organizing assets in ways that maximize value. This is especially true for R&D-intensive industries where intellectual property and innovation are at the core of competitive advantage. Absent the protection of common ownership, neither party would have an incentive to fully disclose the nature of its IP and innovation pipeline. In this case, merging interests increases the efficiency of information sharing so that managers can effectively evaluate and reorganize assets in ways that maximize innovation and return on investment.

Dow and DuPont each have a wide range of areas of application. Both groups of managers recognize that each of their business lines would be stronger as focused, independent entities; but also recognize that the individual elements of their portfolios would be stronger if combined with those of the other company. While the EU Commission argues that Dow-DuPont would reduce the incentive to innovate in the pesticide industry—a dubious claim in itself—the commission seems to ignore the potential increases in efficiency, innovation and ability to serve customer interests across all three of the proposed new businesses. At a minimum, gains in those industries should be weighed against any alleged losses in the agriculture industry.

This is not the first such agricultural and life sciences “reorganization through merger”. The current manifestation of Monsanto is the spin-off of a previous merger between Monsanto and Pharmacia & Upjohn in 2000 that created today’s Pharmacia. At the time of the Pharmacia transaction, Monsanto had portfolios in agricultural products, chemicals, and pharmaceuticals. After reorganizing assets within Pharmacia, three business lines were created: agricultural products (the current Monsanto), pharmaceuticals (now Pharmacia, a subsidiary of Pfizer), and chemicals (now Solutia, a subsidiary of Eastman Chemical Co.). Merging interests allowed Monsanto and Pharmacia & Upjohn to create more focused business lines that were better positioned to pursue innovations and serve customers in their respective industries.

In essence, Dow-DuPont is following the same playbook. Although such intentions have not been announced, Bayer’s broad product portfolio suggests a similar long-term play with Monsanto is likely.

Interconnected Technologies, Innovation, and the Margins of Competition

As noted above, regulatory scrutiny of these three mergers focuses on them in the context of pesticide or agricultural chemical manufacturing. However, innovation in the ag chemicals industry is intricately interwoven with developments in other areas of agricultural technology that have rather different competition and innovation dynamics. The current technological wave in agriculture involves the use of Big Data to create value using the myriad data now available through GPS-enabled precision farming equipment. Monsanto and DuPont, through its Pioneer subsidiary, are both players in this developing space, sometimes referred to as “digital farming”.

Digital farming services are intended to assist farmers’ production decision making and increase farm productivity. Using GPS-coded field maps that include assessments of soil conditions, combined with climate data for the particular field, farm input companies can recommend the types and rates of applications for soil conditioning pre-harvest, seed types for planting, and crop protection products during the growing season. Yield monitors at harvest provide outcomes data for feedback to refine and improve the algorithms that are used in subsequent growing seasons.

The integration of digital farming services with seed and chemical manufacturing offers obvious economic benefits for farmers and competitive benefits for service providers. Input manufacturers have incentive to conduct data analytics that individual farmers do not. Farmers have limited analytic resources and relatively small returns to investing in such resources, while input manufacturers have broad market potential for their analytic services. Moreover, by combining data from a broad cross-section of farms, digital farming service companies have access to the data necessary to identify generalizable correlations between farm plot characteristics, input use, and yield rates.

But the value of the information developed through these analytics is not unidirectional in its application and value creation. While input manufacturers may be able to help improve farmers’ operations given the current stock of products, feedback about crop traits and performance also enhances R&D for new product development by identifying potential product attributes with greater market potential. By combining product portfolios, agricultural companies can not only increase the value of their data-driven services for farmers, but more efficiently target R&D resources to their highest potential use.

The synergy between input manufacturing and digital farming notwithstanding, seed and chemical input companies are not the only players in the digital farming space. Equipment manufacturer John Deere was an early entrant in exploiting the information value of data collected by sensors on its equipment. Other remote sensing technology companies have incentive to develop data analytic tools to create value for their data-generating products. Even downstream companies, like ADM, have expressed interest in investing in digital farming assets that might provide new revenue streams with their farmer-suppliers as well as facilitate more efficient specialty crop and identity-preserved commodity-based value chains.

The development of digital farming is still in its early stages and is far from a sure bet for any particular player. Even Monsanto has pulled back from its initial foray into prescriptive digital farming (called FieldScripts). These competitive forces will affect the dynamics of competition at all stages of farm production, including seed and chemicals. Failure to account for those dynamics, and the potential competitive benefits input manufacturers may provide, could lead regulators to overestimate any concerns of competitive harm from the proposed mergers.

Conclusion

Farmers are concerned about the effects of these big-name tie-ups. Farmers may be rightly concerned, but for the wrong reasons. Ultimately, the role of the farmer continues to be diminished in the agricultural value chain. As precision agriculture tools and Big Data analytics reduce the value of idiosyncratic or tacit knowledge at the farm level, the managerial human capital of farmers becomes relatively less important in terms of value-added. It would be unwise to confuse farmers’ concerns regarding the competitive effects of the kinds of mergers we’re seeing now with the actual drivers of change in the agricultural value chain.

I received word today that Douglass North passed away yesterday at the age of 95 (obit here). Professor North shared the Nobel Prize in Economics with Robert Fogel in 1993 for his work in economic history on the role of institutions in shaping economic development and performance.

Doug was one of my first professors in graduate school at Washington University. Many of us in our first year crammed into Doug’s economic history class for fear that he might retire and we not get the chance to study under him. Little did we expect that he would continue teaching into his 80s. The text for our class was the pre-publication manuscript of his book, Institutions, Institutional Change and Economic Performance. Doug’s course offered an interesting juxtaposition to the traditional neoclassical microeconomics course for first-year PhD students. His work challenged the simplifying assumptions of the neoclassical system and shed a whole new light on understanding economic history, development and performance. I still remember that day in October 1993 when the department was abuzz with the announcement that Doug had received the Nobel Prize. It was affirming and inspiring.

As I started work on my dissertation, I had hoped to incorporate a historical component on the early development of crude oil futures trading in the 1930s so I could get Doug involved on my committee. Unfortunately, there was not enough information still available to provide any analysis (there was one news reference to a new crude futures exchange, but nothing more–and the historical records of the NY Mercantile Exchange had been lost in a fire), and I had to focus solely on the deregulatory period of the late 1970s and early 1980s. I remember joking at one of our economic history workshops that I wasn’t sure if it counted as economic history since it happened during Doug’s lifetime.

Doug was one of the founding conspirators for the International Society for New Institutional Economics (now the Society for Institutional & Organizational Economics) in 1997, along with Ronald Coase and Oliver Williamson. Although the three had strong differences of opinions concerning certain aspects of their respective theoretical approaches, they understood the generally complementary nature of their work and its importance not just for the economic profession, but for understanding how societies and organizations perform and evolve and the role institutions play in that process.

The opportunity to work around these individuals, particularly with North and Coase, strongly shaped and influenced my understanding not only of economics, but of why a broader perspective of economics is so important for understanding the world around us. That experience profoundly affected my own research interests and my teaching of economics. Some of Doug’s papers continue to play an important role in courses I teach on economic policy. Students, especially international students, continue to be inspired by his explanation of the roles of institutions, how they affect markets and societies, and the forces that lead to institutional change.

As we prepare to celebrate Thanksgiving in the States, Doug’s passing is a reminder of how much I have to be thankful for over my career. I’m grateful for having had the opportunity to know and to work with Doug. I’m grateful that we had an opportunity to bring him to Mizzou in 2003 for our CORI Seminar series, at which he spoke on Understanding the Process of Economic Change (the title of his next book at the time). And I’m especially thankful for the influence he had on my understanding of economics and that his ideas will continue to shape economic thinking and economic policy for years to come.

Our TOTM colleague Dan Crane has written a few posts here over the past year or so about attempts by the automobile dealers lobby (and General Motors itself) to restrict the ability of Tesla Motors to sell its vehicles directly to consumers (see here, here and here). Following New Jersey’s adoption of an anti-Tesla direct distribution ban, more than 70 lawyers and economists–including yours truly and several here at TOTM–submitted an open letter to Gov. Chris Christie explaining why the ban is bad policy.

Now it seems my own state of Missouri is getting caught up in the auto dealers’ ploy to thwart pro-consumer innovation and competition. Legislation (HB1124) that was intended to simply update statutes governing the definition, licensing and use of off-road and utility vehicles got co-opted at the last minute in the state Senate. Language was inserted to redefine the term “franchisor” to include any automobile manufacturer, regardless of whether it has any franchise agreements–in direct contradiction to the definition used throughout the rest of the surrounding statutes. The bill defines a “franchisor” as:

“any manufacturer of new motor vehicles which establishes any business location or facility within the state of Missouri, when such facilities are used by the manufacturer to inform, entice, or otherwise market to potential customers, or where customer orders for the manufacturer’s new motor vehicles are placed, received, or processed, whether or not any sales of such vehicles are finally consummated, and whether or not any such vehicles are actually delivered to the retail customer, at such business location or facility.”

In other words, it defines a franchisor as a company that chooses to open its own facility and not franchise. The bill then goes on to define any facility or business location meeting the above criteria as a “new motor vehicle dealership,” even though no sales or even distribution may actually take place there. Since “franchisors” are already forbidden from owning a “new motor vehicle dealership” in Missouri (a dubious restriction in itself), these perverted definitions effectively ban a company like Tesla from selling directly to consumers.

The bill still needs to go back to the Missouri House of Representatives, where it started out as addressing “laws regarding ‘all-terrain vehicles,’ ‘recreational off-highway vehicles,’ and ‘utility vehicles’.”

This is classic rent-seeking regulation at its finest, using contrived and contorted legislation–not to mention last-minute, underhanded legislative tactics–to prevent competition and innovation that, as General Motors itself pointed out, is based on a more economically efficient model of distribution that benefits consumers. Hopefully the State House…or the Governor…won’t be asleep at the wheel as this legislation speeds through the final days of the session.

An occasional reader brought to our attention a bill that is fast making its way through the U.S. House Committee on Financial Services. The Small Company Disclosure Simplification Act (H.R. 4167) would exempt emerging growth companies and companies with annual gross revenue of less than $250 million from using the eXtensible Business Reporting Language (XBRL) structured data format currently required for SEC filings. This would affect roughly 60% of publicly listed companies in the U.S.

XBRL makes it possible to easily extract financial data from electronic SEC filings using automated computer programs. Opponents of the bill (most of whom seem to make their living either using XBRL to sell information to investors or assisting filing companies in complying with the XBRL requirement) argue the bill will create a caste system of filers, harm the small companies the bill is intended to help, and harm investors (for example, see here and here). On pretty much every count, the critics are wrong. Here’s a point-by-point explanation of why:

1) Small firms will be hurt because their data will be less accessible, reducing their access to capital markets. — FALSE
The bill doesn’t prohibit small firms from using XBRL; it merely gives them the option to use it or not. If in fact small companies believe they are (or would be) disadvantaged in the market, they can continue filing just as they have been for at least the last two years. For critics to turn around and argue that small companies may choose not to use XBRL simply points out the fallacy of their claim that companies would be disadvantaged. The bill would basically give business owners and management the freedom to decide whether it is in fact in the company’s best interest to use the XBRL format. Therefore, there’s no reason to believe small firms will be hurt as claimed.

Moreover, the information disclosed by firms is no different under the bill–only the format in which it exists. There is no less information available to investors; it is just a little less convenient to extract–particularly for the information service companies whose computer systems rely on XBRL to gather the data they sell to investors. More on this momentarily.
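
To see why the tagged format matters for automated extraction, here is a toy illustration of my own (a made-up, highly simplified XBRL-style fragment, not an actual SEC filing or taxonomy):

```python
import xml.etree.ElementTree as ET

# A toy, simplified XBRL-style fragment -- illustrative only, not a real filing or namespace.
doc = """
<xbrl xmlns:us-gaap="http://example.com/us-gaap">
  <us-gaap:Revenues contextRef="FY2013" unitRef="USD">1234567</us-gaap:Revenues>
  <us-gaap:NetIncomeLoss contextRef="FY2013" unitRef="USD">234567</us-gaap:NetIncomeLoss>
</xbrl>
"""

root = ET.fromstring(doc.strip())
ns = {"us-gaap": "http://example.com/us-gaap"}
for tag in ("Revenues", "NetIncomeLoss"):
    print(tag, float(root.find(f"us-gaap:{tag}", ns).text))   # tagged values pull out trivially
```

Without the tags, the same numbers would have to be scraped out of free-form text or HTML tables.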

2) The costs of the current requirement are not as large as the bill’s sponsor claims. — IRRELEVANT AT BEST
According to XBRL US, an XBRL industry trade group, the cost of compliance ranges from $2,000 for small firms up to $25,000–per filing (or $8K to $100K per year). XBRL US goes on to claim those costs are coming down. Regardless of whether the actual costs are the “tens of thousands of dollars a year” that bill sponsor Rep. Robert Hurt (VA-5) claims, the point is there are costs that are not clearly justified by any benefits of the disclosure format.

Moreover, if costs are coming down as claimed, then small businesses will be more likely to voluntarily use XBRL. In fact, the ability of small companies to choose NOT to file using XBRL will put competitive pressure on filing compliance companies to reduce costs even further in order to attract business, rather than enjoying a captive market of companies that have no choice.

3) Investors will be harmed because they will lose access to small company data. — FALSE
As noted above, investors will have no less information under the bill–they simply won’t be able to use automated programs to extract the information from the filings. Moreover, even if there were less information available, information asymmetry has long been a part of financial markets, and markets are quite capable of dealing with such asymmetry effectively in how prices are determined by investors and market-makers. Paul Healy and Krishna Palepu (2001) provide an overview of the literature showing that markets are not only capable, but have an established history, of dealing with differences in information disclosure among firms. If any investors stand to lose, it would be current investors in small companies whose stocks could conceivably decrease in value if the companies choose not to use XBRL. Could. Conceivably. But with no evidence to suggest they would, much less that the effects would be large. To the extent large block holders and institutional investors perceive a potential negative effect, those investors also have the ability to influence management’s decision on whether to take advantage of the proposed exemption or to keep filing with the XBRL format.

The other potential investor harm critics point to with alarm is the prospect that small companies would be more likely and better able to engage in fraudulent reporting because regulators will not be able to as easily monitor the reports. Just one problem: the bill specifically requires the SEC to assess “the benefits to the Commission in terms of improved ability to monitor securities markets” of having the XBRL requirement. That will require the SEC to actively engage in monitoring both XBRL and non-XBRL filings in order to make that determination. So the threat of rampant fraud seems a tad bit overblown…certainly not what one critic described as “a massive regulatory loophole that a fraudulent company could drive an Enron-sized truck through.”

In the end, the bill before Congress would do nothing to change the kind of information that is made available to investors. It would create a more competitive market for compliance services among the companies that do choose to file using the XBRL structured data format, likely reducing the cost of that format not only for small companies, but also for the larger companies that would still be required to use XBRL. By allowing smaller companies the freedom to choose what technical format to use in disclosing their data, the cost of compliance for all companies can be reduced. And that’s good for investors, capital formation, and the global competitiveness of US-based stock exchanges.

The Securities and Exchange Commission (SEC) recently scored a significant win against a Maryland banker accused of naked short-selling. What may be good news for the SEC is bad news for the market, as the SEC will now be more likely to persecute other alleged offenders of naked short-selling restrictions.

“Naked” short selling is when a trader sells stocks the trader doesn’t actually own (and doesn’t borrow in a prescribed period of time) in the hopes of buying the stocks later (before they must be delivered) at a lower price. The trader is basically betting that the stock price will decline. If it doesn’t, the trader must purchase the stock at a higher price–or breach their original sale contract. Some critics argue that such short-selling leads to market distortions and potential market manipulation, and some even pointed to short-selling as a boogey-man in the 2008 financial crisis, hence the restrictions on short-selling that gave rise to the SEC’s enforcement proceedings.
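
The arithmetic of the bet is simple. A minimal sketch with hypothetical prices:

```python
def short_sale_pnl(sale_price, cover_price, shares=100):
    """Profit (or loss) on a short sale: sell now, buy back ('cover') later."""
    return (sale_price - cover_price) * shares

print(short_sale_pnl(50.00, 45.00))   # price falls: +500.0 profit on 100 shares
print(short_sale_pnl(50.00, 55.00))   # price rises: -500.0 loss (or a failure to deliver)
```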

Just one problem: there’s a lot of evidence showing that restrictions on short-selling make markets less efficient, not more.

This isn’t exactly news. Thom argued against short-selling restrictions seven years ago (here) and our late colleague, Larry Ribstein, followed up a couple of years ago (here). The empirical evidence just continues to pile up. Beber and Pagano, in the Journal of Finance earlier this year, examine not just US restrictions on short-selling, but global restrictions. Their abstract reads:

Most regulators around the world reacted to the 2007–09 crisis by imposing bans on short selling. These were imposed and lifted at different dates in different countries, often targeted different sets of stocks, and featured varying degrees of stringency. We exploit this variation in short-sales regimes to identify their effects on liquidity, price discovery, and stock prices. Using panel and matching techniques, we find that bans (i) were detrimental for liquidity, especially for stocks with small capitalization and no listed options; (ii) slowed price discovery, especially in bear markets, and (iii) failed to support prices, except possibly for U.S. financial stocks.

So while the SEC may celebrate their prosecution victory, investors may have reason to be less enthusiastic.

Who’s Flying The Plane?

Michael Sykuta —  12 November 2012

It’s an appropriate question, both figuratively and literally. Today’s news headlines are now warning of a looming pilot shortage. A combination of new qualification standards for new pilots and a large percentage of pilots reaching the mandatory retirement age of 65 is creating the prospect of having too few pilots for the US airline industry.

But it still begs the question of “Why?” According to the WSJ article linked above, the new regulations require newly hired pilots to have at least 1,500 hours of prior flight experience. What’s striking about that number is that it is six times the current requirement, significantly increasing the cost (and time) of training to be a pilot.

Why such a huge increase in training requirements? I don’t fly as often as some of my colleagues, but do fly often enough to be concerned that the person in the front of the plane knows what they’re doing. I appreciate the public safety concerns that must have been at the forefront of the regulatory debate. But the facts don’t support an argument that public safety is endangered by the current level of experience pilots are required to attain. Quite the contrary, the past decade has been among the safest ever for airline passengers. In fact, the WSJ reports that:

Congress’s 2010 vote to require 1,500 hours of experience in August 2013 came in the wake of several regional-airline accidents, although none had been due to pilots having fewer than 1,500 hours.

Indeed, to the extent human error has been involved in airline accidents and near misses over the past decade, federally employed air traffic controllers, not privately employed pilots, have been more to blame.

The coincidence of such a staggering increase in training requirements for new pilots and the impending mandatory retirement of a large percentage of current pilots suggests that perhaps other forces were at work behind the scenes when Congress passed the rules in 2010. Legislative proposals are often written by special interests just waiting in the wings (no pun intended) for an opportune moment. Given the downsizing and cost-reduction focus of the US airline industry over the past many years, no group has been more disadvantaged and no group stands more to gain from the new rules than current pilots and the pilots unions.

And so the question, as we face this looming shortage of newly qualified pilots: Who’s flying the plane?

 

As an economist, it’s inevitable that social friends ask my thoughts about current economic issues (at least it’s better than being asked for free legal advice). This weekend a friend commented about the “recovery that isn’t”, reflecting the public sense that the economy doesn’t seem to be doing as well as government reports (particularly unemployment reports) and some politicians make it out to be.

This morning I ran across a weekend article in the WSJ Online that reports on the broader unemployment rate by state in the U.S. In the article, Ben Casselman discusses the difference between the official unemployment rate, formally known as U3 (those who are not working but actively seeking work), and the broadest Labor Department measure, affectionately called U6, which includes people who want to work but are not actively looking and those who want full-time work but are working part-time jobs to make ends meet. Casselman shows how the gap between those two measures sometimes reveals significant differences across states. Take Idaho, for instance, whose unemployment rate is below the national average, but whose U6 measure is above the national average, suggesting a disproportionately large number of people who want full-time work but are stuck with only part-time job opportunities or have given up looking.

This got me wondering just how the difference between U3 and U6 is behaving at the national level, so I went to the Department of Labor’s website and downloaded both series going back to 1994, when U6 was first introduced.
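
For anyone who wants to reproduce the exercise, here is a rough sketch that pulls FRED’s mirrors of the same BLS series (UNRATE for U3, U6RATE for U6) rather than downloading from the Department of Labor directly; it assumes pandas_datareader and matplotlib are installed:

```python
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

# U3 (official unemployment) and U6 (broadest underemployment measure), seasonally adjusted.
df = pdr.DataReader(["UNRATE", "U6RATE"], "fred", start="1994-01-01", end="2012-09-30")
df["GAP"] = df["U6RATE"] - df["UNRATE"]   # the underemployment wedge discussed below

ax = df[["UNRATE", "GAP"]].plot(color=["blue", "red"], figsize=(9, 5))
ax.set_ylabel("Percent of labor force")
ax.set_title("U3 unemployment and the U6 minus U3 gap, 1994-2012")
plt.show()
```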

[Figure: US Unemployment and Underemployment, 1994-2012]

This figure shows U3 (seasonally adjusted, in blue) and the difference between U6 and U3 (i.e., underemployment, in red) for the past 18 years. As one would expect, the two are positively correlated. But a couple of things stand out. First, while positively correlated, the degree of correlation before and after mid-2001 is very different. For the first eight years, the two track very closely; not so closely afterward.

Second, while unemployment has dropped 2 percentage points since hitting its peak of 10% in October 2009, the underemployment rate has barely moved, dropping from 7.2% in October 2009 to only 6.9% in September 2012. So, regardless of whether you buy Jack Welch’s conspiracy theory about the unemployment (U3) numbers being manipulated, it’s clear that there is a persistently large portion of the labor force–double what it had been in 2008–that wants full-time work and is unable (or too discouraged) to find it.

Which pretty well sums up, I believe, the disconnect between the numbers and the reality of the economy. Pointing to the official unemployment numbers masks the truth about the state of the labor market in the US and belies the economic malaise that persists.

Paul Fain has an interesting update today on the issue of two-tier pricing for California’s community college system. Santa Monica College rocked the boat in March when it announced plans to start using a two-tier pricing schedule that would charge higher tuition rates for high-demand courses.

Santa Monica–and most all community colleges in California apparently–have been slammed with would-be students looking to take classes that would help prepare them for better jobs or for further education and training (that would prepare them for better jobs).  The problem is that state funding for community colleges has been drastically reduced, thereby limiting the number of course offerings schools can offer at the subsidized tuition rate of $36 per credit hour. Santa Monica had the radical idea (well, radical for anyone that fails to understand economics, perhaps) of offering additional sections of high-demand courses, but at full-cost tuition rates (closer to $200 per credit hour).

Students protested. Faculty at other community colleges complained. Santa Monica College relented. So students don’t have to worry about paying more for courses they will not be able to take, and faculty at other colleges don’t have to worry about the possibility of more students wanting to go to their schools because the overflow tuition at Santa Monica drives students to find substitutes. Well, that, and no more worries for faculty at schools that charge even more than $200 for the core courses students cannot get into at their community college. It didn’t matter much anyhow, since most agreed that Santa Monica College’s proposal would have violated the law.

Now there is a proposal before the California legislature that would allow schools to implement two-tier pricing, but only for technical trade courses, not for high-demand general education-type courses.

Aside from complaints that “the state should be giving away education–even if they are not” (which are the most inane because they have nothing to do with the issue at hand), there are a few other arguments or positions offered that just cause one to scratch one’s head in wonder:

1) Fain reports that Michelle Pilati, president of the Academic Senate of California Community Colleges, asserted that “two-tiered tuition is unfair to lower-income students because it would open up classes to students who have the means to pay much more.” Apparently, Ms. Pilati would prefer that all students have equal access to no education rather than open up more spaces (to lower-income students) by opening up more spaces to higher-income students at higher prices. Gotcha.

2) The Board of Trustees at San Diego Community College seems to agree, having passed a resolution opposing the proposed legislation because it “would limit or exclude student access based solely on cost, causing inequities in the treatment of students”. Apparently the inequity of some students getting an education and some not is more noble because explicit out-of-pocket costs are not involved and other forms of rationing are used. And yet…

3) According to Fain,  Nancy Shulock, director of the Institute for Higher Education Leadership and Policy at California State University at Sacramento, asserts “wealthier students have a leg up when registering for courses. She said research has found that higher-income students generally have more ‘college knowledge’ that helps them navigate often-complex registration processes. That means wealthier students could more quickly snag spots in classes, getting the normal price, while their lower-income peers would be more likely to pay the higher rates under a two-tiered system.”

So, community colleges have created overly complex registration systems that disadvantage lower-income students. Yet, all that suggests is that the current system already punishes lower-income students because wealthier students can more easily “snag” the limited number of subsidized sections. Perhaps community colleges could make their enrollment processes less complex?

Regardless of the fate of the “two-tier pricing” legislation, there is already a two-tier system in place; only the current two-tier plan prevents people from getting educations at any price.