Conspiracies and collusion often (always?) get a bad rap. Adam Smith famously derided “people of the same trade” for their inclination to conspire against the public or contrive to raise prices. Today, such conspiracies and contrivances are per se illegal and felonies punishable under the Sherman Act.
It is well known and widely accepted that collusion to suppress competition is associated with an increase in price, a transfer of consumer surplus to producers, and a deadweight loss. It seems that nothing good comes from anticompetitive collusion.
But what if there were some good from a conspiracy in restraint of trade?
Using data from the formation and breakup of illegal cartels, Hyo Kang finds higher levels of innovation—measured by patents and R&D spending—during the cartel period than in the period before the formation of the cartel or the period after the breakup of the cartel.
By Kang’s measures, during the cartel period, colluding firms increased the annual number of patent applications by about 50% or more and their R&D expenditures by more than 20% relative to the pre-cartel period. After the breakup of the cartel, patent applications and R&D spending returned to approximately pre-cartel levels.
These findings are consistent with ICLE’s review of research on four-to-three mergers in the telecom industry. The review found that, of those studies that considered the effect on investment in four-to-three mergers, all of them demonstrated that capital expenditures, a proxy for investment, increased post-merger.
If Kang’s conclusions are correct, they contradict John Hicks’ quip that “the best of all monopoly profits is a quiet life.” Instead of silently collecting the profits of price fixing and other forms of collusion, cartel conspirators seem to be aggressively innovating. So what gives?
Kang’s paper points to Joseph Schumpeter, who argued that some degree of market power can promote innovation by providing firms with the financial resources and predictability required for innovative activities:
Thus it is true that there is or may be an element of genuine monopoly gain in those entrepreneurial profits which are the prizes offered by capitalist society to the successful innovator. But the quantitative importance of that element, its volatile nature and its function in the process in which it emerges put it in a class by itself. The main value to a concern of a single seller position that is secured by patent or monopolistic strategy does not consist so much in the opportunity to behave temporarily according to the monopolist schema, as in the protection it affords against temporary disorganization of the market and the space it secures for long-range planning.
Along this line, Kang argues that the reduced competition afforded by the cartel provides both an incentive to innovate and an ability to innovate. Incentives include the potential for higher returns from innovation and the reduction of duplicative R&D investment. Increased profits from collusion provide increased resources available for R&D, thereby improving a firm’s ability to innovate. In some ways, it can be argued that the cartel arrangement reduces price competition, while increasing competition along other dimensions.
A seemingly unrelated working paper by R. Andrew Butters and Thomas N. Hubbard comes to a similar conclusion. They note that, over time, hotels have increased competition along nonprice dimensions, trading improved room size and in-room amenities for reduced out-of-room amenities such as full-service restaurants, swimming pools, and meeting spaces.
Butters & Hubbard note that many out-of-room amenities are typified by fixed costs that do not vary (much) with hotel size, while room-size and in-room amenities are largely variable costs with respect to hotel size. With the shift from out-of-room amenities to in-room amenities, the market has shifted from one of larger hotels with many rooms, to smaller hotels with fewer rooms. Thus with the shift in the dimensions of competition, the structure of the industry has shifted along with it.
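The fixed-versus-variable cost logic above can be sketched with a few hypothetical numbers (illustrative only, not drawn from the Butters & Hubbard paper):

```python
# Illustrative sketch of the Butters & Hubbard logic: an out-of-room amenity
# (e.g., a swimming pool) costs roughly the same regardless of hotel size,
# while in-room amenities scale with the number of rooms.
POOL_COST = 500_000        # hypothetical fixed cost, independent of hotel size
IN_ROOM_COST = 2_000       # hypothetical variable cost per room (bigger rooms, nicer TVs)

for rooms in (50, 200, 500):
    per_room_pool = POOL_COST / rooms
    print(f"{rooms:>3} rooms: pool cost per room = ${per_room_pool:>8,.0f}; "
          f"in-room amenity cost per room = ${IN_ROOM_COST:,.0f}")
```

The pool’s per-room cost falls sharply with scale ($10,000 at 50 rooms versus $1,000 at 500 rooms), so competition on out-of-room amenities favors large hotels. In-room amenities cost roughly the same per room at any scale, so a shift in consumer demand toward them erodes the advantage of size, consistent with the industry shift toward smaller hotels that the authors describe.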
The research of Kang and of Butters & Hubbard raises important issues for competition policy. A single-minded focus on price ignores the many other dimensions across which firms compete. While a cartel’s consumers may face higher prices, they may also benefit from increased innovation. Similarly, while hotel guests may experience reduced price competition among hotels, they are also experiencing a better in-room experience. Although increased concentration and outright collusion may harm consumers along the price dimension, consumers may also benefit along other dimensions that are not so easily quantified or quantifiable.
John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”
This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.
Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.
Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.”
Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s.
Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.
In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.
First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.
The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.
In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.
Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence that Amazon is actually engaging in predatory pricing. Khan’s article challenges the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday down the line it can raise prices after it has run all retail competition out of the market.
Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,
“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”
This analysis suggests that a case-by-case review is necessary where antitrust plaintiffs can show evidence that harm to consumers is likely to result from a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.
Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.
In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger…
One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.
In other words, neither part of Appelbaum’s proposition (that Europe has cheaper fares, and that concentration has led to worse outcomes for consumers in the United States) appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.
Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study titled U.S. vs. European Broadband Deployment: What Do the Data Say? found:
U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.
Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of price and speed need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th among the 29 countries studied (most of the other 28 European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries, if data usage is included (Model 3), and to 7th if quality (i.e., websites available in the local language) is taken into consideration (Model 4).
Model 1: Unadjusted for demographics and content quality
Model 2: Adjusted for demographics but not content quality
Model 3: Adjusted for demographics and data usage
Model 4: Adjusted for demographics and content quality
Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:
The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing.
In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE.
Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition.
In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.
At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway. For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors.
So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There is no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”
For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.
Paul H. Rubin is the Dobbs Professor of Economics Emeritus, Emory University, and President, Southern Economic Association, 2013
I want to thank Geoff for inviting me to blog about my new book.
My book, The Capitalist Paradox: How Cooperation Enables Free Market Competition (Bombardier Books, 2019), has just been published. The main question I address in this short book is: given the obvious benefits of markets over socialism, why do so many still oppose markets? I have been concerned with this issue for many years. Given the current state of American politics, the question is even more important than when I began the book.
I begin by pointing out that humans are not good intuitive economists. Our minds evolved in a setting where the economy was simple, with little trade, little specialization (except by age and gender), and little capital. In this world there was no need for our brains to evolve to understand economics. (Politics is a different story.) The main takeaway from this world was that our minds evolved to view the world as zero-sum. Zero-sum thinking is the error behind most policy errors in economics.
The second part of the argument is that in many cases, when economists are discussing efficiency issues (such as optimal taxation) listeners are hearing distribution issues. So we economists would do better to begin with a discussion showing that there are efficiency (“size of the pie”) effects before showing what they are in a particular case. That is, we should show that taxation can affect total income before showing how it does so in a particular case. I call this “really basic economics,” which should be taught before basic economics. It is sometimes said that experts understand their field so well that they are “mind blind” to the basics, and that is the situation here.
I then show that competition is an improper metaphor for economics. Discussions of competition bring up sports (and, in economics, the notion of competition was borrowed from sports), and sports are zero-sum. Thus, when economists discuss competition, they reinforce people’s notion that economics is zero-sum. People do not like competition. A quote from the book:
Here are some common modifiers of “competition” and the number of Google references to each:
“Cutthroat competition” (256,000), “excessive competition” (159,000), “destructive competition” (105,000), “ruthless competition” (102,000), “ferocious competition” (66,700), “vicious competition” (53,500), “unfettered competition” (37,000), “unrestrained competition” (34,500), “harmful competition” (18,000), and “dog-eat-dog competition” (15,000). Conversely, for “beneficial competition” there are 16,400 references. For “beneficial cooperation” there are 548,000 references, and almost no references to any of the negative modifiers of cooperation.
The final point, and what ties it all together, is a discussion showing that the economy is actually more cooperative than it is competitive. There are more cooperative relationships in an economy than there are competitive interactions. The basic economic element is a transaction, and transactions are cooperative. Competition chooses the best agents to cooperate with, but cooperation does the work and creates the consumer surplus. Thus, referring to markets as “cooperative” rather than “competitive” would not only reduce hostility towards markets, but would also be more accurate.
An economist reading this book would probably not learn much economics. I do not advocate any major change in economic theory from competition to cooperation. But I propose a different way to view the economy, and one that might help us better explain what we are doing to students and to policy makers, including voters.
After spending a few years away from ICLE and directly engaging in the day to day grind of indigent criminal defense as a public defender, I now have a new appreciation for the ways economic tools can explain behavior that I had not before studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.
I believe the most common complaints about how public defenders represent their clients are better understood not primarily as a lack of funding, a lack of effort or care, or even simply a lack of time for overburdened lawyers, but as an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. However, others use models like contracting lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or combining approaches as some type of hybrid. These models all have their own advantages and disadvantages, but this blog post is only about the issue of price signals for lawyers who work within a public defender’s office.)
As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”
This isn’t to say that people haven’t tried to find ways to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks may guide lawyers on how to best spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.
Imagine two lawyers, one working for a public defender’s office who receives a salary that is not dependent on caseload or billable hours, and another private defense lawyer who charges his client for the work that is put in.
In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the backburner because of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case that will not result in a conviction, but it would take considerable time – time which could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis to file the suppression motion (i.e., it is not frivolous).
The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed upon price. The client would then have the ability to determine with counsel whether this is worthwhile.
The public defender, on the other hand, does not have price signals to determine where to put this suppression motion among his other workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching deadline for the motion to be filed, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who have no rationing principle based in personal monetary costs, would obviously prefer their public defender file any and all motions which have any chance whatsoever to help them, regardless of merit.
What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” for “not doing anything for them.” But the simple truth is that the public defender is making choices on how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with who sends the most letters and/or calls. The actual evaluation of which cases have the most merit can fall through the cracks. Often, this means cases are worked on in chronological order, but insufficient time and effort are spent on particular cases that would have merited more investment, because of quickly approaching deadlines on other cases. Sometimes this means that the most annoying clients get the most time spent on their behalf, irrespective of the merits of their case. At best, public defenders act like battlefield medics, attempting to perform triage by spending their time where they believe they can help the most.
Unlike private criminal defense lawyers, public defenders can’t typically reject cases because their caseload has grown too big, or charge a higher price in order to take on a particularly difficult and time-consuming case. Therefore, the public defender is stuck in a position to simply guess at the best use of their time with the heuristics described above and do the very best they can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.
As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reforms of indigent defense that will be part of this broader effort should take into consideration the calculation problem inherent to the public defender’s office. Other institutional arrangements, like a well-designed voucher system, which do not suffer from this particular problem may be preferable.
The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?
It seems almost silly to ask if the last firm in a dying industry is a monopolist. But, it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself, rather than the alternative—what if the firm, and therefore the industry, did not exist at all?
A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”
The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit by Π, and deadweight loss by H.
The authors point out that if fixed costs (or entry costs) are so high that the firm does not enter the market, the deadweight loss is equal to CS + H.
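The geometry can be checked with a simple linear-demand sketch (the parameters below are my own illustrative numbers, not from the Kremer, Snyder, and Chen post):

```python
# Linear demand P = a - Q with constant marginal cost c.
# Illustrative parameters of my own choosing, not from the post.
a, c = 10.0, 2.0

q_comp = a - c              # competitive quantity, where price = marginal cost
q_mono = (a - c) / 2        # monopoly quantity, where MR = a - 2Q = c
p_mono = a - q_mono         # monopoly price

cs = 0.5 * (a - p_mono) * q_mono            # CS: consumer surplus under monopoly
profit = (p_mono - c) * q_mono              # profit (what covers the fixed cost)
h = 0.5 * (p_mono - c) * (q_comp - q_mono)  # H: deadweight loss from monopoly pricing

# Sanity check: the three areas exhaust the competitive surplus triangle.
assert abs(cs + profit + h - 0.5 * (a - c) * q_comp) < 1e-9

dwl_monopoly = h        # firm enters and charges the monopoly price
dwl_no_entry = cs + h   # fixed costs deter entry entirely

print(dwl_monopoly, dwl_no_entry)  # 8.0 16.0
```

In this linear case the no-entry loss is exactly twice the loss from monopoly pricing; the post’s broader claim is that, for other demand shapes, it can be much greater still.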
Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.
In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.
Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.
Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2005. But, Blockbuster knew its retail world was changing and had thought a consolidation might help it survive that change.
But, just as Antioco can be chided for undervaluing Netflix, so too can the FTC. The regulators were so focused on Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have put off the bankruptcy of the three largest video rental retailers.
Also, what’s not known is the extent to which consumers are better or worse off with the exit of Blockbuster, Hollywood, and Movie Gallery.
Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.
The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.
If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.
Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.
He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post hopes to provide a simplified explanation of these issues and why bias and power matter.
What is bias?
We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.
In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
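The coin example can be simulated in a few lines (a toy sketch with made-up numbers): the flipper and the arithmetic are completely honest, but the instrument is warped, so the results are biased.

```python
import random

random.seed(42)  # fixed seed for reproducibility

FAIR_VALUE = 0.50     # the "true" probability of heads for a fair coin
ACTUAL_HEADS = 0.75   # this particular coin is warped toward heads

flips = [random.random() < ACTUAL_HEADS for _ in range(100_000)]
estimate = sum(flips) / len(flips)

# The honest estimate lands near 0.75, not 0.50: biased results
# without any biased researcher.
print(round(estimate, 3))
```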
Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.
Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.
The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).
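The elasticity arithmetic in those hypothetical findings is just a ratio of percentage changes; a minimal helper (the function name is my own):

```python
def employment_elasticity(pct_change_employment, pct_change_wage):
    """Elasticity of employment with respect to the minimum wage:
    the percentage change in employment divided by the percentage
    change in the wage."""
    return pct_change_employment / pct_change_wage

# A 10 percent wage hike and a 0.1 percent employment drop: ho-hum.
print(employment_elasticity(-0.1, 10))
# A 10 percent wage hike and a 3 percent employment drop: publishable.
print(employment_elasticity(-3, 10))
```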
“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.
On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in the employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.
A study with low statistical power has a reduced chance of detecting a true effect.
Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.
An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (the null hypothesis) and the high burden of proof (beyond a reasonable doubt) are designed to reduce these errors. In statistics, this is known as a “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.
Failing to convict a known criminal is also a serious error, though it’s generally agreed to be less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative,” and the probability of making a Type II error is called beta.
By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be mathematically shown (though not here) that a reduction in the probability of Type I error is associated with an increase in the probability of Type II error.
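The tradeoff is easy to see numerically. Here is a sketch under assumed conditions (a one-sided z-test, a true effect of half a standard deviation, and a sample of 25 — all my illustrative choices): tightening alpha mechanically raises beta.

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

def type2_error(alpha, effect_size=0.5, n=25):
    """Beta for a one-sided z-test of H0 (no effect) when the true
    effect is `effect_size` standard deviations, with n observations."""
    critical_z = std_normal.inv_cdf(1 - alpha)   # rejection threshold under H0
    shift = effect_size * n ** 0.5               # how far the truth sits from H0
    return std_normal.cdf(critical_z - shift)    # P(fail to reject | effect real)

# Lowering alpha (fewer false positives) raises beta (more false negatives).
for alpha in (0.10, 0.05, 0.01):
    beta = type2_error(alpha)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f} (power = {1 - beta:.3f})")
```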
Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.
In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated coefficient on the elasticity of employment with respect to the minimum wage would be zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
Now, we’re getting to power.
Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.
By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent and there are 100 actual effects, the studies will, on average, find only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only 20 of them guilty.
Suppose we expect that 25 percent of those accused of a crime are truly guilty. The odds of guilt are thus R = 0.25 / 0.75 ≈ 0.33. Assume we set alpha to 0.05, and conclude the accused is guilty if our test statistic yields p < 0.05. Using Ioannidis’ formula for positive predictive value, we find:
If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.
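Those two figures follow from Ioannidis’ positive predictive value formula, PPV = (1 − β)R / [(1 − β)R + α], where power is 1 − β. A quick check of the arithmetic:

```python
def ppv(power, alpha, prior_odds):
    """Ioannidis' positive predictive value: the probability that a
    positive finding (a "guilty" verdict) reflects a true effect.
    PPV = (1 - beta) * R / ((1 - beta) * R + alpha), with power = 1 - beta."""
    return power * prior_odds / (power * prior_odds + alpha)

R = 0.25 / 0.75   # 25 percent truly guilty -> pre-trial odds of 1 to 3
ALPHA = 0.05      # chance of convicting an innocent defendant

print(round(ppv(0.20, ALPHA, R), 2))  # low power test  -> 0.57
print(round(ppv(0.80, ALPHA, R), 2))  # high power test -> 0.84
```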
In other words, a “guilty” verdict from a low power test is more likely to be a wrongful conviction than one from a high power test.
In our minimum wage example, a low power study is more likely to find a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even if a relationship truly exists, a low power study would be more likely to find a bigger impact than a high power study. The figure below demonstrates this phenomenon.
Across the 1,424 studies surveyed, the average elasticity with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.
(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)
Is economics bogus?
It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:
Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.
For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:
[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.
Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields—social psychology is one of the worst. However, these are longer-term solutions.
In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”
Unexpectedly, on the day that the white copy of the upcoming repeal of the 2015 Open Internet Order was published, a mobile operator in Portugal with about 7.5 million subscribers is garnering a lot of attention. Curiously, it’s not because Portugal is a beautiful country (Iker Casillas’ Instagram feed is dope) nor because Portuguese is a beautiful romance language.
Rather it’s because old-fashioned misinformation is being peddled to perpetuate doomsday images that Portuguese ISPs have carved the Internet into pieces — and if the repeal of the 2015 Open Internet Order passes, the same butchery is coming to an AT&T store near you.
Much ado about data
This tempest in a teapot is about mobile data plans, specifically the ability of mobile subscribers to supplement their data plan (typically ranging from 200 MB to 3 GB per month) with additional 10 GB data packages containing specific bundles of apps – messaging apps, social apps, video apps, music apps, and email and cloud apps. Each additional 10 GB data package costs EUR 6.99 per month, and Meo (the mobile operator) also offers its own zero rated apps. Similar plans have been offered in Portugal since at least 2012.
These data packages are a clear win for mobile subscribers, especially pre-paid subscribers, who tend to be at a lower income level than post-paid subscribers. They allow consumers to customize their plan beyond their mobile broadband subscription, enabling them to consume data in ways that are better attuned to their preferences. Without access to these data packages, consuming an additional 10 GB of data would cost each user an additional EUR 26 per month and require her to enter into a two-year contract.
These discounted data packages also facilitate product differentiation among mobile operators that offer a variety of plans. Keeping with the Portugal example, Vodafone Portugal offers 20 GB of additional data for certain apps (Facebook, Instagram, SnapChat, and Skype, among others) with the purchase of a 3 GB mobile data plan. Consumers can pick which operator offers the best plan for them.
In addition, data packages like the ones in question here tend to increase the overall consumption of content, reduce users’ cost of obtaining information, and allow for consumers to experiment with new, less familiar apps. In short, they are overwhelmingly pro-consumer.
Even if Portugal actually didn’t have net neutrality rules, this would be the furthest thing from the apocalypse critics make it out to be.
Net Neutrality in Portugal
But, contrary to activists’ misinformation, Portugal does have net neutrality rules. The EU implemented its net neutrality framework in November 2015 as a regulation, meaning that the regulation became the law of the EU when it was enacted, and national governments, including Portugal, did not need to transpose it into national legislation.
While the regulation was automatically enacted in Portugal, the regulation and the 2016 EC guidelines left the decision of whether to allow sponsored data and zero rating plans in the hands of national regulators (the Regulation would likely classify the data packages at issue here as zero rating plans, because they give users a lot of data for a low price). While Portugal is still formulating the standard it will use to evaluate sponsored data and zero rating under the EU’s framework, there is little reason to think that this common practice would be disallowed in Portugal.
In fact, despite its strong net neutrality regulation, the EU appears to be softening its stance toward zero rating. This was evident in a recent study for the EC’s competition policy authority (DG Comp) concluding that there is little reason to believe that such data practices raise concerns.
The activists’ willful misunderstanding of clearly pro-consumer data plans and purposeful mischaracterization of Portugal as not having net neutrality rules are inflammatory and deceitful. Even more puzzling for the activists (but great for consumers) is that nothing in the 2015 Open Internet Order would prevent these types of data packages from being offered in the US, so long as ISPs are transparent with consumers.
A handful of increasingly noisy critics of intellectual property (IP) have emerged within free market organizations. Both the emergence and vehemence of this group has surprised most observers, since free market advocates generally support property rights. It’s true that there has long been a strain of IP skepticism among some libertarian intellectuals. However, the surprised observer would be correct to think that the latest critique is something new. In our experience, most free market advocates see the benefit and importance of protecting the property rights of all who perform productive labor – whether the results are tangible or intangible.
How do the claims of this emerging critique stand up? We have had occasion to examine the arguments of free market IP skeptics before. (For example, see here, here, here.) So far, we have largely found their claims wanting.
We have yet another occasion to examine their arguments, and once again we are underwhelmed and disappointed. We recently posted an essay at AEI’s Tech Policy Daily prompted by an odd report recently released by the Mercatus Center, a free-market think tank. The Mercatus report attacks recent research that supposedly asserts, in the words of the authors of the Mercatus report, that “the existence of intellectual property in an industry creates the jobs in that industry.” They contend that this research “provide[s] no theoretical or empirical evidence to support” its claims of the importance of intellectual property to the U.S. economy.
Our AEI essay responds to these claims by explaining how these IP skeptics both mischaracterize the studies that they are attacking and fail to acknowledge the actual historical and economic evidence on the connections between IP, innovation, and economic prosperity. We recommend that anyone who may be confused by the assertions of any IP skeptics waving the banner of property rights and the free market read our essay at AEI, as well as our previous essays in which we have called out similarly odd statements from Mercatus about IP rights.
The Mercatus report, though, exemplifies many of the concerns we raise about these IP skeptics, and so it deserves to be considered at greater length.
For instance, something we touched on briefly in our AEI essay is the fact that the authors of this Mercatus report offer no empirical evidence of their own within their lengthy critique of several empirical studies, and at best they invoke thin theoretical support for their contentions.
This is odd if only because they are critiquing several empirical studies that develop careful, balanced and rigorous models for testing one of the biggest economic questions in innovation policy: What is the relationship between intellectual property and jobs and economic growth?
Apparently, the authors of the Mercatus report presume that the burden of proof is entirely on the proponents of IP, and that a bit of hand waving using abstract economic concepts and generalized theory is enough to defeat arguments supported by empirical data and plausible methodology.
This move raises a foundational question that frames all debates about IP rights today: On whom should the burden rest? On those who claim that IP has beneficial economic effects? Or on those who claim otherwise, such as the authors of the Mercatus report?
The burden of proof here is an important issue. Too often, recent debates about IP rights have started from an assumption that the entire burden of proof rests on those investigating or defending IP rights. Quite often, IP skeptics appear to believe that their criticism of IP rights needs little empirical or theoretical validation, beyond talismanic invocations of “monopoly” and anachronistic assertions that the Framers of the US Constitution were utilitarians.
As we detail in our AEI essay, though, the problem with arguments like those made in the Mercatus report is that they contradict history and empirics. For the evidence that supports this claim, including citations to the many studies that are ignored by the IP skeptics at Mercatus and elsewhere, check out the essay.
Despite these historical and economic facts, one may still believe that the US would enjoy even greater prosperity without IP. But IP skeptics who believe in this counterfactual world face a challenge. As a preliminary matter, they ought to acknowledge that they are the ones swimming against the tide of history and prevailing belief. More important, the burden of proof is on them – the IP skeptics – to explain why the U.S. has long prospered under an IP system they find so odious and destructive of property rights and economic progress, while countries that largely eschew IP have languished. This obligation is especially heavy for one who seeks to undermine empirical work such as the USPTO Report and other studies.
In sum, you can’t beat something with nothing. For IP skeptics to contest this evidence, they should offer more than polemical and theoretical broadsides. They ought to stop making faux originalist arguments that misstate basic legal facts about property and IP, and instead offer their own empirical evidence. The Mercatus report, however, is content to confine its empirics to critiques of others’ methodology – including claims their targets did not make.
For example, in addition to the several strawman attacks identified in our AEI essay, the Mercatus report constructs another strawman in its discussion of studies of copyright piracy done by Stephen Siwek for the Institute for Policy Innovation (IPI). Mercatus inaccurately and unfairly implies that Siwek’s studies on the impact of piracy in film and music assumed that every copy pirated was a sale lost – this is known as “the substitution rate problem.” In fact, Siwek’s methodology tackled that exact problem.
IPI and Siwek never seem to get credit for this, but Siwek was careful to avoid the one-to-one substitution rate estimate that Mercatus and others foist on him and then critique as empirically unsound. If one actually reads his report, it is clear that Siwek assumes that bootleg physical copies resulted in a 65.7% substitution rate, while illegal downloads resulted in a 20% substitution rate. Siwek’s methodology anticipates and renders moot the critique that Mercatus makes anyway.
After mischaracterizing these studies and their claims, the Mercatus report goes further in attacking them as supporting advocacy on behalf of IP rights. Yes, the empirical results have been used by think tanks, trade associations and others to support advocacy on behalf of IP rights. But does that advocacy make the questions asked and resulting research invalid? IP skeptics would have trumpeted results showing that IP-intensive industries had a minimal economic impact, just as Mercatus policy analysts have done with alleged empirical claims about IP in other contexts. In fact, IP skeptics at free-market institutions repeatedly invoke studies in policy advocacy that allegedly show harm from patent litigation, despite these studies suffering from far worse problems than anything alleged in their critiques of the USPTO and other studies.
Finally, we noted in our AEI essay how it was odd to hear a well-known libertarian think tank like Mercatus advocate for more government-funded programs, such as direct grants or prizes, as viable alternatives to individual property rights secured to inventors and creators. There is even more economic work being done beyond the empirical studies we cited in our AEI essay on the critical role that property rights in innovation serve in a flourishing free market, as well as work on the economic benefits of IP rights over other governmental programs like prizes.
Today, we are in the midst of a full-blown moral panic about the alleged evils of IP. It’s alarming that libertarians – the very people who should be defending all property rights – have jumped on this populist bandwagon. Imagine if free market advocates at the turn of the Twentieth Century had asserted that there was no evidence that property rights had contributed to the Industrial Revolution. Imagine them joining in common cause with the populist Progressives to suppress the enforcement of private rights and the enjoyment of economic liberty. It’s a bizarre image, but we are seeing its modern-day equivalent, as these libertarians join the chorus of voices arguing against property and private ordering in markets for innovation and creativity.
It’s also disconcerting that Mercatus appears to abandon its exceptionally high standards for scholarly work-product when it comes to IP rights. Its economic analyses and policy briefs on such subjects as telecommunications regulation, financial and healthcare markets, and the regulatory state have rightly made Mercatus a respected free-market institution. It’s unfortunate that it has lent this justly earned prestige and legitimacy to stale and derivative arguments against property and private ordering in the innovation and creative industries. It’s time to embrace the sound evidence and back off the rhetoric.
Many more, who will do far more justice than I can, will have much more to say on this, so I will only note it here. Ronald Coase has passed away. He was 102. The University of Chicago Law School has a notice here.
The first thing I wrote on the board for my students this semester was simply his name, “Coase.” I told them only on Friday that he was still an active scholar at 102.
Recently, I’ve been blogging about the difference between so-called “bias” in vertically integrated economic relationships and consumer harm (e.g., here and here). The two are different. Indeed, vertical integration and contractual arrangements are generally pro-consumer and efficient. Many of the same arguments surrounded the net neutrality debate, with skeptics arguing that the legislation was not needed (antitrust could be used when such contractual arrangements actually generated competitive harm) and that it would chill pro-competitive behavior.
In January, the Federal Communications Commission received its first complaint under the Order, against MetroPCS. So, is the complaint about a monopolist Internet Service Provider (ISP) employing vertical contracts to exclude rivals and harm consumers? You be the judge. My colleague Tom Hazlett describes the situation in his (always) excellent Financial Times column:
MetroPCS, hit with its first formal complaint, is an upstart wireless network offering low prices and short-term contracts. As part of their $40 a month “all you can eat” voice, text and data plan, they slipped in a bonus: free, unlimited YouTube videos, customised to run fast and clear. Activist groups, led by Free Press, went ballistic. Their petition to the FCC declared that the mobile provider was favouring YouTube over other video sites, creating just the sort of “walled garden” that would destroy the internet. “The new service plans offered by MetroPCS give a preview of the future in a world without adequate protections for mobile broadband users,” they wrote.
The complaint performs a great public service, revealing just how net neutrality would “adequately protect mobile broadband users”. In fact, MetroPCS advances the interests of consumers by supporting enhanced access to the applications most popular with users. Such arrangements do not sabotage internet development, but drive it.
But what about the possibility of consumer harm so prominent in the Net Neutrality Order? As Hazlett explains, not only is such a competitive threat unlikely, but the regulatory restrictions imposed by the Order will impede competition and hurt consumers (in this case, especially targeting the price sensitive customers). Indeed, the crux of the complaint surrounds an effort by MetroPCS and Google to offer consumers additional choices. Read on:
MetroPCS possesses no market power. With 8m customers, it is the country’s fifth largest mobile operator, less than one-tenth the size of Verizon. Under no theory could it force customers to patronise certain websites. It couldn’t extract monopoly cash if it tried to.
Indeed, low-cost prepaid plans of MetroPCS are popular with users who want to avoid long-term contracts and are price sensitive. Half its customers are ‘cord cutters’, subscribers whose only phone is wireless and usage is intense. Voice minutes per month average about 2,000, more than double that of larger carriers.
The $40 plan is cheap because it’s inexpensively delivered using 2G technology. It is not broadband (topping out, in third party reviews, at just 100 kbps), and has software and capacity issues. In general, voice over internet is not supported by the handsets and video streaming is not available on the network. The carrier deals with those limitations in three ways.
First, the $40 per month price tag extends a fat discount. Unlimited everything can cost $120 on faster networks. Second, it has also deployed new 4G technology, offering both a $40 tier similar to the 2G product (no video streaming), but also a pumped up version with video streaming, VoIP and everything else – without data caps – for $60 a month. Of course, this network has far larger capacity and is much zippier (reliable at 700 kbps). PC World rated the full-blown 4G service “dirt cheap”.
Third, to upgrade the cheaper-than-dirt 2G experience, MetroPCS got Google – owner of YouTube – to compress their videos for delivery over the older network. This allowed the mobile carrier to extend unlimited wildly popular YouTube content to its lowest tier subscribers. Busted! Favouring YouTube is said to violate neutrality. …
The FCC has already erred. Innovators such as MetroPCS and Google should need no defence in supplying customers’ superior choices. Neither consumers nor the internet are “protected” by rules hostile to co-operative efforts – even if money were to pass between firms – that expand outputs and lower prices. If the FCC is to take such ill-targeted attacks on competitive rivalry seriously, it will do far more to deter the open internet than to preserve it.
Not an auspicious beginning for the Net Neutrality regime — or consumers.
Baker’s central thesis in Preserving a Political Bargain builds on earlier work concerning competition policy as an implicit political bargain that was reached during the 1940s between the more extreme positions of laissez-faire on the one hand and regulation on the other. The new piece tries to explain what Baker describes as the “non-interventionist” critique of monopolization enforcement within this framework. The piece is motivated, at least in part, by the Section 2 Report debates. Baker’s basic story is fairly straightforward. Under Baker’s account, competition policy is the outcome of the political bargaining process described above. The “competition policy bargain” was then successfully modified in the 1980s in response to the Chicago School critique. According to Baker, during the 1970s and 80s, “the Supreme Court revised many if not most aspects of antitrust law along the lines suggested by legal and economic commentators loosely associated with the University of Chicago,” though this revolution changed the antitrust laws “dramatically but not fundamentally” and reflected a “bipartisan consensus in favor of reforming antitrust rules to enhance the efficiency gains arising from competition policy.”
Baker applies his “political bargain” framework to argue that the “modern non-interventionist critique,” unlike the successful attempt to modify the “terms” of the bargain in the 1980s, is highly likely to fail. Baker defines the non-interventionist critique as relying on a particular series of legal and economic arguments. For example, Baker describes the economic arguments deployed by the non-interventionists as that “markets are self-correcting,” “monopoly fosters economic growth,” “there is a single monopoly profit,” “excluded fringe rivals may not matter competitively,” “courts cannot reliably identify monopolization,” and so on. Animated by the Section 2 Hearings, Report, its withdrawal, and the subsequent controversy, Baker begins from the assumption that the non-interventionists are trying to modify an existing bargain, since non-interventionists are “the primary source of recent criticism of monopolization standards.” From there, Baker argues that this concerted effort to modify the competition bargain in favor of less intervention is unlikely to succeed because such an attempted modification is unlikely to mobilize broader political support in the current social environment.
Let me start by saying that I agree entirely with the ultimate conclusion insofar as I don’t think there is any doubt that, in the current environment, the implicit “policy bargain” is unlikely to be modified in a way that makes it more difficult for monopolization plaintiffs. I have much more trouble with the premise of the exercise, and with how one knows a deviation from the current policy bargain when he sees one, and so will focus my critique on those issues.
Baker paints the picture of a dramatic and fundamental attack by non-interventionists on monopolization enforcement. My response to the premise of the paper was: “What non-interventionist effort to further relax monopolization standards?” To be sure, there are plenty of folks who have cautioned against expansive use of Section 2. It strikes me that the fundamental weakness in Baker’s analysis is that his starting point – the “terms” of the current political bargain — derives from assumptions that don’t seem to square with reality. In other words, rather than envisioning the current debates around Section 2 as an assault by non-interventionists, there is a much more compelling case that it is the interventionists attempting to “deviate” from whatever implicit political bargain exists with respect to competition policy. Christine Varney’s declaration that there is “no such thing as a false positive” – a possibility that has been a seminal concern since The Limits of Antitrust (in 1984, no less) – immediately leaps to mind. I will turn to making the case that it is the interventionists making the offer for modification below.
But first note that Baker leaves out of his list of “economic arguments” against Section 2 both error costs and the fact that there is little empirical evidence that aggressive monopolization enforcement generates consumer benefits. This is, in my view, an important omission since Baker makes the point that all of the other economic arguments have attracted rebuttals. If there has been a rebuttal of the argument that the empirical evidence suggests that instances of anticompetitive exclusive dealing, RPM, tying and vertical integration are quite rare, or an empirical demonstration that monopolization enforcement has generated consumer welfare gains net of error and administrative costs, I’d like to see it. Further, note that the original Chicago School argument, a la Director & Levi, against monopolization enforcement was not that anticompetitive exclusion was impossible, but rather that it was sufficiently rare in the world as an empirical matter as to be irrelevant to policy formation. Baker ignores this empirical, evidence-based non-interventionist critique, which, for example, has been the core of the position taken by modern academic skeptics of monopolization enforcement like myself, Dan Crane, Tim Muris, Bruce Kobayashi, Luke Froeb, and David Evans.
What is the evidence that there is a non-interventionist attack on the current competition policy bargain as it exists with respect to monopolization? Not much. The first is that the non-interventionists are the “primary source of recent criticism of monopolization standards.” In parts of the paper, Baker equates the non-interventionists with business interests. But under that formulation, there is not much evidence to support this proposition. If anything, and as Baker readily acknowledges in a footnote, the headlines seem to tell a story of AMD, Google, Microsoft, Adobe and others expending resources to instigate antitrust enforcement against rivals, not to restrict the scope of Section 2.
Baker cites more generally the recent monopolization controversy as driven by the non-interventionist attempt to deviate from the status quo. But this part of the analysis reads to me as driven entirely by the assertion that the competition policy preferences Baker appears to prefer are part of the “political bargain,” and by deeming opposition to those (interventionist) policies attempted “deviations.” Perhaps this is a problem of hammers and nails. Baker is more interventionist than I am, and so sees obstacles between his ideal vision of antitrust law and reality as caused by non-interventionists. But I’ve got a different hammer and see different nails. For example, I read the Section 2 Report as largely (but not entirely) limited to a description of Section 2 law as it exists, with the vigorously dissenting voices coming from the interventionist crowd. As George Priest has put it:
It’s fair enough for a succeeding administration to reject policies of its predecessor. But the Justice Department report was not authored by John Yoo or Alberto Gonzales. It was the work of a year-long study that considered recommendations from 29 panels and 119 witnesses, most of them critical of the minimalist Chicago School approach to antitrust law. The report’s conclusions basically track Supreme Court law with modest extensions in areas where the Supreme Court has not ruled. Ms. Varney denounced the report in its entirety.
Finding little evidence of a strong non-interventionist attempt to impose dramatic change on Section 2 that deviates from the current political bargain, I offer an alternative hypothesis: it is the interventionists who are attempting to deviate from the current political bargain and propose change.
For starters, I think that Baker and I would agree that there actually is a “stable” competition policy bargain with respect to monopolization that has drawn bipartisan support over the last twenty years – at least in the courts. Note that even restricting attention to decisions during the George W. Bush administration from 2004-08, the total vote count of these decisions was 86-9, with 7 of 11 decisions decided unanimously, and only Leegin attracted more than two votes of dissent (and more likely, as others have pointed out, for its implications with respect to abortion jurisprudence than anything to do with the antitrust analysis of vertical restraints!). The monopolization-related decisions of the modern era, including Trinko, Linkline, Credit Suisse, and Brooke Group have all made life more difficult for plaintiffs in one way or another. But as I’ve written on this blog over and over again, the error-cost analysis embedded in these decisions is a key feature of modern Section 2 jurisprudence. So as I understand it, these decisions must be part of the current bargain. It would be difficult, in fact, to find another area of law in which the Court has articulated principles with such overriding unanimity despite persistent attempts by some scholars to advocate for an alternate overarching legal framework. I think there is a much more compelling story – and one backed by greater evidence than Baker’s narrative — to tell about the modern attempt of the interventionists to renegotiate terms. Let’s discuss some of the evidence.
For starters, the strongly-toned dissents from the Section 2 Report at both Agencies after Hearings with witnesses and testimony from all possible sides of the debate — even the parts that merely describe the law — suggest dissatisfaction with the terms of the modern bargain Baker describes, terms that are represented by the monopolization case law created over the past several decades by supermajority Supreme Court decisions. It is AAG Varney who recently, as Baker acknowledges in the paper, minimized the importance of Trinko under Section 2 in favor of “tried and true” cases like Aspen Skiing. This is, of course, to say nothing of AAG Varney’s endorsement of an antitrust policy free of error-cost considerations.
Further, it is the interventionists at the Federal Trade Commission who have turned to an expanded vision of Section 5 to evade the constraints imposed by Section 2. In fact, the Commission has explicitly announced that it does not think that the constraints imposed on plaintiffs under Section 2 should apply to the antitrust agencies! If this is not an attempt to deviate from the existing political bargain in an interventionist direction, I’m not sure what is. Put another way, interventionists are currently attempting to re-write existing Section 2 law – the “political bargain” – through Section 5. Given the Complaint in Intel and the promised use of Section 5 in broad circumstances previously covered under the Section 2 law envisioned under the “stable” bargain that Baker describes as generating bipartisan support from Democrats and Republicans, surely this is an attempt to deviate from the prior bargain.
It is the interventionists who have provided new economic arguments in favor of greater antitrust enforcement. For example, the recent trend toward reliance on behavioral economics endorsed by the agencies emerges out of dissatisfaction with Chicago and Post-Chicago School theories that adopt rational actor models and, presumably, the inability to get substantial traction in the federal courts from existing interventionist models provided by the Post-Chicago School.
The interventionist assault on the current implicit competition policy bargain goes further than the agencies, though. Congress currently has in front of it pending legislation to take the development of a rule of reason standard for minimum RPM out of the courts, a Twombly-repealer, legislation to make reverse payments in pharmaceutical patent settlements illegal, and legislation to regulate interchange fees. Every one of these proposals represents an interventionist reaction attempting to overturn a judicial application of current competition law and suggests that perhaps the interventionists do not trust the courts to oversee the political bargain.
The premise of Baker’s analysis (that the non-interventionists are strongly challenging the current status quo) is either false to begin with or practically irrelevant in light of the much more important interventionist challenge. Note again that Baker’s claim is that the non-interventionists would fail in any attempt to reduce the scope of monopolization enforcement because they will not be able to generate broader political support in the current environment. No doubt that is true. But what about the interventionists’ chances for success? Baker’s analysis provides a very interesting lens through which to evaluate questions like whether the interventionists will be successful in renegotiating the terms of the competition policy bargain. At the moment, though things may be changing, they seem to have greater political support. I think the most interesting conflict arising out of Baker’s conception of competition between stakeholders in antitrust policy is that it illuminates what might be a battle for supremacy in governing the bargain between agencies and courts. As Baker notes, the courts have been a critical part of establishing the terms of the bargain and adjudicating attempts to “re-negotiate” by private plaintiffs and agencies over time. Recently, interventionists have attempted to shift antitrust (and consumer protection) enforcement away from courts and toward administrative agencies, such as with Section 5 and the proposed CFPA. To me, these present more important and interesting policy questions than whether non-interventionists will be successful in further shrinking Section 2 law. I believe that the prediction emerging from Baker’s model depends on what happens with the political environment in the next few years.
My prediction, for what it’s worth, is that the current policy bargain will certainly hold together in the courts. The remarkable strength of the current Section 2 status quo is held together by a combination of the intuitive appeal of price theory for generalist judges relative to more interventionist Post-Chicago and behavioral economic alternatives, and the relative explanatory power of the so-called Chicago School theories relative to contenders. Nothing there has changed. I have less of a sense of the impact that Congressional changes, judicial nominations, and the rise of the EU as a monopolization enforcer will have on monopolization in the US.
Bill Northey, IA Ag Sec’y, sounds a bit like an economist (ah, turns out he has a degree in ag business and an MBA . . . ). Yes, price of seeds has gone up, but so has yield, and so has overall value. The issue, he says, is how to divide the surplus, and he suggests that it’s dividing the pie that drives farmer concerns. That’s not at all a surprise, but it’s also not much of an antitrust issue. Unless the pie could be bigger absent, say, Monsanto’s huge investment in seeds and the resulting relatively-concentrated market structure (and basing enforcement on the theoretical possibility of that counter-factual is a perilous enterprise, as Josh and I have suggested many times), this is just a question of pecuniary transfers. Sure, they matter a lot to the parties involved and there’s always an incentive to deputize the government to put a thumb on the scale of that dispute, but that’s not a matter of allocative efficiency, and not a matter for the antitrust laws.
Now we hear Iowa AG Miller pushing for the development of “the non-antitrust laws to deal with concentration.” By which he means the Packers and Stockyards Act. Maybe the DOJ has their Section 5 after all!
As if on cue, AG Miller trots out the pendulum story of antitrust enforcement–“how to bring the antitrust law back to the middle.” This is not really an accurate description, unfortunately. Even worse, it’s not an economically-sensible concept, and measuring the efficiency of antitrust enforcement by counting enforcement actions (or looking at rhetoric) is usually just flimsy cover for an essentially-political determination. Combine that with Miller’s suggestion that the P&S Act’s “unfair practices” language should be enlisted in the service of dealing with concentration, and the risk of false positives is much magnified. Which, of course, is a perfect lead-in for Christine Varney.