Paul H. Rubin is the Dobbs Professor of Economics Emeritus, Emory University, and President, Southern Economic Association, 2013
I want to thank Geoff for inviting me to blog about my new book.
My book, The Capitalist Paradox: How Cooperation Enables Free Market Competition (Bombardier Books, 2019), has been published. The main question I address in this short book is: Given the obvious benefits of markets over socialism, why do so many still oppose markets? I have been concerned with this issue for many years. Given the current state of American politics, the question is even more important than when I began the book.
I begin by pointing out that humans are not good intuitive economists. Our minds evolved in a simple setting where the economy was simple, with little trade, little specialization (except by age and gender), and little capital. In this world there was no need for our brains to evolve to understand economics. (Politics is a different story.) The main takeaway from this world was that our minds evolved to view the world as zero-sum. Zero-sum thinking is the error behind most policy errors in economics.
The second part of the argument is that in many cases, when economists are discussing efficiency issues (such as optimal taxation) listeners are hearing distribution issues. So we economists would do better to begin with a discussion showing that there are efficiency (“size of the pie”) effects before showing what they are in a particular case. That is, we should show that taxation can affect total income before showing how it does so in a particular case. I call this “really basic economics,” which should be taught before basic economics. It is sometimes said that experts understand their field so well that they are “mind blind” to the basics, and that is the situation here.
I then show that competition is an improper metaphor for economics. Discussions of competition bring up sports (indeed, economics borrowed the notion of competition from sports), and sports are zero-sum. Thus, when economists discuss competition, they reinforce people’s notion that economics is zero-sum. People do not like competition. A quote from the book:
Here are some common modifiers of “competition” and the number of Google references to each:
“Cutthroat competition” (256,000), “excessive competition” (159,000), “destructive competition” (105,000), “ruthless competition” (102,000), “ferocious competition” (66,700), “vicious competition” (53,500), “unfettered competition” (37,000), “unrestrained competition” (34,500), “harmful competition” (18,000), and “dog-eat-dog competition” (15,000). Conversely, for “beneficial competition” there are 16,400 references. For “beneficial cooperation” there are 548,000 references, and almost no references to any of the negative modifiers of cooperation.
The final point, and what ties it all together, is a discussion showing that the economy is actually more cooperative than it is competitive. There are more cooperative relationships in an economy than there are competitive interactions. The basic economic element is a transaction, and transactions are cooperative. Competition chooses the best agents to cooperate with, but cooperation does the work and creates the consumer surplus. Thus, referring to markets as “cooperative” rather than “competitive” would not only reduce hostility towards markets, but would also be more accurate.
An economist reading this book would probably not learn much economics. I do not advocate any major change in economic theory from competition to cooperation. But I propose a different way to view the economy, and one that might help us better explain what we are doing to students and to policy makers, including voters.
After spending a few years away from ICLE and directly engaging in the day-to-day grind of indigent criminal defense as a public defender, I now have a new appreciation for the ways economic tools can explain behavior I had not previously studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.
I believe the most common complaints about how public defenders represent their clients are better understood not primarily as a lack of funding, a lack of effort or care, or even simply a lack of time for overburdened lawyers, but as an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. However, others use models like contracting lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or combining approaches as some type of hybrid. These models all have their own advantages and disadvantages, but this blog post is only about the issue of price signals for lawyers who work within a public defender’s office.)
As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”
This isn’t to say that people haven’t tried to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how best to spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.
Imagine two lawyers: one who works for a public defender’s office and receives a salary that does not depend on caseload or billable hours, and another, a private defense lawyer, who charges his client for the work he puts in.
In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the backburner because of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case that will not result in a conviction, but it would take considerable time – time which could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis to file a suppression motion (i.e., it is not frivolous).
The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed upon price. The client would then have the ability to determine with counsel whether this is worthwhile.
The public defender, on the other hand, does not have price signals to determine where to put this suppression motion among his other workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching deadline for the motion to be filed, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who have no rationing principle based in personal monetary costs, would obviously prefer their public defender file any and all motions which have any chance whatsoever to help them, regardless of merit.
What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” for “not doing anything for them.” But the simple truth is that the public defender is making choices on how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with who sends the most letters and/or calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes, this means cases are worked on in chronological order, but insufficient time and effort is spent on particular cases that would have merited more investment because of quickly approaching deadlines on other cases. Sometimes this means that the most annoying clients get the most time spent on their behalf, irrespective of the merits of their case. At best, public defenders act like battlefield medics, attempting to perform triage by spending their time where they believe they can help the most.
Unlike private criminal defense lawyers, public defenders typically can’t reject cases because their caseload has grown too big, or charge a higher price to take on a particularly difficult and time-consuming case. The public defender is therefore left to guess at the best use of his time with the heuristics described above and do the very best he can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.
As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reforms of indigent defense that will be part of this broader effort should take into consideration the calculation problem inherent to the public defender’s office. Other institutional arrangements, like a well-designed voucher system, which do not suffer from this particular problem may be preferable.
The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?
It seems almost silly to ask if the last firm in a dying industry is a monopolist. But, it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself, rather than on the alternative: what if the firm, and therefore the industry, did not exist at all?
A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”
The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit by Π, and deadweight loss by H.
The authors point out that if fixed costs (or entry costs) are so high that the firm does not enter the market, the deadweight loss is equal to CS + H.
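The comparison can be made concrete with a toy linear-demand example. The numbers below are hypothetical (they are not from the Vox post): demand P = 10 − Q with constant marginal cost of 2. The deadweight loss from monopoly pricing is H, but if the firm never enters, consumers also lose CS, so the loss rises to CS + H (treating the monopolist's profit as absorbed by the fixed entry cost).

```python
# Hypothetical linear demand P = a - b*Q with constant marginal cost c.
a, b, c = 10.0, 1.0, 2.0

# Monopoly output: marginal revenue (a - 2bQ) equals marginal cost.
q_m = (a - c) / (2 * b)        # monopoly quantity: 4.0
p_m = a - b * q_m              # monopoly price: 6.0
q_comp = (a - c) / b           # efficient (competitive) quantity: 8.0

cs = 0.5 * (a - p_m) * q_m                # consumer surplus under monopoly: 8.0
profit = (p_m - c) * q_m                  # monopoly profit: 16.0
h = 0.5 * (p_m - c) * (q_comp - q_m)      # deadweight loss from monopoly pricing: 8.0

print(f"DWL with the monopolist producing: {h}")       # 8.0
print(f"DWL if the firm never enters:      {cs + h}")  # 16.0
```

In this sketch, the loss from the firm not existing at all (CS + H = 16) is twice the loss from monopoly pricing (H = 8), which is the Vox authors' point.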
Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.
In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.
Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.
Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2005. But, Blockbuster knew its retail world was changing and had thought a consolidation might help it survive that change.
But, just as Antioco can be chided for undervaluing Netflix, so should the FTC. The regulators were so focused on Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have put off the bankruptcy of the three largest video rental retailers.
Also, what’s not known is the extent to which consumers are better or worse off with the exit of Blockbuster, Hollywood, and Movie Gallery.
Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.
The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.
If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.
Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.
He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post hopes to provide a simplified explanation of these issues and why bias and power matter.
What is bias?
We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.
In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
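The coin example can be simulated in a few lines. This is a sketch with made-up numbers: we pretend the physical coin lands heads 75 percent of the time, flip it many times, and see that even an honest experimenter's estimate lands near 0.75 rather than the fair-coin value of 0.5.

```python
import random

random.seed(42)  # make the simulation reproducible

def flip(n, p_heads):
    """Simulate n tosses of a coin that lands heads with probability p_heads."""
    return sum(random.random() < p_heads for _ in range(n))

# The "true" probability of heads for a fair coin is 0.5, but suppose this
# particular coin is bent so that it actually lands heads 75% of the time.
n = 100_000
heads = flip(n, 0.75)
estimate = heads / n

print(f"estimated P(heads) = {estimate:.3f}")  # close to 0.75, far from 0.5
```

The experimenter did nothing wrong; the instrument itself produces results that deviate systematically from the fair-coin benchmark, which is what "biased results without a biased researcher" means.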
Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.
Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.
The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).
“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.
On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.
A study with low statistical power has a reduced chance of detecting a true effect.
Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.
An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) are designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.
Failing to convict a known criminal is also a serious error, but generally agreed it’s less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative” and the probability of making a Type II error is beta.
By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be shown mathematically (though not here) that, all else equal, a reduction in the probability of a Type I error is associated with an increase in the probability of a Type II error.
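The tradeoff is easy to see numerically. The sketch below uses a textbook one-sided z-test with a hypothetical effect size and sample size (numbers chosen only for illustration): as we make alpha stricter, beta, the probability of a Type II error, rises.

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

def type2_error(alpha, effect_size, n):
    """Beta (Type II error probability) for a one-sided z-test of
    H0: mean = 0 against H1: mean = effect_size, unit variance, n observations."""
    crit = z.inv_cdf(1 - alpha)      # critical value needed to reject H0
    shift = effect_size * n ** 0.5   # mean of the test statistic under H1
    return z.cdf(crit - shift)       # chance the statistic falls short of the cutoff

# Hypothetical study: true effect of 0.25 standard deviations, 50 observations.
for alpha in (0.10, 0.05, 0.01):
    beta = type2_error(alpha, effect_size=0.25, n=50)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.3f}")
```

Tightening alpha from 10 percent to 1 percent roughly doubles beta in this example: fewer innocents convicted, but many more guilty set free.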
Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.
In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated coefficient on the elasticity of employment with respect to the minimum wage would be zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
Now, we’re getting to power.
Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.
By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent, then if we know that there are 100 actual effects, the studies will find only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only 20 of them guilty.
Suppose we expect that 25 percent of those accused of a crime are truly guilty. Thus the odds of guilt are R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05 and conclude the accused is guilty if our test statistic yields p < 0.05. Using Ioannidis’ formula for positive predictive value, PPV = (power × R) / (power × R + alpha), we find:
If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.
In other words, a guilty verdict from a low power test is more likely to have convicted an innocent person than a guilty verdict from a high power test.
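The two probabilities above can be reproduced directly from Ioannidis' positive predictive value formula:

```python
def ppv(power, alpha, r):
    """Ioannidis' positive predictive value: the probability that a
    'positive' result (here, a guilty verdict) reflects a true effect.
    r is the pre-study odds that the tested relationship is real."""
    return (power * r) / (power * r + alpha)

r = 0.25 / 0.75   # 25% of the accused are truly guilty -> odds of 1:3
alpha = 0.05

print(f"power = 0.20 -> PPV = {ppv(0.20, alpha, r):.0%}")  # 57%
print(f"power = 0.80 -> PPV = {ppv(0.80, alpha, r):.0%}")  # 84%
```

Note that even the high-powered test convicts some innocents; power only shrinks, never eliminates, the share of false positives among the "guilty" verdicts.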
In our minimum wage example, a low power study is more likely to find a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even if a relationship truly exists, a low power study would be more likely to find a bigger impact than a high power study. The figure below demonstrates this phenomenon.
Across the 1,424 studies surveyed, the average elasticity with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate that among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.
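Precision weighting itself is simple arithmetic: each estimate is weighted by its precision (the inverse of its standard error), so noisy, low-powered studies count for less. The numbers below are made up for illustration; they are not the actual estimates Ioannidis and his coauthors surveyed.

```python
# Hypothetical elasticity estimates and their standard errors.
# Note the pattern typical of low power: the noisiest studies (big standard
# errors) report the biggest effects.
elasticities = [-0.40, -0.25, -0.10, -0.05, -0.02]
std_errors   = [ 0.30,  0.20,  0.05,  0.03,  0.02]

precisions = [1 / se for se in std_errors]

unweighted = sum(elasticities) / len(elasticities)
weighted = sum(e * p for e, p in zip(elasticities, precisions)) / sum(precisions)

print(f"unweighted mean elasticity: {unweighted:.3f}")  # -0.164
print(f"precision-weighted mean:    {weighted:.3f}")    # pulled toward zero
```

As in the survey, the weighted average is much smaller in magnitude than the raw average, because the precise studies cluster near zero.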
(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)
Is economics bogus?
It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:
Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.
For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:
[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.
Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields (social psychology is one of the worst). However, these are longer-term solutions.
In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”
Unexpectedly, on the day that the white copy of the upcoming repeal of the 2015 Open Internet Order was published, a mobile operator in Portugal with about 7.5 million subscribers began garnering a lot of attention. Curiously, it’s not because Portugal is a beautiful country (Iker Casillas’ Instagram feed is dope) nor because Portuguese is a beautiful romance language.
Rather it’s because old-fashioned misinformation is being peddled to perpetuate doomsday images that Portuguese ISPs have carved the Internet into pieces — and if the repeal of the 2015 Open Internet Order passes, the same butchery is coming to an AT&T store near you.
Much ado about data
This tempest in the teacup is about mobile data plans, specifically the ability of mobile subscribers to supplement their data plan (typically ranging from 200 MB to 3 GB per month) with additional 10 GB data packages containing specific bundles of apps – messaging apps, social apps, video apps, music apps, and email and cloud apps. Each additional 10 GB data package costs EUR 6.99 per month, and Meo (the mobile operator) also offers its own zero-rated apps. Similar plans have been offered in Portugal since at least 2012.
These data packages are a clear win for mobile subscribers, especially pre-paid subscribers who tend to be at a lower income level than post-paid subscribers. They allow consumers to customize their plan beyond their mobile broadband subscription, enabling them to consume data in ways that are better attuned to their preferences. Without access to these data packages, consuming an additional 10 GB of data would cost each user an additional EUR 26 per month and require her to enter into a two year contract.
These discounted data packages also facilitate product differentiation among mobile operators that offer a variety of plans. Keeping with the Portugal example, Vodafone Portugal offers 20 GB of additional data for certain apps (Facebook, Instagram, SnapChat, and Skype, among others) with the purchase of a 3 GB mobile data plan. Consumers can pick which operator offers the best plan for them.
In addition, data packages like the ones in question here tend to increase the overall consumption of content, reduce users’ cost of obtaining information, and allow for consumers to experiment with new, less familiar apps. In short, they are overwhelmingly pro-consumer.
Even if Portugal actually didn’t have net neutrality rules, this would be the furthest thing from the apocalypse critics make it out to be.
Net Neutrality in Portugal
But, contrary to activists’ misinformation, Portugal does have net neutrality rules. The EU implemented its net neutrality framework in November 2015 as a regulation, meaning that the regulation became the law of the EU when it was enacted, and national governments, including Portugal, did not need to transpose it into national legislation.
While the regulation was automatically enacted in Portugal, the regulation and the 2016 EC guidelines left the decision of whether to allow sponsored data and zero-rating plans (the data packages at issue here would likely be classified as zero-rated plans under the Regulation because they give users a lot of data for a low price) in the hands of national regulators. While Portugal is still formulating the standard it will use to evaluate sponsored data and zero rating under the EU’s framework, there is little reason to think that this common practice would be disallowed in Portugal.
In fact, despite its strong net neutrality regulation, the EU appears to be softening its stance toward zero rating. This was evident in a recent study by the EC’s competition policy authority (DG-Comp) concluding that there is little reason to believe that such data practices raise concerns.
The activists’ willful misunderstanding of clearly pro-consumer data plans and purposeful mischaracterization of Portugal as not having net neutrality rules are inflammatory and deceitful. Even more puzzling (but great for consumers) is the activists’ position, given that there is nothing in the 2015 Open Internet Order that would prevent these types of data packages from being offered in the US, so long as ISPs are transparent with consumers.
A handful of increasingly noisy critics of intellectual property (IP) have emerged within free market organizations. Both the emergence and vehemence of this group has surprised most observers, since free market advocates generally support property rights. It’s true that there has long been a strain of IP skepticism among some libertarian intellectuals. However, the surprised observer would be correct to think that the latest critique is something new. In our experience, most free market advocates see the benefit and importance of protecting the property rights of all who perform productive labor – whether the results are tangible or intangible.
How do the claims of this emerging critique stand up? We have had occasion to examine the arguments of free market IP skeptics before. (For example, see here, here, here.) So far, we have largely found their claims wanting.
We have yet another occasion to examine their arguments, and once again we are underwhelmed and disappointed. We recently posted an essay at AEI’s Tech Policy Daily prompted by an odd report recently released by the Mercatus Center, a free-market think tank. The Mercatus report attacks recent research that supposedly asserts, in the words of the authors of the Mercatus report, that “the existence of intellectual property in an industry creates the jobs in that industry.” They contend that this research “provide[s] no theoretical or empirical evidence to support” its claims of the importance of intellectual property to the U.S. economy.
Our AEI essay responds to these claims by explaining how these IP skeptics both mischaracterize the studies that they are attacking and fail to acknowledge the actual historical and economic evidence on the connections between IP, innovation, and economic prosperity. We recommend that anyone who may be confused by the assertions of any IP skeptics waving the banner of property rights and the free market read our essay at AEI, as well as our previous essays in which we have called out similarly odd statements from Mercatus about IP rights.
The Mercatus report, though, exemplifies many of the concerns we raise about these IP skeptics, and so it deserves to be considered at greater length.
For instance, something we touched on briefly in our AEI essay is the fact that the authors of this Mercatus report offer no empirical evidence of their own within their lengthy critique of several empirical studies, and at best they invoke thin theoretical support for their contentions.
This is odd if only because they are critiquing several empirical studies that develop careful, balanced and rigorous models for testing one of the biggest economic questions in innovation policy: What is the relationship between intellectual property and jobs and economic growth?
Apparently, the authors of the Mercatus report presume that the burden of proof is entirely on the proponents of IP, and that a bit of hand waving using abstract economic concepts and generalized theory is enough to defeat arguments supported by empirical data and plausible methodology.
This move raises a foundational question that frames all debates about IP rights today: On whom should the burden rest? On those who claim that IP has beneficial economic effects? Or on those who claim otherwise, such as the authors of the Mercatus report?
The burden of proof here is an important issue. Too often, recent debates about IP rights have started from an assumption that the entire burden of proof rests on those investigating or defending IP rights. Quite often, IP skeptics appear to believe that their criticism of IP rights needs little empirical or theoretical validation, beyond talismanic invocations of “monopoly” and anachronistic assertions that the Framers of the US Constitution were utilitarians.
As we detail in our AEI essay, though, the problem with arguments like those made in the Mercatus report is that they contradict history and empirics. For the evidence that supports this claim, including citations to the many studies that are ignored by the IP skeptics at Mercatus and elsewhere, check out the essay.
Despite these historical and economic facts, one may still believe that the US would enjoy even greater prosperity without IP. But IP skeptics who believe in this counterfactual world face a challenge. As a preliminary matter, they ought to acknowledge that they are the ones swimming against the tide of history and prevailing belief. More important, the burden of proof is on them – the IP skeptics – to explain why the U.S. has long prospered under an IP system they find so odious and destructive of property rights and economic progress, while countries that largely eschew IP have languished. This obligation is especially heavy for one who seeks to undermine empirical work such as the USPTO Report and other studies.
In sum, you can’t beat something with nothing. For IP skeptics to contest this evidence, they should offer more than polemical and theoretical broadsides. They ought to stop making faux originalist arguments that misstate basic legal facts about property and IP, and instead offer their own empirical evidence. The Mercatus report, however, is content to confine its empirics to critiques of others’ methodology – including claims their targets did not make.
For example, in addition to the several strawman attacks identified in our AEI essay, the Mercatus report constructs another strawman in its discussion of studies of copyright piracy done by Stephen Siwek for the Institute for Policy Innovation (IPI). Mercatus inaccurately and unfairly implies that Siwek’s studies on the impact of piracy in film and music assumed that every copy pirated was a sale lost – this is known as “the substitution rate problem.” In fact, Siwek’s methodology tackled that exact problem.
IPI and Siwek never seem to get credit for this, but Siwek was careful to avoid the one-to-one substitution rate estimate that Mercatus and others foist on him and then critique as empirically unsound. If one actually reads his report, it is clear that Siwek assumes that bootleg physical copies resulted in a 65.7% substitution rate, while illegal downloads resulted in a 20% substitution rate. Siwek’s methodology anticipates and renders moot the critique that Mercatus makes anyway.
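To make concrete why the substitution rate matters, here is a minimal illustrative sketch (ours, not Siwek's actual model) of how a lost-sales estimate changes once each pirated copy is discounted by a substitution rate. Only the two rates (65.7% for bootleg physical copies, 20% for illegal downloads) come from the text; the copy counts are hypothetical:

```python
# Illustrative only: the copy counts below are hypothetical; the
# substitution rates are the ones Siwek's report is described as using.

def estimated_lost_sales(bootleg_physical, illegal_downloads,
                         physical_rate=0.657, download_rate=0.20):
    """Each pirated copy displaces a sale only at the assumed rate,
    rather than one-for-one."""
    return bootleg_physical * physical_rate + illegal_downloads * download_rate

# A naive one-to-one assumption would count all 2,000,000 pirated
# copies as lost sales; the rate-adjusted estimate is far lower.
print(round(estimated_lost_sales(1_000_000, 1_000_000)))  # → 857000
```

The gap between 2,000,000 and roughly 857,000 is exactly the difference between the one-to-one assumption Mercatus attributes to Siwek and the rate-adjusted methodology he actually used.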
After mischaracterizing these studies and their claims, the Mercatus report goes further in attacking them as supporting advocacy on behalf of IP rights. Yes, the empirical results have been used by think tanks, trade associations and others to support advocacy on behalf of IP rights. But does that advocacy make the questions asked and resulting research invalid? IP skeptics would have trumpeted results showing that IP-intensive industries had a minimal economic impact, just as Mercatus policy analysts have done with alleged empirical claims about IP in other contexts. In fact, IP skeptics at free-market institutions repeatedly invoke studies in policy advocacy that allegedly show harm from patent litigation, despite these studies suffering from far worse problems than anything alleged in their critiques of the USPTO and other studies.
Finally, we noted in our AEI essay how it was odd to hear a well-known libertarian think tank like Mercatus advocate for more government-funded programs, such as direct grants or prizes, as viable alternatives to individual property rights secured to inventors and creators. There is even more economic work being done beyond the empirical studies we cited in our AEI essay on the critical role that property rights in innovation serve in a flourishing free market, as well as work on the economic benefits of IP rights over other governmental programs like prizes.
Today, we are in the midst of a full-blown moral panic about the alleged evils of IP. It’s alarming that libertarians – the very people who should be defending all property rights – have jumped on this populist bandwagon. Imagine if free market advocates at the turn of the Twentieth Century had asserted that there was no evidence that property rights had contributed to the Industrial Revolution. Imagine them joining in common cause with the populist Progressives to suppress the enforcement of private rights and the enjoyment of economic liberty. It’s a bizarre image, but we are seeing its modern-day equivalent, as these libertarians join the chorus of voices arguing against property and private ordering in markets for innovation and creativity.
It’s also disconcerting that Mercatus appears to abandon its exceptionally high standards for scholarly work-product when it comes to IP rights. Its economic analyses and policy briefs on such subjects as telecommunications regulation, financial and healthcare markets, and the regulatory state have rightly made Mercatus a respected free-market institution. It’s unfortunate that it has lent this justly earned prestige and legitimacy to stale and derivative arguments against property and private ordering in the innovation and creative industries. It’s time to embrace the sound evidence and back off the rhetoric.
Many others, who will do his legacy far more justice than I can, will have much more to say on this, so I will only note it here. Ronald Coase has passed away. He was 102. The University of Chicago Law School has a notice here.
The first thing I wrote on the board for my students this semester was simply his name, “Coase.” Just last Friday, I told them that he was still an active scholar at 102.
Recently, I’ve been blogging about the difference between so-called “bias” in vertically integrated economic relationships and consumer harm (e.g., here and here). The two are different. Indeed, vertical integration and contractual arrangements are generally pro-consumer and efficient. Many of the same arguments surrounded the net neutrality debate, with critics largely arguing that the legislation was not needed (antitrust could be used when such contractual arrangements actually generated competitive harm) and that it would chill pro-competitive behavior.
In January, the Federal Communications Commission received its first complaint under the Order, against MetroPCS. So, is the complaint about a monopolist Internet Service Provider (ISP) employing vertical contracts to exclude rivals and harm consumers? You be the judge. My colleague Tom Hazlett describes the situation in his (always) excellent Financial Times column:
MetroPCS, hit with its first formal complaint, is an upstart wireless network offering low prices and short-term contracts. As part of their $40 a month “all you can eat” voice, text and data plan, they slipped in a bonus: free, unlimited YouTube videos, customised to run fast and clear. Activist groups, led by Free Press, went ballistic. Their petition to the FCC declared that the mobile provider was favouring YouTube over other video sites, creating just the sort of “walled garden” that would destroy the internet. “The new service plans offered by MetroPCS give a preview of the future in a world without adequate protections for mobile broadband users,” they wrote.
The complaint performs a great public service, revealing just how net neutrality would “adequately protect mobile broadband users”. In fact, MetroPCS advances the interests of consumers by supporting enhanced access to the applications most popular with users. Such arrangements do not sabotage internet development, but drive it.
But what about the possibility of consumer harm so prominent in the Net Neutrality Order? As Hazlett explains, not only is such a competitive threat unlikely, but the regulatory restrictions imposed by the Order will impede competition and hurt consumers (in this case, especially price-sensitive customers). Indeed, the crux of the complaint surrounds an effort by MetroPCS and Google to offer consumers additional choices. Read on:
MetroPCS possesses no market power. With 8m customers, it is the country’s fifth largest mobile operator, less than one-tenth the size of Verizon. Under no theory could it force customers to patronise certain websites. It couldn’t extract monopoly cash if it tried to.
Indeed, low-cost prepaid plans of MetroPCS are popular with users who want to avoid long-term contracts and are price sensitive. Half its customers are ‘cord cutters’, subscribers whose only phone is wireless and usage is intense. Voice minutes per month average about 2,000, more than double that of larger carriers.
The $40 plan is cheap because it’s inexpensively delivered using 2G technology. It is not broadband (topping out, in third party reviews, at just 100 kbps), and has software and capacity issues. In general, voice over internet is not supported by the handsets and video streaming is not available on the network. The carrier deals with those limitations in three ways.
First, the $40 per month price tag extends a fat discount. Unlimited everything can cost $120 on faster networks. Second, it has also deployed new 4G technology, offering both a $40 tier similar to the 2G product (no video streaming), but also a pumped up version with video streaming, VoIP and everything else – without data caps – for $60 a month. Of course, this network has far larger capacity and is much zippier (reliable at 700 kbps). PC World rated the full-blown 4G service “dirt cheap”.
Third, to upgrade the cheaper-than-dirt 2G experience, MetroPCS got Google – owner of YouTube – to compress their videos for delivery over the older network. This allowed the mobile carrier to extend unlimited wildly popular YouTube content to its lowest tier subscribers. Busted! Favouring YouTube is said to violate neutrality. …
The FCC has already erred. Innovators such as MetroPCS and Google should need no defence in supplying customers’ superior choices. Neither consumers nor the internet are “protected” by rules hostile to co-operative efforts – even if money were to pass between firms – that expand outputs and lower prices. If the FCC is to take such ill-targeted attacks on competitive rivalry seriously, it will do far more to deter the open internet than to preserve it.
Not an auspicious beginning for the Net Neutrality regime — or consumers.
Baker’s central thesis in Preserving a Political Bargain builds on earlier work concerning competition policy as an implicit political bargain that was reached during the 1940s between the more extreme positions of laissez-faire on the one hand and regulation on the other. The new piece tries to explain what Baker describes as the “non-interventionist” critique of monopolization enforcement within this framework. The piece is motivated, at least in part, by the Section 2 Report debates. Baker’s basic story is fairly straightforward. Under Baker’s account, competition policy is the outcome of the political bargaining process described above. The “competition policy bargain” was then successfully modified in the 1980s in response to the Chicago School critique. According to Baker, during the 1970s and 80s, “the Supreme Court revised many if not most aspects of antitrust law along the lines suggested by legal and economic commentators loosely associated with the University of Chicago,” though this revolution changed the antitrust laws “dramatically but not fundamentally” and reflected a “bipartisan consensus in favor of reforming antitrust rules to enhance the efficiency gains arising from competition policy.”
Baker applies his “political bargain” framework to argue that the “modern non-interventionist critique,” unlike the successful attempt to modify the “terms” of the bargain in the 1980s, is highly likely to fail. Baker defines the non-interventionist critique as relying on a particular series of legal and economic arguments. For example, Baker describes the economic arguments deployed by the non-interventionists as that “markets are self-correcting,” “monopoly fosters economic growth,” “there is a single monopoly profit,” “excluded fringe rivals may not matter competitively,” “courts cannot reliably identify monopolization,” and so on. Animated by the Section 2 Hearings, Report, its withdrawal, and the subsequent controversy, Baker begins from the assumption the non-interventionists are trying to modify an existing bargain, since non-interventionists are “the primary source of recent criticism of monopolization standards.” From there, Baker argues that this concerted effort to modify the competition bargain in favor of less intervention is unlikely to succeed because such an attempted modification is unlikely to mobilize broader political support in the current social environment.
Let me start by saying that I agree entirely with the ultimate conclusion: I don’t think there is any doubt that, in the current environment, the implicit “policy bargain” is unlikely to be modified in a way that makes it more difficult for monopolization plaintiffs. I have much more trouble with the premise of the exercise, and with how one knows a deviation from the current policy bargain when he sees one, so I will focus my critique on those issues.
Baker paints the picture of a dramatic and fundamental attack by non-interventionists on monopolization enforcement. My response to the premise of the paper was: “What non-interventionist effort to further relax monopolization standards?” To be sure, there are plenty of folks who have cautioned against expansive use of Section 2. It strikes me that the fundamental weakness in Baker’s analysis is that his starting point – the “terms” of the current political bargain – derives from assumptions that don’t seem to square with reality. In other words, rather than envisioning the current debates around Section 2 as an assault by non-interventionists, there is a much more compelling case that it is the interventionists attempting to “deviate” from whatever implicit political bargain exists with respect to competition policy. Christine Varney’s declaration that there is “no such thing as a false positive” – dismissing an observation that has been seminal since The Limits of Antitrust (in 1984, no less) – immediately leaps to mind. I will turn to making the case that it is the interventionists making the offer for modification below.
But first note that Baker leaves out of his list of “economic arguments” against Section 2 both error costs and the scarcity of empirical evidence that aggressive monopolization enforcement generates consumer benefits. This is, in my view, an important omission, since Baker makes the point that all of the other economic arguments have attracted rebuttals. If there has been a rebuttal of the argument that the empirical evidence suggests that instances of anticompetitive exclusive dealing, RPM, tying and vertical integration are quite rare, or an empirical demonstration that monopolization enforcement has generated consumer welfare gains net of error and administrative costs, I’d like to see it. Further, note that the original Chicago School argument, a la Director & Levi, against monopolization enforcement was not that anticompetitive exclusion was impossible, but rather that it was sufficiently rare in the world as an empirical matter as to be irrelevant to policy formation. Baker ignores this empirical, evidence-based non-interventionist critique, which, for example, has been the core of the position taken by modern academic skeptics of monopolization enforcement like myself, Dan Crane, Tim Muris, Bruce Kobayashi, Luke Froeb, and David Evans.
What is the evidence that there is a non-interventionist attack on the current competition policy bargain as it exists with respect to monopolization? Not much. The first piece of evidence offered is that the non-interventionists are the “primary source of recent criticism of monopolization standards.” In parts of the paper, Baker equates the non-interventionists with business interests. But under that formulation, there is not much evidence to support the proposition. If anything, and as Baker readily acknowledges in a footnote, the headlines seem to tell a story of AMD, Google, Microsoft, Adobe and others expending resources to instigate antitrust enforcement against rivals, not to restrict the scope of Section 2.
Baker cites more generally the recent monopolization controversy as driven by a non-interventionist attempt to deviate from the status quo. But this part of the analysis reads to me as driven entirely by assertion: the competition policy preferences that Baker appears to favor are deemed part of the “political bargain,” and opposition to those (interventionist) policies is deemed an attempted “deviation.” Perhaps this is a problem of hammers and nails. Baker is more interventionist than I, and so sees obstacles between his ideal vision of antitrust law and reality as caused by non-interventionists. But I’ve got a different hammer and see different nails. For example, I read the Section 2 Report as largely (but not entirely) limited to a description of Section 2 law as it exists, with the vigorously dissenting voices coming from the interventionist crowd. As George Priest has put it:
It’s fair enough for a succeeding administration to reject policies of its predecessor. But the Justice Department report was not authored by John Yoo or Alberto Gonzales. It was the work of a year-long study that considered recommendations from 29 panels and 119 witnesses, most of them critical of the minimalist Chicago School approach to antitrust law. The report’s conclusions basically track Supreme Court law with modest extensions in areas where the Supreme Court has not ruled. Ms. Varney denounced the report in its entirety.
Finding the evidence lacking of some strong non-interventionist attempt to impose dramatic change on Section 2 that deviates from the current political bargain, I offer an alternative hypothesis: it is the interventionists that are attempting to deviate from the current political bargain and propose change.
For starters, I think that Baker and I would agree that there actually is a “stable” competition policy bargain with respect to monopolization that has drawn bipartisan support over the last twenty years – at least in the courts. Note that even restricting attention to decisions during the George W. Bush administration from 2004-08, the total vote count of these decisions was 86-9, with 7 of 11 decisions decided unanimously, and only Leegin attracted more than two votes of dissent (and more likely, as others have pointed out, for its implications with respect to abortion jurisprudence than anything to do with the antitrust analysis of vertical restraints!). The monopolization-related decisions of the modern era, including Trinko, Linkline, Credit Suisse, and Brooke Group, have all made life more difficult for plaintiffs in one way or another. But as I’ve written on this blog over and over again, the error-cost analysis embedded in these decisions is a key feature of modern Section 2 jurisprudence, and so these decisions must be part of the current bargain. It would be difficult, in fact, to find another area of law in which the Court has articulated principles with such overriding unanimity despite persistent attempts by some scholars to advocate for an alternate overarching legal framework. I think there is a much more compelling story – and one backed by greater evidence than Baker’s narrative – to tell about the modern attempt of the interventionists to renegotiate terms. Let’s discuss some of the evidence.
For starters, the strongly-toned dissents from the Section 2 Report, issued after Hearings with witnesses and testimony from all possible sides of the debate – dissents extending even to the parts that merely describe the law – suggest dissatisfaction with the terms of the modern bargain that Baker describes, terms represented by the monopolization case law created over the past several decades by supermajority Supreme Court decisions. It is AAG Varney who recently, as Baker acknowledges in the paper, minimized the importance of Trinko under Section 2 in favor of “tried and true” cases like Aspen Skiing. This is, of course, to say nothing of AAG Varney’s endorsement of an antitrust policy free of error-cost considerations.
Further, it is the interventionists at the Federal Trade Commission that have turned to an expanded vision of Section 5 to evade the constraints imposed by Section 2. In fact, the Commission has explicitly announced that it does not think that the constraints imposed on plaintiffs under Section 2 should apply to the antitrust agencies! If this is not an attempt to deviate from the existing political bargain in an interventionist direction, I’m not sure what is. Put another way, interventionists are currently attempting to re-write existing Section 2 law – the “political bargain” – through Section 5. Given the Complaint in Intel and promised use of Section 5 in broad circumstances previously covered under the Section 2 law envisioned under the “stable” bargain that Baker describes as generating bipartisan support from Democrats and Republicans, surely this is an attempt to deviate from the prior bargain.
It is the interventionists that have provided new economic arguments in favor of greater antitrust enforcement. For example, the recent trend towards reliance on behavioral economics endorsed by the agencies emerges out of dissatisfaction with Chicago and Post-Chicago School theories that adopt rational actor models and, presumably, an inability to get substantial traction in the federal courts from existing interventionist models provided by the Post-Chicago School.
The interventionist assault on the current implicit competition policy bargain goes further than the agencies, though. Congress currently has in front of it pending legislation to take out of the courts the development of a rule of reason standard for minimum RPM, a Twombly-repealer, legislation to make reverse payments in pharmaceutical patent settlements illegal, and legislation to regulate interchange fees. Every one of these proposals represents an interventionist reaction to a judicial application of current competition law, and together they suggest that perhaps the interventionists do not trust the courts to oversee the political bargain.
The premise of Baker’s analysis (that the non-interventionists are strongly challenging the current status quo) is either false to begin with or practically irrelevant in light of the much more important interventionist challenge. Note again that Baker’s claim is that the non-interventionists would fail in any attempt to reduce the scope of monopolization enforcement because they will not be able to generate broader political support in the current environment. No doubt that is true. But what about the interventionists’ chances for success? Baker’s analysis provides a very interesting lens through which to evaluate questions like whether the interventionists will be successful in renegotiating the terms of the competition policy bargain. At the moment, though things may be changing, they seem to have the greater political support. I think the most interesting conflict arising out of Baker’s conception of competition between stakeholders in antitrust policy is that it illuminates what might be a battle for supremacy in governing the bargain between agencies and courts. As Baker notes, the courts have been a critical part of establishing the terms of the bargain and adjudicating attempts to “re-negotiate” by private plaintiffs and agencies over time. Recently, interventionists have attempted to shift antitrust (and consumer protection) enforcement away from courts and towards administrative agencies, such as with Section 5 and the proposed CFPA. To me, these present more important and interesting policy questions than whether non-interventionists will be successful in further shrinking Section 2 law. I believe that the prediction emerging from Baker’s model depends on what happens with the political environment in the next few years.
My prediction, for what it’s worth, is that the current policy bargain will certainly hold together in the courts. The remarkable strength of the current Section 2 status quo rests on a combination of the intuitive appeal of price theory to generalist judges and the greater explanatory power of the so-called Chicago School theories relative to more interventionist Post-Chicago and behavioral alternatives. Nothing there has changed. I have less of a sense of what impact Congressional changes, judicial nominations, and the rise of the EU as a monopolization enforcer will have on monopolization enforcement in the US.
Bill Northey, IA Ag Sec’y, sounds a bit like an economist (ah, turns out he has a degree in ag business and an MBA . . . ). Yes, price of seeds has gone up, but so has yield, and so has overall value. The issue, he says, is how to divide the surplus, and he suggests that it’s dividing the pie that drives farmer concerns. That’s not at all a surprise, but it’s also not much of an antitrust issue. Unless the pie could be bigger absent, say, Monsanto’s huge investment in seeds and the resulting relatively-concentrated market structure (and basing enforcement on the theoretical possibility of that counter-factual is a perilous enterprise, as Josh and I have suggested many times), this is just a question of pecuniary transfers. Sure, they matter a lot to the parties involved and there’s always an incentive to deputize the government to put a thumb on the scale of that dispute, but that’s not a matter of allocative efficiency, and not a matter for the antitrust laws.
Now we hear Iowa AG Miller pushing for the development of “the non-antitrust laws to deal with concentration.” By which he means the Packers and Stockyards Act. Maybe the DOJ has their Section 5 after all!
As if on cue, AG Miller trots out the pendulum story of antitrust enforcement–“how to bring the antitrust law back to the middle.” This is not really an accurate description, unfortunately. Even worse, it’s not an economically-sensible concept, and measuring the efficiency of antitrust enforcement by counting enforcement actions (or looking at rhetoric) is usually just flimsy cover for an essentially-political determination. Combine that with Miller’s suggestion that the P&S Act’s “unfair practices” language should be enlisted in the service of dealing with concentration, and the risk of false positives is much magnified. Which, of course, is a perfect lead-in for Christine Varney.