Yet another sad story was caught on camera this week: a group of police officers killing an unarmed African-American man named George Floyd. While the officers were fired from the police department, much uncertainty remains about what will happen next to hold those officers legally accountable.
A well-functioning legal system should protect the constitutional rights of American citizens to be free of unreasonable force from police officers, while also allowing police officers the ability to do their jobs safely and well. In theory, civil rights lawsuits are supposed to strike that balance.
In a civil rights lawsuit, the goal is to make the victim of a rights violation (or the victim's family) whole through monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective, it is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, decide whether constitutional rights were violated and determine the extent of damages. A functioning system of settlements would then develop as a common law emerges defining what counts as a reasonable or unreasonable use of force. This doesn't mean plaintiffs always win, either. Officers may be found to have acted reasonably under the circumstances once all the evidence is presented to a jury.
However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity. Qualified immunity started as a mechanism to protect officers from suit when they acted in “good faith.” Over time, though, the doctrine has evolved away from a subjective test based upon the actor’s good faith to an objective test based upon notice in judicial precedent. As a result, courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it. In the words of the Supreme Court, qualified immunity protects “all but the plainly incompetent or those who knowingly violate the law.”
This standard has predictably led to a situation where officer misconduct that judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases in which federal courts found an officer’s conduct illegal yet nonetheless protected by qualified immunity.
Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity. On top of that, the regular practice of governments is to indemnify officers even when there is a settlement or a judgment. The result is to encourage police officers to take insufficient care when making the choice about the level of force to use.
Economics 101 makes a clear prediction: When unreasonable uses of force go unpunished, you get more unreasonable uses of force. Unfortunately, the news continues to illustrate the accuracy of this prediction.
In the wake of the launch of Facebook’s content oversight board, Republican Senator Josh Hawley and FCC Commissioner Brendan Carr, among others, have taken to Twitter to levy criticisms at the firm and, in the process, demonstrate just how far the Right has strayed from its first principles around free speech and private property. Commissioner Carr’s thread makes the case that the members of the board are highly partisan, mostly left-wing, and cannot be trusted with the responsibility of oversight. Senator Hawley, for his part, took the approach that the Board’s very existence is just further evidence of the need to break Facebook up.
Both Hawley and Carr have been lauded in right-wing circles, but in reality their positions contradict conservative understandings of the free speech and private property protections secured by the First Amendment.
I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.
With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).
Commissioner Carr’s complaint and Senator Hawley’s antitrust approach of breaking up Facebook have much more in common with the views traditionally held by left-wing Democrats on the need for the government to regulate private actors in order to promote speech interests. Originalists and law & economics scholars, on the other hand, have consistently taken the opposite point of view: that the First Amendment protects against government infringement of speech interests, including protecting the right to editorial discretion. While there is clearly a conflict of visions in First Amendment jurisprudence, the conservative (and, in my view, correct) point of view should not be jettisoned by Republicans to achieve short-term political gains.
The First Amendment restricts government action, not private action
The First Amendment, by its very text, only applies to government action: “Congress shall make no law . . . abridging the freedom of speech.” This applies to the “State[s]” through the Fourteenth Amendment. There is extreme difficulty in finding any textual hook to say the First Amendment protects against private action, like that of Facebook. As Justice Kavanaugh explained for the Supreme Court in Manhattan Community Access Corp. v. Halleck (2019):
Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).
This was true at the adoption of the First Amendment and remains true today in a high-tech world. Federal district courts have consistently dismissed First Amendment lawsuits against Facebook on the grounds there is no state action.
For instance, in Nyabwa v. Facebook, the plaintiff initiated a civil rights lawsuit against Facebook for restricting his use of the platform. The U.S. District Court for the Southern District of Texas dismissed the case, noting:
Because the First Amendment governs only governmental restrictions on speech, Nyabwa has not stated a cause of action against FaceBook… Like his free speech claims, Nyabwa’s claims for violation of his right of association and violation of his due process rights are claims that may be vindicated against governmental actors pursuant to § 1983, but not a private entity such as FaceBook.
Similarly, in Young v. Facebook, the U.S. District Court for the Northern District of California rejected a claim that Facebook violated the First Amendment by deactivating the plaintiff’s Facebook page. The court declined to subject Facebook to the First Amendment analysis, stating that “because Young has not alleged any action under color of state law, she fails to state a claim under § 1983.”
The First Amendment restricts antitrust actions against Facebook, not Facebook’s editorial discretion over its platform
Far from restricting Facebook, the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff were to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would run headlong into the First Amendment’s protections.
There is no basis for concluding that online platforms do not have editorial discretion under the law. In fact, Facebook’s position here is very similar to that of the newspaper in Miami Herald Publishing Co. v. Tornillo, in which the Supreme Court considered a state law giving candidates for public office a right to reply in newspapers to editorials written about them. The Florida Supreme Court upheld the statute, finding it furthered the “broad societal interest in the free flow of information to the public.” The U.S. Supreme Court, despite noting the level of concentration in the newspaper industry, nonetheless reversed. The Court explicitly found the newspaper had a First Amendment right to editorial discretion:
The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.
Online platforms have the same First Amendment protections for editorial discretion. For instance, in both Search King v. Google and Langdon v. Google, two different federal district courts ruled Google’s search results are subject to First Amendment protections, both citing Tornillo.
In Zhang v. Baidu.com, another district court went so far as to recognize a Chinese search engine’s right to editorial discretion in limiting access to material about democracy movements in China. The court found that the search engine “inevitably make[s] editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information.” Much like the search engine in Zhang, Facebook is clearly making editorial judgments about what information shows up in the newsfeed and where to display it.
None of this changes because the generally applicable law is antitrust rather than some other form of regulation. For instance, in Tornillo, the Supreme Court took pains to distinguish the case from an earlier antitrust case against newspapers, Associated Press v. United States, which found that there was no broad exemption from antitrust under the First Amendment.
The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their `reason’ tells them should not be published.”
In other words, Tornillo and Associated Press establish that the government may not compel speech through regulation, including an antitrust remedy.
Once it is conceded that there is a speech interest here, the government must justify the use of antitrust law to compel Facebook to display the speech of users in the newsfeeds of others under the strict scrutiny test of the First Amendment. In other words, the use of antitrust law must be narrowly tailored to a compelling government interest. Even taking for granted that there may be a compelling government interest in facilitating a free and open platform (which is by no means certain), it is clear that this would not be narrowly tailored action.
First, “breaking up” Facebook is clearly overbroad relative to the goal of promoting free speech on the platform. There is no need to break the company up just because it has an Oversight Board that exercises editorial responsibility. There are many less restrictive means, including market competition, which has greatly expanded consumer choice in communications and connections. Second, antitrust does not really have a remedy for the free speech issues complained of here, as it would require courts to engage in long-term oversight and in the compelled speech foreclosed by Associated Press.
Note that this makes good sense from a law & economics perspective. Platforms like Facebook should be free to regulate the speech on their platforms as they see fit, and consumers are free to decide which platforms they wish to use based upon that information. While there are certainly network effects in social media, the plethora of options currently available, with low switching costs, suggests there is no basis for antitrust action against Facebook on the ground that consumers are unable to speak. In other words, the least restrictive means test of the First Amendment is best satisfied by market competition in this case.
If there were a basis for antitrust intervention against Facebook, either through merger review or as a standalone monopoly claim, the underlying issue would be harm to competition. While this would have implications for speech concerns (which may be incorporated into an analysis through quality-adjusted price), it is difficult to conceive how an antitrust remedy addressed to speech issues could be fashioned consistent with the First Amendment.
Despite now well-worn complaints by so-called conservatives in and out of government about the baneful influence of Facebook and other Big Tech companies, the First Amendment forecloses government actions that would violate the editorial discretion of these companies. Even if Commissioner Carr is right about the Board’s makeup, this latest call for antitrust enforcement against Facebook by Senator Hawley should be rejected for principled conservative reasons.
John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”
This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.
Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.
Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.”
Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s.
Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails a cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.
In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.
First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.
The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.
In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.
Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that they can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line they can raise prices after they have run all retail competition out.
Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,
“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”
This analysis suggests that a case-by-case review is necessary where antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.
Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to the United States’.
In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger…
One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.
In other words, neither part of Appelbaum’s proposition — that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States — appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.
Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study titled U.S. vs. European Broadband Deployment: What Do the Data Say? found:
U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.
Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries, so comparisons of price and speed need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th among the 29 countries studied (most of them European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th, if data usage is included (Model 3), and to 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).
Model 1: Unadjusted for demographics and content quality
Model 2: Adjusted for demographics but not content quality
Model 3: Adjusted for demographics and data usage
Model 4: Adjusted for demographics and content quality
Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:
The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing.
In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE.
Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition.
In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.
At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway. For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors.
So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There is no escaping mental models when trying to understand the world; the question is only whether we are willing to change our minds when a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”
For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question that remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.
Henry Manne was a great man, and a great father. He was, for me as for many others, one of the most important intellectual influences in my life. I will miss him dearly.
Following is his official obituary. RIP, dad.
Henry Girard Manne died on January 17, 2015 at the age of 86. A towering figure in legal education, Manne was one of the founders of the Law and Economics movement, the 20th century’s most important and influential legal academic discipline.
Manne is survived by his wife, Bobbie Manne; his children, Emily and Geoffrey Manne; two grandchildren, Annabelle and Lily Manne; and two nephews, Neal and Burton Manne. He was preceded in death by his parents, Geoffrey and Eva Manne, and his brother, Richard Manne.
Henry Manne was born on May 10, 1928, in New Orleans. The son of merchant parents, he was raised in Memphis, Tennessee. He attended Central High School in Memphis, and graduated with a BA in economics from Vanderbilt University in 1950. Manne received a JD from the University of Chicago in 1952, and a doctorate in law (SJD) from Yale University in 1966. He also held honorary degrees from Seattle University, Universidad Francesco Marroquin in Guatemala and George Mason University.
Following law school Manne served in the Air Force JAG Corps, stationed at Chanute Air Force Base in Illinois and McGuire Air Force Base in New Jersey. He practiced law briefly in Chicago before beginning his teaching career at St. Louis University in 1956. In subsequent years he also taught at the University of Wisconsin, George Washington University, the University of Rochester, Stanford University, the University of Miami, Emory University, George Mason University, the University of Chicago, and Northwestern University.
Throughout his career Henry Manne’s writings originated, developed or anticipated an extraordinary range of ideas and themes that have animated the past forty years of law and economics scholarship. For his work, Manne was named a Life Member of the American Law and Economics Association and, along with Nobel Laureate Ronald Coase, and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics.
In the 1950s and 60s Manne pioneered the application of economic principles to the study of corporations and corporate law, authoring seminal articles that transformed the field. His article, “Mergers and the Market for Corporate Control,” published in 1965, is credited with opening the field of corporate law to economic analysis and with anticipating what has come to be known as the Efficient Market Hypothesis (for which economist Eugene Fama was awarded the Nobel Prize in 2013). Manne’s 1966 book, Insider Trading and the Stock Market, was the first scholarly work to challenge the logic of insider trading laws, and remains the most influential book on the subject today.
In 1968 Manne moved to the University of Rochester with the aim of starting a new law school. Manne anticipated many of the current criticisms that have been aimed at legal education in recent years, and proposed a law school that would provide rigorous training in the economic analysis of law as well as specialized training in specific areas of law that would prepare graduates for practice immediately out of law school. Manne’s proposal for a new law school, however, drew the ire of incumbent law schools in upstate New York, which lobbied against accreditation of the new program.
While at Rochester, in 1971, Manne created the “Economics Institute for Law Professors,” in which, for the first time, law professors were offered intensive instruction in microeconomics with the aim of incorporating economics into legal analysis and theory. The Economics Institute was later moved to the University of Miami when Manne founded the Law & Economics Center there in 1974. While at Miami, Manne also began the John M. Olin Fellows Program in Law and Economics, which provided generous scholarships for professional economists to earn a law degree. That program (and its subsequent iterations) has gone on to produce dozens of professors of law and economics, as well as leading lawyers and influential government officials.
The creation of the Law & Economics Center (which subsequently moved to Emory University and then to George Mason Law School, where it continues today), was one of the foundational events in the Law and Economics Movement. Of particular importance to the development of US jurisprudence, its offerings were expanded to include economics courses for federal judges. At its peak a third of the federal bench and four members of the Supreme Court had attended at least one of its programs, and every major law school in the country today counts at least one law and economics scholar among its faculty. Nearly every legal field has been influenced by its scholarship and teaching.
When Manne became Dean of George Mason Law School in Arlington, Virginia, in 1986, he finally had the opportunity to implement the ideas he had originally developed at Rochester. Manne’s move to George Mason united him with economist James Buchanan, who was awarded the Nobel Prize for Economics in 1986 for his path-breaking work in the field of Public Choice economics, and turned George Mason University into a global leader in law and economics. His tenure as dean of George Mason, where he served as dean until 1997 and George Mason University Foundation Professor until 1999, transformed legal education by integrating a rigorous economic curriculum into the law school, and he remade George Mason Law School into one of the most important law schools in the country. The school’s Henry G. Manne Moot Court Competition for Law & Economics and the Henry G. Manne Program in Law and Economics Studies are named for him.
Manne was celebrated for his independence of mind and respect for sound reasoning and intellectual rigor, instead of academic pedigree. Soon after he left Rochester to start the Law and Economics Center, he received a call from Yale faculty member Ralph Winter (who later became a celebrated judge on the United States Court of Appeals) offering Manne a faculty position. As he recounted in an interview several years later, Manne told Winter, “Ralph, you’re two weeks and five years too late.” When Winter asked Manne what he meant, Manne responded, “Well, two weeks ago, I agreed that I would start this new center on law and economics.” When Winter asked, “And five years?” Manne responded, “And you’re five years too late for me to give a damn.”
The academic establishment’s slow and skeptical response to the ideas of law and economics eventually persuaded Manne that reform of legal education was unlikely to come from within the established order and that it would be necessary to challenge the established order from without. Upon assuming the helm at George Mason, Dean Manne immediately drew to the school faculty members laboring at less-celebrated law schools whom Manne had identified through his economics training seminars for law professors, including several alumni of his Olin Fellows programs. Today the law school is recognized as one of the world’s leading centers of law and economics.
Throughout his career, Manne was an outspoken champion of free markets and liberty. His intellectual heroes and intellectual peers were classical liberal economists like Friedrich Hayek, Ludwig von Mises, Armen Alchian and Harold Demsetz, and these scholars deeply influenced his thinking. As economist Donald Boudreaux said of Dean Manne, “I think what Henry saw in Alchian – and what Henry’s own admirers saw in Henry – was the reality that each unfailingly understood that competition in human affairs is an intrepid force…”
In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pelerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.
After leaving George Mason in 1999, Manne remained an active scholar and commenter on public affairs as a frequent contributor to the Wall Street Journal. He continued to provide novel insights on corporate law, securities law, and the reform of legal education. Following his retirement Manne became a Distinguished Visiting Professor at Ave Maria Law School in Naples, Florida. The Liberty Fund, of Indianapolis, Indiana, recently published The Collected Works of Henry G. Manne in three volumes.
For some, perhaps more than for all of his intellectual accomplishments, Manne will be remembered as a generous bon vivant who reveled in the company of family and friends. He was an avid golfer (who never scheduled a conference far from a top-notch golf course), a curious traveler, a student of culture, a passionate eater (especially of ice cream and Peruvian rotisserie chicken from El Pollo Rico restaurant in Arlington, Virginia), and a gregarious debater (who rarely suffered fools gladly). As economist Peter Klein aptly remarked: “He was a charming companion and correspondent — clever, witty, erudite, and a great social and cultural critic, especially of the strange world of academia, where he plied his trade for five decades but always as a slight outsider.”
Scholar, intellectual leader, champion of individual liberty and free markets, and builder of a great law school—Manne’s influence on law and legal education in the Twentieth Century may be unrivaled. Today, the institutions he built and the intellectual movement he led continue to thrive and to draw sustenance from his intellect and imagination.
There will be a memorial service at George Mason University School of Law in Arlington, Virginia on Friday, February 13, at 4:00 pm. In lieu of flowers the family requests that donations be made in his honor to the Law & Economics Center at George Mason University School of Law, 3301 Fairfax Drive, Arlington, VA 22201 or online at http://www.masonlec.org.
As it begins its hundredth year, the FTC is increasingly becoming the Federal Technology Commission. The agency’s role in regulating data security, privacy, the Internet of Things, high-tech antitrust and patents, among other things, has once again brought to the forefront the question of the agency’s discretion and the sources of the limits on its power. Please join us this Monday, December 16th, for a half-day conference launching the year-long “FTC: Technology & Reform Project,” which will assess both process and substance at the FTC and recommend concrete reforms to help ensure that the FTC continues to make consumers better off.
FTC Commissioner Josh Wright will give a keynote luncheon address titled, “The Need for Limits on Agency Discretion and the Case for Section 5 UMC Guidelines.” Project members will discuss the themes raised in our inaugural report and how they might inform some of the most pressing issues of FTC process and substance confronting the FTC, Congress and the courts. The afternoon will conclude with a Fireside Chat with former FTC Chairmen Tim Muris and Bill Kovacic, followed by a cocktail reception.
Lunch and Keynote Address (12:00-1:00)
FTC Commissioner Joshua Wright
Introduction to the Project and the “Questions & Frameworks” Report (1:00-1:15)
Panel 2: Section 5 and the Future of the FTC (2:45-4:00)
Paul Rubin (Emory University Law and Economics | Former Director of Advertising Economics, BE)
James Cooper (GMU Law | Former Acting Director, OPP)
Gus Hurwitz (University of Nebraska Law)
Berin Szoka (TechFreedom) (moderator)
A Fireside Chat with Former FTC Chairmen (4:15-5:30)
Tim Muris (Former FTC Chairman | George Mason University) & Bill Kovacic (Former FTC Chairman | George Washington University)
Our conference is a “widely-attended event.” Registration is $75 but free for nonprofit, media and government attendees. Space is limited, so RSVP today!
Working Group Members:
Earlier this month, Representatives Peter DeFazio and Jason Chaffetz picked up the gauntlet from President Obama’s comments on February 14 at a Google-sponsored Internet Q&A on Google+ that “our efforts at patent reform only went about halfway to where we need to go” and that he would like “to see if we can build some additional consensus on smarter patent laws.” So, Reps. DeFazio and Chaffetz introduced on March 1 the Saving High-tech Innovators from Egregious Legal Disputes (SHIELD) Act, which creates a “losing plaintiff patent-owner pays” litigation system for a single type of patent owner—patent licensing companies that purchase and license patents in the marketplace (and who sue infringers when infringers refuse their requests to license). To Google, to Representative DeFazio, and to others, these patent licensing companies are “patent trolls” who are destroyers of all things good—and the SHIELD Act will save us all from these dastardly “trolls” (is a troll anything but dastardly?).
As I and other scholars have pointed out, the “patent troll” moniker is really just a rhetorical epithet that lacks even an agreed-upon definition. The term is used loosely enough that it sometimes covers and sometimes excludes universities, Thomas Edison, Elias Howe (the inventor of the lockstitch in 1843), Charles Goodyear (the inventor of vulcanized rubber in 1839), and even companies like IBM. How can we be expected to have a reasonable discussion about patent policy when our basic terms of public discourse shift in meaning from blog to blog, article to article, speaker to speaker? The same is true of the new term, “Patent Assertion Entities,” which sounds more neutral, but has the same problem in that it also lacks any objective definition or usage.
Setting aside this basic problem of terminology for the moment, the SHIELD Act is anything but a “smarter patent law” (to quote President Obama). Some patent scholars, like Michael Risch, have begun to point out some of the serious problems with the SHIELD Act, such as its selectively discriminatory treatment of certain types of patent-owners. Moreover, as Professor Risch ably identifies, this legislation was so cleverly drafted to cover only a limited set of a specific type of patent-owner that it ended up being too clever. Unlike the previous version introduced last year, the 2013 SHIELD Act does not even apply to the flavor-of-the-day outrage over patent licensing companies—the owner of the podcast patent. (Although you wouldn’t know this if you read the supporters of the SHIELD Act like the EFF who falsely claim that this law will stop patent-owners like the podcast patent-owning company.)
There are many things wrong with the SHIELD Act, but one thing that I want to highlight here is that it is based on a falsehood: the oft-repeated claim that two Boston University researchers have proven in a study that “patent troll suits cost American technology companies over $29 billion in 2011 alone.” This is what Rep. DeFazio said when he introduced the SHIELD Act on March 1. This claim was repeated yesterday by House Members during a hearing on “Abusive Patent Litigation.” The claim that patent licensing companies cost American tech companies $29 billion in a single year (2011) has become gospel since this study, The Direct Costs from NPE Disputes, was released last summer on the Internet. (Another name for patent licensing companies is “Non Practicing Entity” or “NPE.”) A Google search of “patent troll 29 billion” produces 191,000 hits. A Google search of “NPE 29 billion” produces 605,000 hits. Such is the making of conventional wisdom.
The problem with conventional wisdom is that it is usually incorrect, and the study that produced the claim of “$29 billion imposed by patent trolls” is no different. The $29 billion cost study is deeply and fundamentally flawed, as explained by two noted professors, David Schwartz and Jay Kesan, who are also highly regarded for their empirical and economic work in patent law. In their essay, Analyzing the Role of Non-Practicing Entities in the Patent System, also released late last summer, they detailed at great length serious methodological and substantive flaws in The Direct Costs from NPE Disputes. Unfortunately, the Schwartz and Kesan essay has gone virtually unnoticed in the patent policy debates, while the $29 billion cost claim has through repetition become truth.
In the hope that at least a few more people might discover the Schwartz and Kesan essay, I will briefly summarize some of their concerns about the study that produced the $29 billion cost figure. This is not merely an academic exercise. Since Rep. DeFazio explicitly relied on the $29 billion cost claim to justify the SHIELD Act, and he and others keep repeating it, it’s important to know if it is true, because it’s being used to drive proposed legislation in the real world. If patent legislation is supposed to secure innovation, then it behooves us to know if this legislation is based on actual facts. Yet, as Schwartz and Kesan explain in their essay, the $29 billion cost claim is based on a study that is fundamentally flawed in both substance and methodology.
In terms of its methodological flaws, the study supporting the $29 billion cost claim employs an incredibly broad definition of “patent troll” that covers almost every person, corporation or university that sues someone for infringing a patent that it is not currently using to manufacture a product at that moment. While the meaning of the “patent troll” epithet shifts depending on the commentator, reporter, blogger, or scholar who is using it, one would be extremely hard pressed to find anyone embracing this expansive usage in patent scholarship or similar commentary today.
There are several reasons why the extremely broad definition of “NPE” or “patent troll” in the study is unusual even compared to uses of this term in other commentary or studies. First, and most absurdly, this definition, by necessity, includes every university in the world that sues someone for infringing one of its patents, as universities don’t manufacture goods. Second, it includes every individual and start-up company that plans to manufacture a patented invention, but is forced to sue an infringer-competitor who thwarted these business plans by its infringing sales in the marketplace. Third, it includes commercial firms throughout the wide-ranging innovation industries—from high tech to biotech to traditional manufacturing—that have at least one patent among a portfolio of thousands that is not being used at the moment to manufacture a product because it may be “well outside the area in which they make products” and yet they sue infringers of this patent (the quoted language is from the study). So, according to this study, every manufacturer becomes an “NPE” or “patent troll” if it strays too far from what somebody subjectively defines as its rightful “area” of manufacturing. What company is not branded an “NPE” or “patent troll” under this definition, or will necessarily become one in the future given inevitable changes in one’s business plans or commercial activities? This is particularly true for every person or company whose only current opportunity to reap the benefit of their patented invention is to license the technology or to litigate against the infringers who refuse license offers.
So, when almost every possible patent-owning person, university, or corporation is defined as a “NPE” or “patent troll,” why are we surprised that a study that employs this virtually boundless definition concludes that they create $29 billion in litigation costs per year? The only thing surprising is that the number isn’t even higher!
There are many other methodological flaws in the $29 billion cost study, such as its explicit assumption that patent litigation costs are “too high” without providing any comparative baseline for this conclusion. What are the costs in other areas of litigation, such as standard commercial litigation, tort claims, or disputes over complex regulations? We are not told. What are the historical costs of patent litigation? We are not told. On what basis then can we conclude that $29 billion is “too high” or even “too low”? We’re supposed to be impressed by a number that exists in a vacuum and that lacks any empirical context by which to evaluate it.
The $29 billion cost study also assumes that all litigation transaction costs are deadweight losses, which would mean that the entire U.S. court system is a deadweight loss according to the terms of this study. Every lawsuit, whether a contract, tort, property, regulatory or constitutional dispute, is, according to the assumption of the $29 billion cost study, a deadweight loss. The entire U.S. court system is an inefficient cost imposed on everyone who uses it. Really? That’s an assumption that reduces itself to absurdity—it’s a self-imposed reductio ad absurdum!
In addition to the methodological problems, there are also serious concerns about the trustworthiness and quality of the actual data used to reach the $29 billion claim in the study. All studies rely on data, and in this case, the $29 billion study used data from a secret survey done by RPX of its customers. For those who don’t know, RPX’s business model is to defend companies against these so-called “patent trolls.” So, a company whose business model is predicated on hyping the threat of “patent trolls” does a secret survey of its paying customers, and it is now known that RPX informed its customers in the survey that their answers would be used to lobby for changes in the patent laws.
As every reputable economist or statistician will tell you, such conditions encourage exaggeration and bias in a data sample by motivating participation among those who support changes to the patent law. Such a problem even has a formal name in economic studies: self-selection bias. But one doesn’t need to be an economist or statistician to be able to see the problems in relying on the RPX data to conclude that NPEs cost $29 billion per year. As the classic adage goes, “Something is rotten in the state of Denmark.”
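The self-selection problem Schwartz and Kesan identify can be illustrated with a toy simulation (all numbers and the response model here are hypothetical for illustration only, not drawn from the study): if firms with higher litigation costs are more likely to answer a survey about litigation costs, the surveyed average will systematically overstate the true average.

```python
import random

random.seed(42)

# Hypothetical population: 1,000 firms whose true annual patent-litigation
# costs are drawn from the same distribution (mean around $1M).
firms = [random.expovariate(1 / 1_000_000) for _ in range(1000)]
true_mean = sum(firms) / len(firms)

# Self-selected sample: a firm's probability of responding rises with its
# costs (i.e., firms motivated to support changes to patent law respond more).
sample = [cost for cost in firms if random.random() < min(1.0, cost / 2_000_000)]
sample_mean = sum(sample) / len(sample)

print(f"true mean cost:     ${true_mean:,.0f}")
print(f"surveyed mean cost: ${sample_mean:,.0f}")  # biased upward
```

Under this (assumed) response model the surveyed mean comes out well above the true mean even though every firm was drawn from the same distribution, which is precisely why aggregate cost figures from a survey of one advocacy-adjacent firm's paying customers warrant skepticism.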
Even worse, as I noted above, the RPX survey was confidential. RPX has continued to invoke “client confidences” in refusing to disclose its actual customer survey or the resulting data, which means that the data underlying the $29 billion claim is completely unknown and unverifiable for anyone who reads the study. Don’t worry, the researchers have told us in a footnote in the study, they looked at the data and confirmed it is good. Again, it doesn’t take economic or statistical training to know that something is not right here. Another classic cliché comes to mind at this point: “it’s not the crime, it’s the cover-up.”
In fact, keeping data secret in a published study violates well-established and longstanding norms in all scientific research that data should always be made available for testing and verification by third parties. No peer-reviewed medical or scientific journal would publish a study based on a secret data set in which the researchers have told us that we should simply trust them that the data is accurate. Its use of secret data probably explains why the $29 billion study has not yet appeared in a peer-reviewed journal, and, if economics has any claim to being an actual science, this study never will. If a study does not meet basic scientific standards for verifying data, then why are Reps. DeFazio and Chaffetz relying on it to propose national legislation that directly impacts the patent system and future innovation? If heads-in-the-clouds academics would know to reject such a study as based on unverifiable, likely biased claptrap, then why are our elected officials embracing it to create real-world legal rules?
And, to continue our running theme of classic clichés, there’s the rub. The more one looks at the actual legal requirements of the SHIELD Act, and at the supporting studies and arguments offered in its favor, the more, in the words of Professor Risch, one is left “scratching one’s head” in bewilderment. The more one thinks about the SHIELD Act, the more one realizes what it is—legislation that has been crafted at the behest of the politically powerful (such as an Internet company who can get the President to do a special appearance on its own social media website) to have the government eliminate a smaller, publicly reviled, and less politically-connected group.
In short, people may have legitimate complaints about the ways in which the court system in the U.S. generally has problems. Commentators and Congresspersons could even consider revising the general legal rules governing patent litigation for all plaintiffs and defendants to make the litigation system work better or more efficiently (by some established metric). Professor Risch has done exactly this in a recent Wired op-ed. But it’s time to call a spade a spade: the SHIELD Act is a classic example of rent-seeking, discriminatory legislation.
Truth on the Market and the International Center for Law & Economics are delighted (if a bit saddened) to announce that President Obama intends to nominate Joshua Wright, Research Director and Member of the Board of Directors of ICLE and Professor of Law at George Mason University School of Law, to be the next Commissioner at the Federal Trade Commission.
Josh holds economics and law degrees from UCLA, and he is one of only a small handful of young antitrust scholars in the legal academy to hold both a PhD in economics as well as a JD. If confirmed, he will also be only the fourth economist to serve as FTC Commissioner (following Jim Miller, George Douglas and Dennis Yao) and the first JD/PhD.
Josh’s scholarship and approach to antitrust are firmly grounded in the UCLA economics tradition, exemplified by the members of Josh’s dissertation committee — Armen Alchian, Harold Demsetz & Benjamin Klein.
For my part, I couldn’t be happier with Josh’s nomination. Josh’s “error cost” approach to antitrust and consumer protection law will be a tremendous asset to the Commission. His work is rigorous, empirically grounded, and ever-mindful of the complexities of the institutional settings in which businesses act and in which regulators enforce. I am honored to have co-authored several articles with Josh, and, like many of the readers of this blog, I have learned an incredible amount about antitrust law and economics from my interactions with him. The Commissioners and staff at the FTC will surely similarly profit from his time there.