[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Valentin Mircea, a Senior Partner at Mircea and Partners Law Firm, Bucharest, Romania.

The enforcement of competition rules in the European Union is at historic heights. Competition enforcers at the European Commission seem to believe they have reached a point of perfect equilibrium, of perfection in enforcement. “Everything we do is right,” they seem to say, because for decades no significant competition decision by the Commission has been annulled on substance. Meanwhile, the objectives of EU competition law multiply continuously, as DG Competition assumes more and more public policy objectives. Indeed, so wide is DG Competition’s remit that it has become a kind of government in itself, charged with many policy areas and confronting a host of problems in search of a cure.

The consumer welfare standard is merely affirmed, and rarely pursued, in the enforcement of the EU competition rules, where even abuse of dominance tended to be treated as a per se infringement, at least until the European Court of Justice had its say in Intel. It helps that this standard has always been of secondary importance in the European Union, where the objective of market integration has prevailed over time.

Now other issues are catching the eye of the European Commission, and the easiest way to handle matters such as the increasing power of the technology companies has been to reach for the toolkit of EU competition enforcement. A technology giant such as Google has already been hit three times with significant fines; but beyond the transient glory of these decisions, nothing significant happened in the market, to other companies, or to consumers. Or did it? I am not sure, and nobody seems to check or even care. But the impetus to investigate and fine the technology companies is unshaken — and is likely to remain so at least until the European Court of Justice has its say in a new roster of cases, which will not happen very soon.

The EU competition rules look both over- and under-enforced. This seeming paradox is explained by the formalistic approach of the European Commission and its willingness to serve political purposes, often the result of lobbying from various industries. In the European Union, competition enforcement increasingly resembles a Swiss Army knife: good for quick fixes of various problems, without entirely solving any of them.

The pursuit of political goals is not necessarily bad in itself; it seems obvious that competition enforcers should listen to the worries of the societies in which they live. Once objectives such as welfare seem to have been attained, it is thus not entirely surprising that enforcement should move towards fixing other societal problems. Take the case of the antitrust laws in the United States, whose enactment was driven not by an overwhelming concern for consumer welfare or economic efficiency but by powerful lobbies that convinced Congress to act as a referee in their long-running disputes with other industries. In spite of this not-so-glorious origin, the resultant antitrust rules have generated many benefits throughout the world and are an essential part of the effort to keep markets competitive and ensure a level playing field. So why worry that the European Commission – and, more recently, even certain national competition authorities (such as Germany’s Bundeskartellamt) – have developed a tendency to use powerful competition rules to impose order in other areas, where public opinion, whether or not it is aware of the real causes of concern, demands it?

But what is happening today is, in fact, bad, and it is setting precedents never seen before. The speed at which new fronts are being opened, with the enforcement of the EU competition rules an essential part of the weaponry, gives rise to two main areas of concern.

First, EU competition enforcers are generally ill-equipped to address sensitive technical issues that even leading experts in the field do not properly understand, such as the use of Big Data (itself a vague concept, open to various interpretations). While creating a different set of rules and a new toolkit for the digital economy does not seem warranted (debates are still raging on this subject), a dose of humility about the level of knowledge required to properly understand these interactions, and to enforce properly, would be most welcome. Venturing into territory where conventional economics does not fully apply, such as markets that lack a price, an essential element of competition, requires a prudent and diligent enforcer to hold back, advance cautiously, and act only where necessary, in an appropriate and proportionate way. Doing so is more likely to have an observably beneficial impact than is the illusory glory of simply confronting the tech giants.

Second, given the limited resources of the European Commission and the national competition authorities in the Member States, exaggerated attention to cases in the technology and digital economy sectors will mean less enforcement in the traditional economy, where cartels and other harmful behaviors still occur, often with more visible negative effects on consumers and the economy. It is no longer fashionable to tackle such cases, as they do not draw the same attention from the media and their outcomes are not likely to bring the same fame to EU competition enforcers.

More recently, in an interesting move, the new European Commission unified the competition and digital economy portfolios under the astute supervision of Commissioner Margrethe Vestager. Beyond the anomaly of combining ex-ante and ex-post powers, the move signals an even greater propensity to use competition enforcement tools to investigate and try to rein in the power of the behemoths of the digital economy. The change is a powerful political message that EU competition enforcement will be even more prone to cases and decisions motivated by the pursuit of various public policy goals.

I am not saying that the approach taken by EU competition enforcers has no chance of generating benefits for European consumers. But I worry that pressing ahead with the same determination, and with the same limited expertise among case handlers as has been demonstrated so far, is unlikely to deliver such a beneficial outcome. Moreover, contrary to the stated intention of the policy, it is likely to further chill the prospects of EU technology ventures.

Last but not least, courageous enforcement of the EU competition rules is no cure for shortcomings at the evidentiary level, which might endanger the credibility of this enforcement, its most valuable feature. Indeed, EU competition enforcement may be at its heights, but there is no certainty that it will not fall from there — and the fall could be as spectacular as the cases that brought the European Commission to this point. I thus urge DG Competition to be wise and humble, to take one step at a time, to acknowledge that markets are generally able to self-correct, and to remember that the history of the economy is little more than a cemetery of forgotten giants once assumed to be unshakeable and unstoppable.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Geoffrey A. Manne, president and founder of the International Center for Law & Economics, and Alec Stapp, Research Fellow at the International Center for Law & Economics.

[Chart omitted; source: The Economist]

Is there a relationship between concentrated economic power and political power? Do big firms have success influencing politicians and regulators to a degree that smaller firms — or even coalitions of small firms — could only dream of? That seems to be the narrative that some activists, journalists, and scholars are pushing of late. And, to be fair, it makes some intuitive sense (before you look at the data). The biggest firms have the most resources — how could they not have an advantage in the political arena?

The argument that corporate power leads to political power faces at least four significant challenges, however. First, what little empirical research there is does not support the claim. Second, there is almost no relationship between market capitalization (a proxy for economic power) and lobbying expenditures (an admittedly weak proxy for political power). Third, the absolute level of spending on lobbying in the US is surprisingly low given the potential benefits from rent-seeking (this is known as the Tullock paradox). Lastly, the proposed remedy for this supposed problem is to make antitrust more political — an intervention that is likely to make the problem worse rather than better (assuming there is a problem to begin with).

The claims that political power follows economic power

The claim that large firms or industry concentration causes political power (and thus that under-enforcement of antitrust laws is a key threat to our democratic system of government) is often repeated, and accepted as a matter of faith. Take, for example, Robert Reich’s March 2019 Senate testimony on “Does America Have a Monopoly Problem?”:

These massive corporations also possess substantial political clout. That’s one reason they’re consolidating: They don’t just seek economic power; they also seek political power.

Antitrust laws were supposed to stop what’s been going on.

* * *

[S]uch large size and gigantic capitalization translate into political power. They allow vast sums to be spent on lobbying, political campaigns, and public persuasion. (emphasis added)

Similarly, in an article in August of 2019 for The Guardian, law professor Ganesh Sitaraman argued there is a tight relationship between economic power and political power:

[R]eformers recognized that concentrated economic power — in any form — was a threat to freedom and democracy. Concentrated economic power not only allowed for localized oppression, especially of workers in their daily lives, it also made it more likely that big corporations and wealthy people wouldn’t be subject to the rule of law or democratic controls. Reformers’ answer to the concentration of economic power was threefold: break up economic power, rein it in through regulation, and tax it.

It was the reformers of the Gilded Age and Progressive Era who invented America’s antitrust laws — from the Sherman Antitrust Act of 1890 to the Clayton Act and Federal Trade Commission Acts of the early 20th century. Whether it was Republican trust-buster Teddy Roosevelt or liberal supreme court justice Louis Brandeis, courageous leaders in this era understood that when companies grow too powerful they threatened not just the economy but democratic government as well. Break-ups were a way to prevent the agglomeration of economic power in the first place, and promote an economic democracy, not just a political democracy. (emphasis added)

Luigi Zingales made a similar argument in his 2017 paper “Towards a Political Theory of the Firm”:

[T]he interaction of concentrated corporate power and politics is a threat to the functioning of the free market economy and to the economic prosperity it can generate, and a threat to democracy as well. (emphasis added)

The assumption that economic power leads to political power is not a new one. Not only, as Zingales points out, have political thinkers since Adam Smith asserted versions of the same, but more modern social scientists have continued the claims with varying (but always indeterminate) degrees of quantification. Zingales quotes Adolf Berle and Gardiner Means’ 1932 book, The Modern Corporation and Private Property, for example:

The rise of the modern corporation has brought a concentration of economic power which can compete on equal terms with the modern state — economic power versus political power, each strong in its own field. 

Russell Pittman (an economist at the DOJ Antitrust Division) argued in 1988 that rent-seeking activities would be undertaken only by firms in highly concentrated industries because:

if the industry in question is unconcentrated, then the firm may decide that the level of benefits accruing to the industry will be unaffected by its own level of contributions, so that the benefits may be enjoyed without incurrence of the costs. Such a calculation may be made by other firms in the industry, of course, with the result that a free-rider problem prevents firms individually from making political contributions, even if it is in their collective interest to do so.
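A stylized way to see this free-rider logic (our illustration, not Pittman’s): suppose lobbying raises the probability of a favorable policy by Δp, the policy is worth B to the industry as a whole and is shared roughly equally among n firms, and lobbying costs a firm c.

```latex
% A single firm lobbies only if its private expected gain exceeds its cost:
\[
  \Delta p \cdot \frac{B}{n} \;>\; c .
\]
% Illustrative (hypothetical) numbers: B = \$100M, \Delta p = 0.1, c = \$1M.
%   n = 5  (concentrated):    0.1 x \$20M = \$2M   > \$1M  -> lobby
%   n = 50 (unconcentrated):  0.1 x \$2M  = \$0.2M < \$1M  -> free-ride
```

Collectively, the 50-firm industry would still gain from lobbying, but no individual firm has the incentive to pay for it.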

For the most part, the claims are almost entirely theoretical and their support anecdotal. Reich, for example, supports his claim with two thin anecdotes from which he draws a firm (but, in fact, unsupported) conclusion: 

To take one example, although the European Union filed fined [sic] Google a record $2.7 billion for forcing search engine users into its own shopping platforms, American antitrust authorities have not moved against the company.

Why not?… We can’t be sure why the FTC chose not to pursue Google. After all, section 5 of the Federal Trade Commission Act of 1914 gives the Commission broad authority to prevent unfair acts or practices. One distinct possibility concerns Google’s political power. It has one of the biggest lobbying powerhouses in Washington, and the firm gives generously to Democrats as well as Republicans.

A clearer example of an abuse of power was revealed last November when the New York Times reported that Facebook executives withheld evidence of Russian activity on their platform far longer than previously disclosed.

Even more disturbing, Facebook employed a political opposition research firm to discredit critics. How long will it be before Facebook uses its own data and platform against critics? Or before potential critics are silenced even by the possibility? As the Times’s investigation made clear, economic power cannot be separated from political power. (emphasis added)

The conclusion — that “economic power cannot be separated from political power” — simply does not follow from the alleged evidence. 

The relationship between economic power and political power is extremely weak

Few of these assertions about the relationship between economic and political power are backed by empirical evidence. Pittman’s 1988 paper is empirical (as is his earlier 1977 paper, which looked at the relationship between industry concentration and contributions to Nixon’s re-election campaign), but it directly contradicts several other empirical studies (Zardkoohi (1985); Munger (1988); Esty and Caves (1983)) that find no correlation between concentration and political influence; indeed, Pittman’s 1988 paper is in part a response to those papers. 

In fact, as one study (Grier, Munger & Roberts (1991)) summarizes the evidence:

[O]f ten empirical investigations by six different authors/teams…, relatively few of the studies find a positive, significant relation between contributions/level of political activity and concentration, though a variety of measures of both are used…. 

There is little to recommend most of these studies as conclusive one way or the other on the question of interest. Each one suffers from a sample selection or estimation problem that renders its results suspect. (emphasis added)

And, as they point out, there is good reason to question the underlying theory of a direct correlation between concentration and political influence:

[L]egislation or regulation favorable to an industry is from the perspective of a given firm a public good, and therefore subject to Olson’s collective action problem. Concentrated industries should suffer less from this difficulty, since their sparse numbers make bargaining cheaper…. [But at the same time,] concentration itself may affect demand, suggesting that the predicted correlation between concentration and political activity may be ambiguous, or even negative. 

* * *

The only conclusion that seems possible is that the question of the correct relation between the structure of an industry and its observed level of political activity cannot be resolved theoretically. While it may be true that firms in a concentrated industry can more cheaply solve the collective action problem that inheres in political action, they are also less likely to need to do so than their more competitive brethren…. As is so often the case, the interesting question is empirical: who is right? (emphasis added)

The results of Grier, Munger & Roberts’s (1991) own empirical study are ambiguous at best (and relate only to political participation, not success, and thus not to actual political power):

[A]re concentrated industries more or less likely to be politically active? Numerous previous studies have addressed this topic, but their methods are not comparable and their results are flatly contradictory. 

On the side of predicting a positive correlation between concentration and political activity is the theory that Olson’s “free rider” problem has more bite the larger the number of participants and the smaller their respective individual benefits. Opposing this view is the claim that it is precisely because such industries are concentrated that they have less need for government intervention. They can act on their own to garner the benefits of cartelization that less concentrated industries can secure only through political activity. 

Our results indicate that both sides are right, over some range of concentration. The relation between political activity and concentration is a polynomial of degree 2, rising and then falling, achieving a peak at a four-firm concentration ratio slightly below 0.5. (emphasis added)
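Stated as a formula (the coefficient names here are ours, not the paper’s), their estimated relationship is an inverted U: political activity y as a quadratic in the four-firm concentration ratio c,

```latex
% Inverted-U relation reported by Grier, Munger & Roberts (1991):
\[
  y \;=\; \beta_0 + \beta_1 c + \beta_2 c^2 ,
  \qquad \beta_1 > 0 ,\; \beta_2 < 0 ,
\]
% which peaks where dy/dc = 0, i.e., at
\[
  c^{*} \;=\; -\,\frac{\beta_1}{2\beta_2} \;\approx\; 0.5
\]
% (slightly below 0.5, per the estimates quoted above).
```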

Despite all of this, Zingales (like others) explicitly claims that there is a clear and direct relationship between economic power and political power:

In the last three decades in the United States, the power of corporations to shape the rules of the game has become stronger… [because] the size and market share of companies has increased, which reduces the competition across conflicting interests in the same sector and makes corporations more powerful vis-à-vis consumers’ interest.

But a quick look at the empirical data continues to call this assertion into serious question. Indeed, if we look at the lobbying expenditures of the top 50 companies in the US by market capitalization, we see an extremely weak (at best) relationship between firm size and political power (as proxied by lobbying expenditures):

[Chart omitted: lobbying expenditures plotted against market capitalization for the top 50 US firms.]

Of course, once again, this says little about the effectiveness of efforts to exercise political power, which could, in theory, correlate with market power but not with expenditures. Yet the evidence on this suggests that, while concentration “increases both [political] activity and success…, [n]either firm size nor industry size has a robust influence on political activity or success.” (emphasis added). Of course there are enormous and well-known problems with measuring industry concentration, and it is not clear that even this attribute is well correlated with political activity or success. (Interestingly for the argument that profits are a big part of the story, namely that firms in industries made more concentrated by lax antitrust realize higher profits and thus have more money to spend on political influence, even concentration in the Esty and Caves study is not correlated with political expenditures.)

Indeed, a couple of examples show the wide range of lobbying expenditures for a given firm size. Costco, which currently has a market cap of $130 billion, has spent only $210,000 on lobbying so far in 2019. By contrast, Amgen, which has a $144 billion market cap, has spent $8.54 million, or more than 40 times as much. As shown in the chart above, this variance is the norm. 
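To make concrete how such a weak relationship can be measured, here is a minimal sketch of the correlation calculation behind a chart like the one above. Only the Costco and Amgen figures come from the text; the remaining firms and numbers are hypothetical placeholders, not real data.

```python
# Sketch: correlation between market cap and lobbying spend.
# Costco and Amgen figures come from the post; the "HypotheticalCo"
# rows are placeholders, NOT real data.
from math import sqrt

firms = {
    # name: (market cap in $bn, 2019 lobbying spend in $mn)
    "Costco": (130, 0.21),          # from the post
    "Amgen": (144, 8.54),           # from the post
    "HypotheticalCo A": (500, 3.0), # placeholder
    "HypotheticalCo B": (80, 6.0),  # placeholder
    "HypotheticalCo C": (900, 5.0), # placeholder
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

caps, spends = zip(*firms.values())
print(f"r = {pearson(caps, spends):.2f}")  # near zero => weak relationship
```

With numbers like these, the Pearson coefficient comes out near zero, which is what an “extremely weak (at best)” relationship looks like in the data.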

However, discussing the relative differences between these companies is less important than pointing out the absolute levels of expenditure. Spending eight and a half million dollars per year would not be prohibitive for literally thousands of firms in the US. If access is this cheap, what’s going on here?

Why is there so little money in US politics?

The Tullock paradox asks: if the return to rent-seeking is so high — which it plausibly is, given that the government spends trillions of dollars each year — why is so little money spent on influencing policymakers?

Considering the value of public policies at stake and the reputed influence of campaign contributors in policymaking, Gordon Tullock (1972) asked, why is there so little money in U.S. politics? In 1972, when Tullock raised this question, campaign spending was about $200 million. Assuming a reasonable rate of return, such an investment could have yielded at most $250-300 million over time, a sum dwarfed by the hundreds of billions of dollars worth of public expenditures and regulatory costs supposedly at stake.
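The arithmetic behind the puzzle can be restated in back-of-the-envelope form (our restatement of the figures quoted above, not Tullock’s own notation):

```latex
% Implied return on 1972 campaign spending C ~ \$200M yielding V ~ \$250--300M:
\[
  r \;=\; \frac{V - C}{C}
    \;\approx\; \frac{\$250\text{--}\$300\text{M} - \$200\text{M}}{\$200\text{M}}
    \;\approx\; 25\%\text{--}50\% ,
\]
% while spending was tiny relative to the policy stakes S
% ("hundreds of billions of dollars"):
\[
  \frac{C}{S} \;\approx\; \frac{2 \times 10^{8}}{10^{11}} \;=\; 0.2\% .
\]
```

If influence were really for sale at that price, one would expect spending to be bid up until the marginal return fell to ordinary levels; the fact that it is not is the paradox.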

A recent article by Scott Alexander updated the numbers for 2019 and compared the total to the $12 billion US almond industry:

[A]ll donations to all candidates, all lobbying, all think tanks, all advocacy organizations, the Washington Post, Vox, Mic, Mashable, Gawker, and Tumblr, combined, are still worth a little bit less than the almond industry.

Maybe it’s because spending money on donations, lobbying, think tanks, journalism and advocacy is ineffective on net (i.e., spending by one group is counterbalanced by spending by another group) and businesses know it?

In his paper on elections, Ansolabehere focuses on the corporate perspective. He argues that money neither makes a candidate much more likely to win, nor buys much influence with a candidate who does win. Corporations know this, which is why they don’t bother spending more. (emphasis added)

To his credit, Zingales acknowledges this issue:

To the extent that US corporations are exercising political influence, it seems that they are choosing less-visible but perhaps more effective ways. In fact, since Gordon Tullock’s (1972) famous article, it has been a puzzle in political science why there is so little money in politics (as discussed in this journal by Ansolabehere, de Figueiredo, and Snyder 2003).

So, what are these “less-visible but perhaps more effective” ways? Unfortunately, the evidence in support of this claim is anecdotal and unconvincing. As noted above, Reich offers only speculation and extremely weak anecdotal assertions. Meanwhile, Zingales tells the story of Robert (mistakenly identified in the paper as “Richard”) Rubin pushing through repeal of Glass-Steagall to benefit Citigroup, then getting hired for $15 million a year when he left the government. Assuming the implication is actually true, is that amount really beyond the reach of all but the largest companies? How many banks with an interest in the repeal of Glass-Steagall were really unlikely at the time to be able to credibly offer future compensation because they would be out of business? Very few, and no doubt some of the biggest and most powerful were arguably at greater risk of bankruptcy than some of the smaller banks.

Maybe only big companies have an interest in doing this kind of thing because they have more to lose? But in concentrated industries they also have more to lose by conferring the benefit on their competitors. And it’s hard to make the repeal or passage of a law, say, apply only to you and not everyone else in the industry. Maybe they collude? Perhaps, but is there any evidence of this? Zingales offers only pure speculation here, as well. For example, why was the US Google investigation dropped but not the EU one? Clearly because of White House visits, says Zingales. OK — but how much do these visits cost firms? If that’s the source of political power, it surely doesn’t require monopoly profits to obtain it. And it’s virtually impossible that direct relationships of this kind are beyond the reach of coalitions of smaller firms, or even small firms, full stop.  

In any case, the political power explanation turns mostly on doling out favors in exchange for individuals’ payoffs — which just aren’t that expensive, and it’s doubtful that the size of a firm correlates with the quality of its one-on-one influence brokering, except to the extent that causation might run the other way — which would be an indictment not of size but of politics. Of course, in the Hobbesian world of political influence brokering, as in the Hobbesian world of pre-political society, size alone is not determinative so long as alliances can be made or outcomes turn on things other than size (e.g., weapons in the pre-political world; family connections in the world of political influence).

The Noerr–Pennington doctrine is highly relevant here as well. In Noerr, the Court ruled that “no violation of the [Sherman] Act can be predicated upon mere attempts to influence the passage or enforcement of laws” and “[j]oint efforts to influence public officials do not violate the antitrust laws even though intended to eliminate competition.” This would seem to explain, among other things, the existence of trade associations and other entities used by coalitions of small (and large) firms to influence the policymaking process.

If what matters for influence peddling is ultimately individual relationships and lobbying power, why aren’t the biggest firms in the world the lobbying firms and consultant shops? Why is Rubin selling out for $15 million a year if the benefit to Citigroup is in the billions? And, if concentration is the culprit, why isn’t it plausibly also the solution? It isn’t only the state that keeps the power of big companies in check; it’s other big companies, too. What Henry G. Manne said in his testimony on the Industrial Reorganization Act of 1973 remains true today: 

There is simply no correlation between the concentration ratio in an industry, or the size of its firms, and the effectiveness of the industry in the halls of Government.

Finally, this analysis would be incomplete if it did not mention, in addition to the data presented earlier, the role of advocacy groups in influencing outcomes, the importance and size of large foundations, the influence of unions, and the weight of individual relationships.

Maybe voters matter more than money?

The National Rifle Association spends very little on direct lobbying efforts (less than $10 million over the most recent two-year cycle). The organization’s total annual budget is around $400 million. In the grand scheme of things, these are not overwhelming resources. But the NRA is widely regarded as one of the most powerful political groups in the country, particularly within the Republican Party. How could this be? In short, maybe it’s not Sturm Ruger, Remington Outdoor, and Smith & Wesson — the three largest gun manufacturers in the US — that influence gun regulations; maybe it’s the highly motivated voters who like to buy guns. 

The NRA has 5.5 million members, many of whom vote in primaries with gun rights as one of their top issues  — if not the top issue. And with low turnout in primaries — only 8.7% of all registered voters participated in 2018 Republican primaries — a candidate seeking the Republican nomination all but has to secure an endorsement from the NRA. On this issue at least, the deciding factor is the intensity of voter preferences, not the magnitude of campaign donations from rent-seeking corporations.
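Some rough arithmetic illustrates the point; the turnout share is from the text above, while the total registered-voter figure is our assumption for illustration only:

```latex
% Assumption: roughly 150M registered US voters in 2018 (illustrative figure).
% With 8.7% turnout in Republican primaries (from the text):
\[
  0.087 \times 150\text{M} \;\approx\; 13\text{M Republican primary voters.}
\]
% If even half of the NRA's 5.5M members vote in those primaries:
\[
  \frac{2.75\text{M}}{13\text{M}} \;\approx\; 21\% \text{ of the primary electorate,}
\]
% a potentially decisive bloc in most contested races.
```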

The NRA is not the only counterexample to arguments like those from Zingales. Auto dealers are a constituency that is powerful not necessarily due to its raw size but through its dispersed nature. At the state level, almost every political district has an auto dealership (and the owners are some of the wealthiest and best-connected individuals in the area). It’s no surprise then that most states ban the direct sale of cars from manufacturers (i.e., you have to go through a dealer). This results in higher prices for consumers and lower output for manufacturers. But the auto dealership industry is not highly concentrated at the national level. The dealers don’t need to spend millions of dollars lobbying federal policymakers for special protections; they can do it on the local level — on a state-by-state basis — for much less money (and without merging into consolidated national chains).

Another, more recent, case highlights the factors besides money that may affect political decisions. President Trump has been highly critical of Jeff Bezos and the Washington Post (which Bezos owns) since the beginning of his administration because he views the newspaper as a political enemy. In October, Microsoft beat out Amazon for a $10 billion contract to provide cloud infrastructure for the Department of Defense (DoD). Now, Amazon is suing the government, claiming that Trump improperly influenced the competitive bidding process and cost the company a fair shot at the contract. This case is a good example of how money may not be determinative at the margin, and also how multiple “monopolies” may have conflicting incentives and we don’t know how they net out.

Politicizing antitrust will only make this problem worse

At the FTC’s “Hearings on Competition and Consumer Protection in the 21st Century,” Barry Lynn of the Open Markets Institute advocated using antitrust to counter the political power of economically powerful firms:

[T]he main practical goal of antimonopoly is to extend checks and balances into the political economy. The foremost goal is not and must never be efficiency. Markets are made, they do not exist in any platonic ether. The making of markets is a political and moral act.

In other words, the goal of breaking up economic power is not to increase economic benefits but to decrease political influence. 

But as the author of one of the empirical analyses of the relationship between economic and political power notes, the asserted “solution” to the unsupported “problem” of excess political influence by economically powerful firms — more and easier antitrust enforcement — may actually make the alleged problem worse:

Economic rents may be obtained through the process of market competition or be obtained by resorting to governmental protection. Rational firms choose the least costly alternative. Collusion to obtain governmental protection will be less costly, the higher the concentration, ceteris paribus. However, high concentration in itself is neither necessary nor sufficient to induce governmental protection.

The result that rent-seeking activity is triggered when firms are affected by government regulation has a clear implication: to reduce rent-seeking waste, governmental interference in the market place needs to be attenuated. Pittman’s suggested approach, however, is “to maintain a vigorous antitrust policy” (p. 181). In fact, a more strict antitrust policy may exacerbate rent-seeking. For example, the firms which will be affected by a vigorous application of antitrust laws would have incentive to seek moderation (or rents) from Congress or from the enforcement officials.

Rent-seeking by smaller firms could both be more prevalent and, paradoxically, ultimately lead to increased concentration. And imbuing antitrust with an ill-defined set of vague political objectives (as many proponents of these arguments desire) would also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing both the ability and the incentive to do so. 

And if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? With an expanded basis for increased enforcement, the effort to obtain exemptions, and the ability to obtain them, will be massively increased, as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might find that we end up with even more concentration because the exceptions could subsume the rules. All of which, of course, highlights the fundamental, underlying irony of claims that we need to diminish the economic content of antitrust in order to reduce the political power of private firms: if you make antitrust more political, you’ll get less democratic, more politically determined results.

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Luigi Zingales, Robert C. McCormack Distinguished Service Professor of Entrepreneurship and Finance and Charles M. Harper Faculty Fellow at the University of Chicago Booth School of Business and director of the George J. Stigler Center for the Study of the Economy and the State; and by Filippo Maria Lancieri, Fellow at the George J. Stigler Center for the Study of the Economy and the State and JSD candidate at the University of Chicago Law School.

This symposium discusses “The Politicization of Antitrust.” As the invitation itself stated, this is an umbrella topic that encompasses a wide range of subjects: from incorporating environmental or labor concerns into antitrust enforcement, to political pressure on enforcement decision-making, to national security laws (CFIUS-type enforcement), protectionism, federalism, and more. This contribution will focus on the challenges of designing a system that protects the open markets and democracy that are the foundation of modern economic and social development.

The “Chicago School of antitrust” was highly critical of the antitrust doctrine prevailing during the Warren-era Supreme Court. A key objection was that the vague legal standards of the Sherman, Clayton, and Federal Trade Commission Acts allowed for the enforcement of antitrust policy based on what Bork called “inferential analysis from casuistic observations.” That is, without clearly defined goals and without objective standards against which to measure these goals, antitrust enforcement would become arbitrary or even a tool that governments could wield against a political enemy. To address this criticism, Bork and other key members of the Chicago School narrowed the scope of antitrust to a single objective—the maximization of allocative efficiency/total welfare (coined as “consumer welfare”)—and advocated the use of price theory as a method to reduce judicial discretion. It was up to markets and Congress/politics, not judges (and antitrust), to redistribute economic surplus or protect small businesses. Developments in economic theory and econometrics over the next decades increased the number of tools regulators and courts could rely on to measure the short-term price/output impacts of many specific types of conduct. A more conservative judiciary translated much of the Chicago School’s teaching into policy, including the triumph of Bork’s narrow interpretation of “consumer welfare.”

The Chicago School’s criticism of traditional antitrust made many valid points. Some of the Warren-era Supreme Court cases are perplexing to say the least (e.g., Brown Shoe, Von’s Grocery, Utah Pie, Schwinn). Antitrust is a very powerful tool that covers almost the entire economy. In the United States, enforcement can be initiated by multiple federal and state regulators and by private parties (for whom treble damages encourage litigation). If used without clear and objective standards, antitrust remedies could easily add an extra layer of uncertainty or could even outright prohibit perfectly legitimate conduct, which would depress competition, investment, and growth. The Chicago School was also right in warning against the creation of what it understood as extensive and potentially unchecked governmental powers to intervene in the economic sphere. At best, such extensive powers can generate rent-seeking and cronyism. At worst, they can become an instrument of political vendettas. While these concerns are always present, they are particularly worrisome now: a time of increased polarization, dysfunctional politics, and constant weakening of many governmental institutions. If “politicizing antitrust” is understood as advocating for a politically driven, uncontrolled enforcement policy, we are similarly concerned about it. Changes to antitrust policy that rely primarily on vague objectives may lead to an unmitigated disaster.

Administrability is certainly a key feature of any regulatory regime hoping to actually increase consumer welfare. Bork’s narrow interpretation of “consumer welfare” unquestionably has three important features: Its objectives are i) clearly defined, ii) clearly ranked, and iii) (somewhat) objectively measurable. Yet, whilst certainly representing some gains over previous definitions, Bork’s “consumer welfare” is not the end of history for antitrust policy. Indeed, even the triumph of “consumer welfare” is somewhat bittersweet. With time, academics challenged many of the doctrine’s key tenets. US antitrust policy also constantly accepts some form of external influences that are antagonistic to this narrow, efficiency-focused “consumer welfare” view—the “post-Chicago” United States has explicit exemptions for export cartels, State Action, the Noerr-Pennington doctrine, and regulated markets (solidified in Trinko), among others. Finally, as one of us has indicated elsewhere, while prevailing in the United States, Chicago School ideas find limited footing around the world. While there certainly are irrational or highly politicized regimes, there is little evidence that antitrust enforcement in mature jurisdictions such as the EU or even Brazil is arbitrary, is employed in political vendettas, or reflects outright protectionist policies.

Governments do not function in a vacuum. As economic, political, and social structures change, so must public policies such as antitrust. It must be possible to develop a well-designed and consistent antitrust policy that focuses on goals other than imperfectly measured short-term price/output effects—one that sits in between a narrow “consumer welfare” and uncontrolled “politicized antitrust.” An example is provided by the Stigler Committee on Digital Platforms Final Report, which defends changes to current US antitrust enforcement as a way to increase competition in digital markets. There are many similarly well-grounded proposals for changes to other specific areas, such as vertical relationships. We have not yet seen an all-encompassing, well-grounded, and generalizable framework to move beyond the “consumer welfare” standard. Nonetheless, this is simply the current state of the art, not an impossibility theorem. Academia contributes the most to society when it provides new ways to tackle hard, important questions. The Chicago School certainly did so a few decades ago. There is no reason why academia and policymakers cannot do it again.   

This is exactly why we are dedicating the 2020 Stigler Center annual antitrust conference to the topic of “monopolies and politics.” Competitive markets and democracy are often (and rightly) celebrated as the most important engines of economic and social development. Still, until recently, the relationship between the two was all but ignored. This topic had been popular in the 1930s and 1940s because many observers linked the rise of Hitler, Mussolini, and the nationalist government in Japan to the industrial concentration in the three Axis countries. Indeed, after WWII, the United States set up a “Decartelization Office” in Germany and passed the Celler-Kefauver Act to prevent gigantic conglomerates from destroying democracies. In 1949, Congressman Emanuel Celler, who sponsored the Act, declared:

“There are two main reasons why I am concerned about concentration of economic power in the United States. One is that concentration of business unavoidably leads to some kind of socialism, which is not the desire of the American people. The other is that a concentrated system is inefficient, compared with a system of free competition.

We have seen what happened in the other industrial countries of the Western World. They allowed a free growth of monopolies and cartels; until these private concentrations grew so strong that either big business would own the government or the government would have to seize control of big business. The most extreme case was in Germany, where the big business men thought they could take over the government by using Adolf Hitler as their puppet. So Germany passed from private monopoly to dictatorship and disaster.”

There are many reasons why these concerns around monopolies and democracy are resurfacing now. A key one is that freedom is in decline worldwide and so is trust in democracy, particularly amongst newer generations. At the same time, there is growing evidence that market concentration is on the rise. Correlation is not causation, thus we cannot jump to hasty conclusions. Yet, the stakes are so high that these coincidences need to be investigated further.  

Moreover, even if the correlation between monopolies and fascism were spurious, the correlation between economic concentration and political dissatisfaction in democracy might not be. The fraction of people who feel their interests are represented in government fell from almost 80% in the 1950s to 20% today. Whilst this dynamic is impacted by many different drivers, one of them could certainly be increased market concentration.

Political capture is a reality, and it seems straightforward to assume that firms’ ability to influence the political system greatly depends not only on their size but also on the degree of concentration of the markets they operate in. The reasons are numerous. In concentrated markets, legislators only hear one version of the story, and there are fewer sophisticated stakeholders to sound the alarm when wrongdoing is present, thus making it easier for the incumbents to have their way. Similarly, in concentrated markets, the one or two incumbent firms represent the main or only source of employment for retiring regulators, ensuring an incumbent’s long-term influence over policy. Concentrated markets also restrict the pool of potential employers/customers for technical experts, making it difficult for them to survive if they are hostile to the incumbent behemoths—an issue particularly concerning in complex markets where talent is both necessary and scarce. Finally, firms with market power can use their increased rents to influence public policy through lobbying or some other legal form of campaign contributions.

In other words, as markets become more concentrated, incumbent firms become better at distorting the political process in their favor. Therefore, an increase in dissatisfaction with democracy might not just be a coincidence, but might partially reflect increases in market concentration that drive politicians and regulators away from the preference of voters and closer to that of behemoths.   

We are well aware that, at the moment, these are just theories—albeit quite plausible ones. For this reason, the first day of the 2020 Stigler Center Antitrust Conference will be dedicated to presenting and critically reviewing the evidence currently available on the connections between market concentration and adverse political outcomes.

If a connection is established, then the question becomes how an antitrust (or other similar) policy aimed at preserving free markets and democracy can be implemented in a rational and consistent manner. The “consumer welfare” standard has generated measures of concentration and measures of possible harm to be used in trial. The “democratic welfare” approach would have to do the same. Fortunately, in the last 50 years political science and political economy have made great progress, so there is a growing number of potential alternative theories, evidence, and methods. For this reason, the second day of the 2020 Stigler Center Antitrust Conference will be dedicated to discussing the pros and cons of these alternatives. We are hoping to use the conference to spur further reflection on how to develop a methodology that is predictable, restricts discretion, and makes a “democratic antitrust” administrable.

As mentioned above, we agree that simply “politicizing” the current antitrust regime would be very dangerous for the economic well-being of nations. Yet, ignoring the political consequences of economic concentration on democracy can be even more dangerous—not just for the economic, but also for the democratic well-being of nations. Progress is achieved neither by returning to the past nor by staying religiously fixed on the current status quo, but by moving forward: by laying new bricks on the layers of knowledge accumulated in the past. The Chicago School helped build some important foundations of modern antitrust policy. Those foundations should not become a prison; instead, they should be the base for developing new standards capable of enhancing both economic welfare and democratic values in the spirit of what Senator John Sherman, Congressman Emanuel Celler, and other early antitrust advocates envisioned.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Steven J. Cernak, Partner at Bona Law and Adjunct Professor, University of Michigan Law School and Western Michigan University Thomas M. Cooley Law School. This paper represents the current views of the author alone and not necessarily the views of any past, present or future employer or client.

When some antitrust practitioners hear “the politicization of antitrust,” they cringe while imagining, say, merger approval hanging on the size of the bribe or closeness of the connection with the right politician.  Even a more benign interpretation of the phrase “politicization of antitrust” might drive some antitrust technocrats up the wall:  “Why must the mainstream media and, heaven forbid, politicians start weighing in on what antitrust interpretations, policy and law should be?  Don’t they know that we have it all figured out and, if we decide it needs any tweaks, we’ll make those over drinks at the ABA Antitrust Section Spring Meeting?”

While I agree with the reaction to the cringe-worthy interpretation of “politicization,” I think members of the antitrust community should be neither surprised by nor hostile to the second interpretation, that is, all the new attention from new people. Such attention is not unusual historically; more importantly, it provides an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect. 

The Sherman Act itself, along with its state-level predecessors, was the product of a political reaction to perceived problems of the late 19th Century – hence all of today’s references to a “new gilded age” as echoes of the political arguments of 1890. Since then, the Sherman Act has not been immutable. The U.S. antitrust laws have changed – and new antitrust enforcers have even been added – when political debates convinced enough people that change was necessary. Today’s political discussion might surprise so many members of the antitrust community because they were not even alive when the last major change was debated and passed.

More generally, the U.S. political position on other government regulation of – or intervention or participation in – free markets has varied considerably over the years.  While controversial when they were passed, we now take Medicare and Medicaid for granted and debate “Medicare for all” – why shouldn’t an overhaul of the Sherman Act also be a legitimate political discussion?  The Interstate Commerce Commission might be gone and forgotten but at one time it garnered political support to regulate the most powerful industries of the late 19th and early 20th Century – why should a debate on new ways to regulate today’s powerful industries be out of the question? 

So today’s antitrust practitioners should avoid the temptation to proclaim an “end of history” in which all antitrust policy questions have been asked and answered, and should instead, as some of us have been suggesting since at least the last election cycle, join the political debate. But now, for those of us who are generally supportive of the U.S. antitrust status quo, the question is how? 

Some have been pushing back on the supposed evidence that a change in antitrust or other governmental policies is necessary. For instance, in late 2015 the White House Council of Economic Advisers published a paper on increased concentration in many industries, which others have used as evidence of a failure of antitrust law to protect competition. Josh Wright has used several platforms to point out that the industry measurement was too broad and the concentration level too low to be useful in these discussions. He has also reminded readers that concentration and levels of competition are different concepts that are not necessarily linked. On questions surrounding inequality and stagnation of standards of living, Russ Roberts has produced a series of videos that try to explain why any such questions are difficult to answer with the easy numbers available and why, perhaps, it is not correct that “the rich got all the gains.” 

Others, like Dan Crane for instance, have advanced the debate by trying to get those commentators who are unhappy with the status quo to explain what they see as the problems and the proposed fixes. While it might be too much to ask for unanimity among a diverse group of commentators, the debate might be more productive now that some more specific complaints and solutions have begun to emerge.

Even if the problems are properly identified, we should not allow anyone to blithely assume that any – or any particular – increase in government oversight will solve them without creating different issues. The Federal Trade Commission tackled this issue in its final hearing on Competition and Consumer Protection in the 21st Century with a panel on Frank Easterbrook’s seminal “Limits of Antitrust” paper. I was fortunate enough to be on that panel and tried to summarize the ongoing importance of “Limits,” and advance the broader debate, by encouraging those who would change antitrust policy and increase supervision of the market to have appropriate “regulatory humility” (a term borrowed from former FTC Chairman Maureen Ohlhausen) about what can be accomplished.

I identified three varieties of humility present in “Limits” and pertinent here.  First, there is the humility to recognize that mastering anything as complex as an economy or any significant industry will require knowledge of innumerable items, some unseen or poorly understood, and so could be impossible.  Here, Easterbrook echoes Friedrich Hayek’s “Pretense of Knowledge” Nobel acceptance speech. 

Second, there is the humility to recognize that any judge or enforcer, like any other human being, is subject to her own biases and predilections, whether based on experience or the institutional framework within which she works.  While market participants might not be perfect, great thinkers from Madison to Kovacic have recognized that “men (or any agency leaders) are not angels” either.  As Thibault Schrepel has explained, it would be “romantic” to assume that any newly-empowered government enforcer will always act in the best interest of her constituents. 

Finally, there is the humility to recognize that humanity has been around a long time and has faced a number of issues, and that we might learn something from how our predecessors reacted to what appear to be similar issues in history. Given my personal history and current interests, I have focused on events from the automotive industry; however, the story of the unassailable power (until it wasn’t) of A&P and how it spawned the Robinson-Patman Act, ably told by Tim Muris and Jonathan Nuechterlein, might be more pertinent here. So challenging those advocating for big changes to explain why they are so confident this time around can be useful. 

But while all those avenues of argument can be effective in explaining why greater government intervention in the form of new antitrust policies might be worse than the status quo, we also must do a better job at explaining why antitrust and the market forces it protects are actually good for society.  If democratic capitalism really has “lengthened the life span, made the elimination of poverty and famine thinkable, enlarged the range of human choice” as claimed by Michael Novak in The Spirit of Democratic Capitalism, we should do more to spread that good news. 

Maybe we need to spend more time telling and retelling the “I, Pencil” or “It’s a Wonderful Loaf” stories about how well markets can and do work at coordinating the self-interested behavior of many to the benefit of even more. Then we can illustrate the limited role of antitrust in that complex effort – say, punishing any collusion among the mills or bakers in those two stories to ensure the process works as beautifully and simply as those stories display. For the first time in decades, politicians and real people, like the consumers whose welfare we are supposed to be protecting, are paying attention to our wonderful world of antitrust. We should seize the opportunity to explain what we do and why it matters and to discuss whether any improvements can be made.

The operative text of the Sherman Antitrust Act of 1890 is a scant 100 words:

Section 1:

Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony…

Section 2:

Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony…

Its short length and broad implications (“Every contract… in restraint of trade… is declared to be illegal”) didn’t give the courts much to go on in terms of textualism. As for originalism, the legislative history of the Sherman Act is mixed, and no consensus currently exists among experts. In practice, that means enforcement of the antitrust laws in the US has been a product of the evolutionary common law process (and has changed over time due to economic learning). 

Over the last fifty years, academics, judges, and practitioners have generally converged on the consumer welfare standard as the best approach for protecting market competition. Although some early supporters of aggressive enforcement (e.g., Brandeis and, more recently, Pitofsky) advocated for a more political conception of antitrust, that conception of the law has been decisively rejected by the courts as the contours of the law have evolved through judicial decisionmaking. 

In the last few years, however, a movement has reemerged to expand antitrust beyond consumer welfare to include political and social issues, ranging from broadly macroeconomic matters like rising income inequality and declining wages, to sociopolitical concerns like increasing political concentration, environmental degradation, a struggling traditional news industry, and declining localism. 

Although we at ICLE are decidedly in the consumer welfare camp, the contested “original intent” of the antitrust laws and the simple progress of evolving interpretation could conceivably support a broader, more-political interpretation. It is, at the very least, a timely and significant question whether and how political and social issues might be incorporated into antitrust law. Yet much of the discussion of politics and antitrust has been heavy on rhetoric and light on substance; it is dominated by non-expert, ideologically driven opinion. 

In this blog symposium we seek to offer a more substantive and balanced discussion of the issue. To that end, we invited a number of respected economists, legal scholars, and practitioners to offer their perspectives. 

The symposium comprises posts by Steve Cernak, Luigi Zingales and Filippo Maria Lancieri, Geoffrey A. Manne and Alec Stapp, Valentin Mircea, Ramsi Woodcock, Kristian Stout, and Cento Veljanovski.

Both Steve Cernak and Zingales and Lancieri offer big-picture perspectives. Cernak sees the current debate as “an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect.” He then urges “regulatory humility” and outlines what this means in the context of antitrust.

Zingales and Lancieri note that “simply ‘politicizing’ the current antitrust regime would be very dangerous for the economic well-being of nations.” More specifically, they observe that “If used without clear and objective standards, antitrust remedies could easily add an extra layer of uncertainty or could even outright prohibit perfectly legitimate conduct, which would depress competition, investment, and growth.” Nonetheless, they argue that nuanced changes to the application of antitrust law may be justified because, “as markets become more concentrated, incumbent firms become better at distorting the political process in their favor.”

Manne and Stapp question the existence of a causal relationship between market concentration and political power, noting that there is little empirical support for such a claim.  Moreover, they warn that politicizing antitrust will inevitably result in more politicized antitrust enforcement actions to the detriment of consumers and democracy. 

Mircea argues that antitrust enforcement in the EU is already too political and that enforcement has been too focused on “Big Tech” companies. The result has been to chill investment in technology firms in the EU while failing to address legitimate antitrust violations in other sectors. 

Woodcock argues that the excessive focus on “Big Tech” companies as antitrust villains has come in no small part from a concerted effort by “Big Ink” (i.e. media companies), who resent the loss of advertising revenue that has resulted from the emergence of online advertising platforms. Woodcock suggests that the solution to this problem is to ban advertising. (We suspect that this cure would be worse than the disease but will leave substantive criticism to another blog post.)

Stout argues that while consumers may have legitimate grievances with Big Tech companies, these grievances do not justify widening the scope of antitrust, noting that “Concerns about privacy, hate speech, and, more broadly, the integrity of the democratic process are critical issues to wrestle with. But these aren’t antitrust problems.”

Finally, Veljanovski highlights potential problems with per se rules against cartels, noting that in some cases (most notably regulation of common pool resources such as fisheries), long-run consumer welfare may be improved by permitting certain kinds of cartel. However, he notes that in the case of polluting firms, a cartel that raises prices and lowers output is not likely to be the most efficient way to reduce the harms associated with pollution. This is of relevance given the DOJ’s case against certain automobile manufacturers, which are accused of colluding with California to set emission standards that are stricter than required under federal law.

It is tempting to conclude that U.S. antitrust law is not fundamentally broken and so does not require a major fix. Indeed, if any fix is needed, it is that the consumer welfare standard should be more widely applied, both in the U.S. and internationally.

Jonathan B. Baker, Nancy L. Rose, Steven C. Salop, and Fiona Scott Morton don’t like vertical mergers:

Vertical mergers can harm competition, for example, through input foreclosure or customer foreclosure, or by the creation of two-level entry barriers.  … Competitive harms from foreclosure can occur from the merged firm exercising its increased bargaining leverage to raise rivals’ costs or reduce rivals’ access to the market. Vertical mergers also can facilitate coordination by eliminating a disruptive or “maverick” competitor at one vertical level, or through information exchange. Vertical mergers also can eliminate potential competition between the merging parties. Regulated firms can use vertical integration to evade rate regulation. These competitive harms normally occur when at least one of the markets has an oligopoly structure. They can lead to higher prices, lower output, quality reductions, and reduced investment and innovation.

Baker et al. go so far as to argue that any vertical merger in which the downstream firm is subject to price regulation should face a presumption that the merger is anticompetitive.

George Stigler’s well-known article on vertical integration identifies several ways in which vertical integration increases welfare by subverting price controls:

The most important of these other forces, I believe, is the failure of the price system (because of monopoly or public regulation) to clear markets at prices within the limits of the marginal cost of the product (to the buyer if he makes it) and its marginal-value product (to the seller if he further fabricates it). This phenomenon was strikingly illustrated by the spate of vertical mergers in the United States during and immediately after World War II, to circumvent public and private price control and allocations. A regulated price of OA was set (Fig. 2), at which an output of OM was produced. This quantity had a marginal value of OB to buyers, who were rationed on a nonprice basis. The gain to buyers  and sellers combined from a free price of NS was the shaded area, RST, and vertical integration was the simple way of obtaining this gain. This was the rationale of the integration of radio manufacturers into cabinet manufacture, of steel firms into fabricated products, etc.

Stigler was on to something:

  • In 1947, Emerson Radio acquired Plastimold, a maker of plastic radio cabinets. The president of Emerson at the time, Benjamin Abrams, stated “Plastimold is an outstanding producer of molded radio cabinets and gives Emerson an assured source of supply of one of the principal components in the production of radio sets.” [emphasis added] 
  • In the same year, the Congressional Record reported, “Admiral Corp. like other large radio manufacturers has reached out to take over a manufacturer of radio cabinets, the Chicago Cabinet Corp.” 
  • In 1948, the Federal Trade Commission cited wartime price controls and shortages as reasons for vertical mergers in the textile industry, as well as for distillers’ acquisitions of wineries.

While there may have been some public policy rationale for price controls, it’s clear the controls resulted in shortages and deadweight loss in many markets. As such, it’s likely that vertical integration to avoid the price controls improved consumer welfare (if only slightly, as in Stigler’s figure) and reduced the deadweight loss.
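
Stigler’s logic is easy to verify with a toy model. The following minimal sketch (in Python, using hypothetical linear demand and supply curves; none of these numbers are Stigler’s) computes the surplus destroyed by a binding price ceiling, the triangle RST in his figure, which an integrated firm can recapture by moving the good internally at its shadow value:

```python
# A toy version of Stigler's price-control story. Hypothetical linear
# curves: demand P = 100 - Q, supply P = 20 + Q (illustrative only).

def demand_price(q):
    return 100 - q   # buyers' marginal value at quantity q

def supply_price(q):
    return 20 + q    # sellers' marginal cost at quantity q

# Free-market equilibrium: 100 - Q = 20 + Q  =>  Q* = 40, P* = 60.
q_star = 40

# A binding ceiling below P* limits output to what sellers will supply.
p_ceiling = 45
q_ceiling = p_ceiling - 20   # quantity supplied at the ceiling: 25

# Deadweight loss: the triangle between demand and supply over the lost
# output (the shaded area RST in Stigler's figure).
dwl = 0.5 * (q_star - q_ceiling) * (
    demand_price(q_ceiling) - supply_price(q_ceiling))

print(f"Output falls from {q_star} to {q_ceiling}")
print(f"Buyers' marginal value at rationed output: {demand_price(q_ceiling)}")
print(f"Surplus an integrated firm could recapture: {dwl}")
```

Because the upstream and downstream units of a merged firm transact at an internal shadow price rather than the controlled price, (some of) that lost surplus can be recaptured, which is exactly the incentive Stigler describes.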

Rather than leading to monopolization, Stigler provides examples in which vertical integration was employed to circumvent monopolization by cartel quotas and/or price-fixing: “Almost every raw-material cartel has had trouble with customers who wish to integrate backward, in order to negate the cartel prices.”

In contrast to Stigler’s analysis, Salop and Daniel P. Culley begin from an implied assumption that where price regulation occurs, the controls are good for society. Thus, they argue that avoidance of the price controls is harmful or against the public interest:

Example: The classic example is the pre-divestiture behavior of AT&T, which allegedly used its purchases of equipment at inflated prices from its wholly-owned subsidiary, Western Electric, to artificially increase its costs and so justify higher regulated prices.

This claim is supported by the court in U.S. v. AT&T [emphasis added]:

The Operating Companies have taken these actions, it is said, because the existence of rate of return regulation removed from them the burden of such additional expense, for the extra cost could simply be absorbed into the rate base or expenses, allowing extra profits from the higher prices to flow upstream to Western rather than to its non-Bell competition.

Even so, the pass-through of higher costs seems only a minor concern to the court relative to the “three hats” worn by AT&T and its subsidiaries in the (1) setting of standards, (2) counseling of operating companies in their equipment purchases, and (3) production of equipment for sale to the operating companies [emphasis added]:

The government’s evidence has depicted defendants as sole arbiters of what equipment is suitable for use in the Bell System, a role that carries with it a power of subjective judgment that can be and has been used to advance the sale of Western Electric’s products at the expense of the general trade. First, AT&T, in conjunction with Bell Labs and Western Electric, sets the technical standards under which the telephone network operates and the compatibility specifications which equipment must meet. Second, Western Electric and Bell Labs … serve as counselors to the Operating Companies in their procurement decisions, ostensibly helping them to purchase equipment that meets network standards. Third, Western also produces equipment for sale to the Operating Companies in competition with general trade manufacturers.

The upshot of this “wearing of three hats” is, according to the government’s evidence, a rather obviously anticompetitive situation. By setting technical or compatibility standards and by either not communicating these standards to the general trade or changing them in mid-stream, AT&T has the capacity to remove, and has in fact removed, general trade products from serious consideration by the Operating Companies on “network integrity” grounds. By either refusing to evaluate general trade products for the Operating Companies or producing biased or speculative evaluations, AT&T has been able to influence the Operating Companies, which lack independent means to evaluate general trade products, to buy Western. And the in-house production and sale of Western equipment provides AT&T with a powerful incentive to exercise its “approval” power to discriminate against Western’s competitors.

It’s important to keep in mind that rate-of-return regulation was not thrust upon AT&T; it was a quid pro quo in which state and federal regulators acted to eliminate AT&T/Bell competitors in exchange for price regulation. In a floor speech to Congress in 1921, Rep. William J. Graham declared:

It is believed to be better policy to have one telephone system in a community that serves all the people, even though it may be at an advanced rate, properly regulated by State boards or commissions, than it is to have two competing telephone systems.

For purposes of Salop and Culley’s integration-to-evade-price-regulation example, it’s important to keep in mind that AT&T acquired Western Electric in 1882, or about two decades before telephone pricing regulation was contemplated and eight years before the Sherman Antitrust Act. While AT&T may have used vertical integration to take advantage of rate-of-return price regulation, it’s simply not true that AT&T acquired Western Electric to evade price controls.

Salop and Culley provide a more recent example:

Example: Potential evasion of regulation concerns were raised in the FTC’s analysis in 2008 of the Fresenius/Daiichi Sankyo exclusive sub-license for a Daiichi Sankyo pharmaceutical used in Fresenius’ dialysis clinics, which potentially could allow evasion of Medicare pricing regulations.

As with the AT&T example, this example is not about evasion of price controls. Rather it raises concerns about taking advantage of Medicare’s pricing formula. 

At the time of the deal, Medicare reimbursed dialysis clinics based on a drug manufacturer’s Average Sales Price (“ASP”) plus six percent, where ASP was calculated by averaging the prices paid by all customers, including any discounts or rebates. 

The FTC argued that, by setting an artificially high transfer price of the drug to Fresenius, the merged firm would increase the ASP, thereby increasing the Medicare reimbursement to all clinics providing the same drug (which not only would increase costs to Medicare but also would increase income to all clinics providing the drug). Although the FTC claims this would be anticompetitive, the agency does not describe in what ways competition would be harmed.
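
A back-of-the-envelope sketch shows how the reimbursement formula feeds through. The prices below are invented for illustration and are not the figures from the FTC’s analysis:

```python
# Hypothetical market: nine arm's-length clinics each pay $100 per unit,
# and Medicare reimburses every clinic at ASP + 6%. Illustrative only.

def asp(prices):
    """Average Sales Price: the average price paid across all customers."""
    return sum(prices) / len(prices)

def reimbursement(prices):
    return asp(prices) * 1.06

outside_prices = [100.0] * 9

# Arm's-length transfer: the integrated clinic also pays $100.
baseline = reimbursement(outside_prices + [100.0])

# Inflated internal transfer price: the integrated clinic "pays" $200,
# raising the ASP and thus the reimbursement paid to *every* clinic.
inflated = reimbursement(outside_prices + [200.0])

print(f"Reimbursement per unit, arm's length: ${baseline:.2f}")  # $106.00
print(f"Reimbursement per unit, inflated ASP: ${inflated:.2f}")  # $116.60
```

The integrated firm’s inflated internal payment is a wash (it pays itself), while the higher reimbursement accrues on every unit its clinics dispense. Whether that is a competition problem, rather than a flaw in Medicare’s reimbursement design, is precisely the question the FTC left unanswered.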

The FTC introduces an interesting wrinkle in noting that a few years after the deal would have been completed, “substantial changes to the Medicare program relating to dialysis services … would eliminate the regulations that give rise to the concerns created by the proposed transaction.” Specifically, payment for dialysis services would shift from fee-for-service to capitation.

This wrinkle highlights a serious problem with a presumption that any purported evasion of price controls is an antitrust violation. Namely, if the controls go away, so does the antitrust violation. 

Conversely–as Salop and Culley seem to argue with their AT&T example–a vertical merger could be retroactively declared anticompetitive if price controls are imposed after the merger is completed (even decades later and even if the price regulations were never anticipated at the time of the merger). 

It’s one thing to argue that avoiding price regulation runs counter to the public interest, but it’s another thing to argue that avoiding price regulation is anticompetitive. Indeed, as Stigler argues, if the price controls stifle competition, then avoidance of the controls may enhance competition. Placing such mergers under heightened scrutiny, such as an anticompetitive presumption, is a solution in search of a problem.

Every 5 years, Congress has to reauthorize the sunsetting provisions of the Satellite Television Extension and Localism Act (STELA). And the deadline for renewing the law is quickly approaching (Dec. 31). While sunsetting is, in the abstract, seemingly a good way to ensure rules don’t become outdated, an interlocking set of interest groups, generally speaking, only supports reauthorizing the law because they are locked in a regulatory stalemate. STELA no longer represents an optimal outcome for many if not most of the affected parties. The time has come to finally allow STELA to sunset, and to use the occasion to reform the underlying regulatory morass on which it is built.

Since the creation of STELA in 1988, much has changed in the marketplace. At the time of the 1992 Cable Act (the first year data from the FCC’s Video Competition Reports is available), cable providers served 95% of multichannel video subscribers. Now, the power of cable has waned to the extent that 2 of the top 4 multichannel video programming distributors (MVPDs) are satellite providers, without even considering the explosion in competition from online video distributors like Netflix and Amazon Prime.

Given these developments, Congress should reconsider whether STELA is necessary at all, along with the whole complex regulatory structure undergirding it, and consider the relative simplicity with which copyright and antitrust law are capable of adequately facilitating the market for broadcast content negotiations. An approach building upon that contemplated in the bipartisan Modern Television Act of 2019 by Congressman Steve Scalise (R-LA) and Congresswoman Anna Eshoo (D-CA)—which would repeal the compulsory license/retransmission consent regime for both cable and satellite—would be a step in the right direction.

A brief history of STELA

STELA, originally known as the 1988 Satellite Home Viewer Act, was justified as necessary to promote satellite competition against incumbent cable networks and to give satellite companies stronger negotiating positions against network broadcasters. In particular, the goal was to give satellite providers the ability to transmit terrestrial network broadcasts to subscribers. To do this, the regulatory structure modified the Communications Act and the Copyright Act.

With the 1988 Satellite Home Viewer Act, Congress created a compulsory license for satellite retransmissions under Section 119 of the Copyright Act. This compulsory license provision mandated, just as the Cable Act did for cable providers, that satellite providers would have the right to certain network broadcast content in exchange for a government-set price (despite the fact that local network affiliates don’t necessarily own the copyrights themselves). The retransmission consent provision requires satellite providers (and cable providers under the Cable Act) to negotiate with network broadcasters over the fee to be paid for the right to network broadcast content.

Alternatively, broadcasters can opt to impose must-carry provisions on cable and satellite in lieu of retransmission consent negotiations. These provisions require satellite and cable operators to carry many channels from network broadcasters in order to have access to their content. As ICLE President Geoffrey Manne explained to Congress previously:

The must-carry rules require that, for cable providers offering 12 or more channels in their basic tier, at least one-third of these be local broadcast retransmissions. The forced carriage of additional, less-favored local channels results in a “tax on capacity,” and at the margins causes a reduction in quality… In the end, must-carry rules effectively transfer significant programming decisions from cable providers to broadcast stations, to the detriment of consumers… Although the ability of local broadcasters to opt in to retransmission consent in lieu of must-carry permits negotiation between local broadcasters and cable providers over the price of retransmission, must-carry sets a floor on this price, ensuring that payment never flows from broadcasters to cable providers for carriage, even though for some content this is surely the efficient transaction.

The essential question about the reauthorization of STELA concerns the following provisions: 

  1. an exemption from retransmission consent requirements for satellite operators for the carriage of distant network signals to “unserved households” while maintaining the compulsory license right for those signals (modification of the compulsory license/retransmission consent regime);
  2. the prohibition on exclusive retransmission consent contracts between MVPDs and network broadcasters (per se ban on a business model); and
  3. the requirement that television broadcast stations and MVPDs negotiate in good faith (nebulous negotiating standard reviewed by FCC).

This regulatory scheme was supposed to sunset after 5 years. Instead of actually sunsetting, Congress has consistently reauthorized STELA (in 1994, 1999, 2004, 2010, and 2014).

Each time, satellite companies like DirecTV and Dish Network, as well as interest groups representing rural customers who depend heavily on satellite for television service, strongly supported renewal of the legislation. Over time, though, reauthorization has led to amendments supported by major players on each side of the negotiating table and broad support for what is widely considered “must-pass” legislation. In other words, every affected industry found something it liked about the compromise legislation.

As it stands, STELA’s sunset provision gives each side negotiating leverage during each round of reauthorization talks, and concessions are often extracted. But rather than simplifying this regulatory morass, STELA reauthorization simply extends rules that have outlived their purpose.

Current marketplace competition undermines the necessity of STELA reauthorization

The marketplace is very different in 2019 than it was when STELA’s predecessors were adopted and reauthorized. No longer is it the case that cable dominates and that satellite and other providers need a leg up just to compete. Moreover, there are now services that didn’t even exist when the STELA framework was first developed. Competition is thriving.

Wikipedia:

| Rank | Service | Subscribers | Provider | Type |
| --- | --- | --- | --- | --- |
| 1 | Xfinity | 21,986,000 | Comcast | Cable |
| 2 | DirecTV | 19,222,000 | AT&T | Satellite |
| 3 | Spectrum | 16,606,000 | Charter | Cable |
| 4 | Dish | 9,905,000 | Dish Network | Satellite |
| 5 | Verizon Fios TV | 4,451,000 | Verizon | Fiber-Optic |
| 6 | Cox Cable TV | 4,015,000 | Cox Enterprises | Cable |
| 7 | U-Verse TV | 3,704,000 | AT&T | Fiber-Optic |
| 8 | Optimum/Suddenlink | 3,307,500 | Altice USA | Cable |
| 9 | Sling TV* | 2,417,000 | Dish Network | Live Streaming |
| 10 | Hulu with Live TV | 2,000,000 | Hulu (Disney, Comcast, AT&T) | Live Streaming |
| 11 | DirecTV Now | 1,591,000 | AT&T | Live Streaming |
| 12 | YouTube TV | 1,000,000 | Google (Alphabet) | Live Streaming |
| 13 | Frontier FiOS | 838,000 | Frontier | Fiber-Optic |
| 14 | Mediacom | 776,000 | Mediacom | Cable |
| 15 | PlayStation Vue | 500,000 | Sony | Live Streaming |
| 16 | CableOne Cable TV | 326,423 | Cable One | Cable |
| 17 | FuboTV | 250,000 | FuboTV | Live Streaming |

A 2018 accounting of the largest MVPDs by subscribers shows that two of the top four are satellite providers, and that over-the-top services like Sling TV, Hulu with Live TV, and YouTube TV are gaining significantly. And this does not even consider (non-live) streaming services such as Netflix (approximately 60 million US subscribers), Hulu (about 28 million US subscribers), and Amazon Prime Video (about 40 million US users). It is not clear from these numbers that satellite needs special rules in order to compete with cable, or that the complex regulatory regime underlying STELA is necessary anymore.

On the contrary, there seems to be a lot of reason to believe that content is king, and the market for the distribution of that content is thriving. Competition among platforms is intense, not only among MVPDs like Comcast, DirecTV, Charter, and Dish Network, but from streaming services like Netflix, Amazon Prime Video, Hulu, and HBO Now. Distribution networks invest heavily in exclusive content to attract consumers. There is no reason to think that we need selective forbearance from the byzantine regulations in this space in order to promote satellite adoption when satellite companies are just as good as any at contracting for high-demand content (for instance, DirecTV with NFL Sunday Ticket). 

A better way forward: Streamlined regulation in the form of copyright and antitrust

As Geoffrey Manne said in his Congressional testimony on STELA reauthorization back in 2013: 

behind all these special outdated regulations are laws of general application that govern the rest of the economy: antitrust and copyright. These are better, more resilient rules. They are simple rules for a complex world. They will stand up far better as video technology evolves–and they don’t need to be sunsetted.

Copyright law establishes clearly defined rights, thereby permitting efficient bargaining between content owners and distributors. But under the compulsory license system, the copyright holders’ right to a performance license is fundamentally abridged. Retransmission consent normally requires fees to be paid for the content that MVPDs have available to them. But STELA exempts certain network broadcasts (“distant signals” for “unserved households”) from retransmission consent requirements. This reduces incentives to develop content subject to STELA, which at the margin harms both content creators and viewers. It also gives satellite an unfair advantage vis-a-vis cable in those cases in which it does not need to pay ever-rising retransmission consent fees. Ironically, it also reduces the incentive for satellite providers (DirecTV, at least) to work to provide local content to some rural consumers. Congress should reform the law to restore copyright holders’ full rights under the Copyright Act. It should also repeal the compulsory license and must-carry provisions that work at cross-purposes, and allow true marketplace negotiations.

The initial allocation of property rights guaranteed under copyright law would allow for MVPDs, including satellite providers, to negotiate with copyright holders for content, and thereby realize a more efficient set of content distribution outcomes than is otherwise possible. Under the compulsory license/retransmission consent regime underlying both STELA and the Cable Act, the outcomes at best approximate those that would occur through pure private ordering but in most cases lead to economically inefficient results because of the thumb on the scale in favor of the broadcasters. 

In a similar way, just as copyright law provides a superior set of bargaining conditions for content negotiation, antitrust law provides a superior mechanism for policing potentially problematic conduct between the firms involved. Under STELA, the FCC polices transactions with a “good faith” standard. In an important sense, this ambiguous regulatory discretion provides little information to prospective buyers and sellers of licenses as to what counts as “good faith” negotiations (aside from the specific practices listed).

By contrast, antitrust law, guided by the consumer welfare standard and decades of case law, is designed both to deter potential anticompetitive foreclosure and to provide a clear standard for firms engaged in the marketplace. The effect of relying on antitrust law to police competitive harms is — as the name of the standard suggests — a net increase in the welfare of consumers, the ultimate beneficiaries of a well-functioning market. 

For instance, consider a hypothetical dispute between a network broadcaster and a satellite provider. Under the FCC’s “good faith” oversight, bargaining disputes, which increasingly result in blackouts, are reviewed for certain negotiating practices deemed to be unfair, 47 CFR § 76.65(b)(1), and under a more general “totality of the circumstances” standard, 47 CFR § 76.65(b)(2). This is both over- and under-inclusive: the negotiating practices listed in (b)(1) may have procompetitive benefits in certain circumstances, while the (b)(2) totality-of-the-circumstances standard is vague and ill-defined. By comparison, antitrust claims would be adjudicated through a foreseeable process with reference to a consumer welfare standard illuminated by economic evidence and case law.

If a satellite provider alleges anticompetitive foreclosure by a refusal to license, its claims would be subject to analysis under the Sherman Act. In order to prove its case, it would need to show that the network broadcaster has power in a properly defined market and is using that market power to foreclose competition by leveraging its ownership over network content to the detriment of consumer welfare. A court would then analyze whether this refusal to deal violates antitrust law under the Trinko and Aspen Skiing standards. Economic evidence would need to be introduced to support the allegation. 

And, critically, in this process the defendants would be entitled to raise evidence in their defense: both evidence suggesting that there was no foreclosure and evidence of procompetitive justifications for decisions that otherwise might be considered foreclosure. Ultimately, a court, bound by established, nondiscretionary standards, would weigh the evidence and make a determination. It is, of course, possible that a review for “good faith” conduct could reach the correct result, but there is simply no similarly rigorous process available to consistently push it in that direction.

The above-mentioned Modern Television Act of 2019 does represent a step in the right direction, as it would repeal the compulsory license/retransmission consent regime applied to both cable and satellite operators. However, it is imperfect, as it leaves must-carry requirements in place for local content and retains the “good faith” negotiating standard to be enforced by the FCC. 

Expiration is better than the status quo even if fundamental reform is not possible

Some scholars who have written on this issue, while very much agreeing that fundamental reform is needed, nonetheless argue that STELA should be renewed if more fundamental reforms like those described above can’t be achieved. For instance, George Ford recently wrote that 

With limited days left in the legislative calendar before STELAR expires, there is insufficient time for a sensible solution to this complex issue. Senate Commerce Committee Chairman Roger Wicker (R-Miss.) has offered a “clean” STELAR reauthorization bill to maintain the status quo, which would provide Congress with some much-needed breathing room to begin tackling the gnarly issue of how broadcast signals can be both widely retransmitted and compensated. Congress and the Trump administration should welcome this opportunity.

However, even in a world without more fundamental reform, it is not clear that satellite needs distant signals in order to compete with cable. The number of “short markets”—i.e., those without access to all four local network broadcasts—implicated by the loss of distant signals is relatively small. Regardless of how badly the overall regulatory scheme needs to be updated, it makes no sense to continue to preserve STELA’s provisions that benefit satellite when they are no longer necessary on competition grounds.

Conclusion

Congress should not only let STELA sunset, but it should consider reforming the entire compulsory license/retransmission consent regime as the Modern Television Act of 2019 aims to do. In fact, reformers should look to go even further in repealing must-carry provisions and the good faith negotiating standard enforced by the FCC. Copyright and antitrust law are much better rules for this constantly evolving space than the current sector-specific rules. 

For previous work from ICLE on STELA see The Future of Video Marketplace Regulation (written testimony of ICLE President Geoffrey Manne from June 12, 2013) and Joint Comments of ICLE and TechFreedom, In the Matter of STELA Reauthorization and Video Programming Reform (March 19, 2014). 

An oft-repeated claim of conferences, media, and left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that it has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that look at the data on concentration in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing. 

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 
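
A stylized simulation, with invented market shares rather than the authors’ data, makes the mechanism concrete: when a productive firm expands into additional local markets while holding a smaller share in each, nationally measured concentration can rise even as the average local market becomes less concentrated:

```python
# Stylized illustration, not Hsieh & Rossi-Hansberg's data or code.
# Two equally sized local markets; a "superstar" expands from one into both.

def hhi(shares):
    """Herfindahl-Hirschman Index on the usual 0-10,000 scale."""
    return sum((100 * s) ** 2 for s in shares)

def national_shares(*markets):
    """Pool equally sized local markets into national market shares."""
    total = {}
    for m in markets:
        for firm, share in m.items():
            total[firm] = total.get(firm, 0) + share / len(markets)
    return total

# Before: the superstar serves only market 1.
m1 = {"superstar": 0.40, "a": 0.30, "b": 0.30}
m2 = {"c": 1 / 3, "d": 1 / 3, "e": 1 / 3}

# After: it enters market 2 as well, and is *smaller* in each market.
m1_new = {"superstar": 0.34, "a": 0.33, "b": 0.33}
m2_new = {"superstar": 0.34, "c": 0.22, "d": 0.22, "e": 0.22}

for label, a, b in [("before", m1, m2), ("after", m1_new, m2_new)]:
    avg_local = (hhi(a.values()) + hhi(b.values())) / 2
    national = hhi(national_shares(a, b).values())
    print(f"{label}: avg local HHI = {avg_local:.0f}, "
          f"national HHI = {national:.0f}")
```

In this toy example, national HHI rises (from roughly 1,700 to 2,100) while the average local HHI, the measure closer to the relevant antitrust market, falls (from about 3,400 to 3,000).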

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what actually appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
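
That “more than 100%” result is just growth accounting across two margins: the number of locations served (the extensive margin) and employment per location (the intensive margin). A quick sketch with hypothetical top-firm figures:

```python
import math

# Invented top-firm figures, for illustration only.
counties_old, emp_per_county_old = 100, 50   # earlier period
counties_new, emp_per_county_new = 400, 45   # more counties, fewer workers each

total_growth = math.log((counties_new * emp_per_county_new) /
                        (counties_old * emp_per_county_old))
extensive = math.log(counties_new / counties_old)              # locations
intensive = math.log(emp_per_county_new / emp_per_county_old)  # per location

print(f"share from more counties:         {extensive / total_growth:.0%}")  # ~108%
print(f"share from employment per county: {intensive / total_growth:.0%}")  # ~-8%
```

When the per-location margin shrinks, the location margin must account for more than 100% of total growth, which is exactly the pattern Hsieh and Rossi-Hansberg report.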

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), showing a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration. 

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may [be] important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Congress needs help understanding the fast-moving world of technology. That help is not going to come from reviving the Office of Technology Assessment (“OTA”), however. The OTA is an idea for another age, while the tweaks necessary to shore up the technology resources already available to Congress are relatively modest. 

Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the revolving-door implications that entails. 

The real problem with reviving the OTA is that it distracts Congress from considering that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st-century technology and the economy. A new OTA might help with the former problem, but may in fact only exacerbate the latter. 

The OTA is a poor fit for the modern world

The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive the OTA. 

To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technologically and scientifically based assumptions contained in proposed legislation. For example, in the 90s the OTA provided useful technical information to Congress about how encryption technologies worked as it was considering legislation such as CALEA. 

Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for performing these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process. 

A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”

The upshot?

A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges.

The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already contained most of the resources that Congress needed. The report recommended enhancing those existing resources, and the creation of a science and technology coordinator position in Congress in order to facilitate the hiring of appropriate personnel for committees, among other duties. 

The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging trends in the long term. This was an original function of OTA.

According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive[] to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.

The NAPA report, however, has its own recommendation that does not involve the OTA: allow the newly created science and technology coordinator to create annual horizon-scanning reports. 

A new OTA unnecessarily increases the surface area for regulatory capture

Apart from the likelihood that a new OTA would be a mere redundancy, it presents yet another vector for regulatory capture (or at least for endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page calling for a revival of the OTA:

This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.

Yang’s wishcasting aside, there is just no way to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multibillion-dollar firms working on cutting-edge technologies. What you will create instead is an interesting, temporary post-graduate or mid-career stopover where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making things, not writing research reports for Congress.

The same experts who are high-level enough to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable. 

The real problem to solve is a lack of modern governance

Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents. 

Assuming for the moment that a new OTA could provide some kind of horizon-scanning capability, it would necessarily fail, even on these terms. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically salient feature) of technology, the industry or product under examination has most likely already progressed far enough in its development that it is too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability will help, in a dynamic economy, truly predicting the technology that will impact society seems a bit like trying to predict the weather on a particular day a year hence.

Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to the psychological literature indicating that, in many cases involving technical subjects, more information given to legislators only makes them overconfident. That is to say, they can cite more facts but put fewer of them to good use when writing laws. 

The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or segment. The world is simply moving too fast.  

It would be far more useful for Congress to explore legislation that encourages the firms involved in highly dynamic industries to develop and enforce the voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that

[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.

Winn is thinking about what a “governance” response to privacy and crises like the Cambridge Analytica scandal should be, and posits those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on the GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of removing all of the risk-management features that the private sector is good at developing. 

Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood: 

Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.

The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was best suited, if it ever was, to a world that moved at an industrial-era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.

The engineers working at firms developing next-generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information these engineers possess to surface and become an ongoing part of the governing standards.

*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.

This is the second in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here). It draws on research from a soon-to-be published ICLE white paper.

(Left, Android 10 Website; Right, iOS 13 Website)

In a previous post, I argued that the Commission failed to adequately define the relevant market in its recently published Google Android decision. 

This improper market definition might not be so problematic if the Commission had then proceeded to undertake a detailed (and balanced) assessment of the competitive conditions that existed in the markets where Google operates (including the competitive constraints imposed by Apple). 

Unfortunately, this was not the case. The following paragraphs respond to some of the Commission’s most problematic arguments regarding the existence of barriers to entry, and the absence of competitive constraints on Google’s behavior.

The overarching theme is that the Commission failed to quantify its findings and repeatedly drew conclusions that did not follow from the facts cited. As a result, it was wrong to conclude that Google faced little competitive pressure from Apple and other rivals.

1. Significant investments and network effects ≠ barriers to entry

In its decision, the Commission notably argued that significant investments (millions of euros) are required to set up a mobile OS and app store. It also argued that the market for licensable mobile operating systems gave rise to network effects.

But contrary to the Commission’s claims, neither of these two factors is, in and of itself, sufficient to establish the existence of barriers to entry (even under EU competition law’s loose definition of the term, as opposed to Stigler’s more technical definition).

Take the argument that significant investments are required to enter the mobile OS market.

The main problem is that virtually every market requires significant investments on the part of firms that seek to enter. Not all of these costs can be seen as barriers to entry, or the concept would lose all practical relevance. 

For example, purchasing a Boeing 737 Max airplane reportedly costs at least $74 million. Does this mean that incumbents in the airline industry are necessarily shielded from competition? Of course not. 

Instead, the relevant question is whether an entrant with a superior business model could access the capital required to purchase an airplane and challenge the industry’s incumbents.

Returning to the market for mobile OSs, the Commission should thus have questioned whether as-efficient rivals could find the funds required to produce a mobile OS. If the answer was yes, then the investments highlighted by the Commission were largely immaterial. As it happens, several firms have indeed produced competing OSs, including CyanogenMod, LineageOS and Tizen.

The same is true of the Commission’s conclusion that network effects shielded Google from competitors. While network effects almost certainly play some role in the mobile OS and app store markets, it does not follow that they act as barriers to entry in competition law terms.

As Paul Belleflamme recently argued, it is a myth that network effects can never be overcome. And as I have written elsewhere, the most important question is whether users could effectively coordinate their behavior and switch towards a superior platform, if one arose (See also Dan Spulber’s excellent article on this point).

The Commission completely ignored this critical question in its discussion of network effects.

2. The failure of competitors is not proof of barriers to entry

Just as problematically, the Commission wrongly concluded that the failure of previous attempts to enter the market was proof of barriers to entry. 

This is the epitome of the Black Swan fallacy (i.e., inferring that all swans are white because you have never seen the rare, but very real, black swan).

The failure of rivals is equally consistent with any number of propositions: 

  • There were indeed barriers to entry; 
  • Google’s products were extremely good (in ways that rivals and the Commission failed to grasp); 
  • Google responded to intense competitive pressure by continuously improving its product (and rivals thus chose to stay out of the market); 
  • Previous rivals were persistently inept (to take the words of Oliver Williamson); etc. 

The Commission did not demonstrate that its own inference was the right one, nor did it even demonstrate any awareness that other explanations were at least equally plausible.

3. First mover advantage?

Much of the same can be said about the Commission’s observation that Google enjoyed a first mover advantage.

The elephant in the room is that Google was not the first mover in the smartphone market (and even less so in the mobile phone industry). The Commission attempted to sidestep this uncomfortable truth by arguing that Google was the first mover in the Android app store market. It then concluded that Google had an advantage because users were familiar with Android’s app store.

To call this reasoning “naive” would be too kind. Maybe consumers are familiar with Google’s products today, but they certainly weren’t when Google entered the market. 

Why would something that did not hinder Google (i.e. users’ lack of familiarity with its products, as opposed to those of incumbents such as Nokia or Blackberry) have the opposite effect on its future rivals? 

Moreover, even if rivals had to replicate Android’s user experience (and that of its app store) to prove successful, the Commission did not show that there was anything that prevented them from doing so — a particularly glaring omission given the open-source nature of the Android OS.

The result is that, at best, the Commission identified a correlation but not causality. Google may arguably have been the first, and users might have been more familiar with its offerings, but this still does not prove that Android flourished (and rivals failed) because of this.

4. It does not matter that users “do not take the OS into account” when they purchase a device

The Commission also concluded that alternatives to Android (notably Apple’s iOS and App Store) exercised insufficient competitive constraints on Google. Among other things, it argued that this was because users do not take the OS into account when they purchase a smartphone (so Google could allegedly degrade Android without fear of losing users to Apple).

In doing so, the Commission failed to grasp that buyers might base their purchases on a device’s OS without knowing it.

Some consumers will simply follow the advice of a friend, family member or buyer’s guide. Acutely aware of their own shortcomings, they thus rely on someone else who does take the phone’s OS into account. 

But even when they are acting independently, unsavvy consumers may still be driven by technical considerations. They might rely on a brand’s reputation for providing cutting-edge devices (which, per the Commission, is the most important driver of purchase decisions), or on a device’s “feel” when they try it in a showroom. In both cases, consumers’ choices could indirectly be influenced by a phone’s OS.

In more technical terms, a phone’s hardware and software are complementary goods. In these settings, it is extremely difficult to attribute overall improvements to just one of the two complements. For instance, a powerful OS and chipset are both equally necessary to deliver a responsive phone. The fact that consumers may misattribute a device’s performance to one of these two complements says nothing about their underlying contribution to a strong end-product (which, in turn, drives purchase decisions). Likewise, battery life is reportedly one of the most important features for users, yet few realize that a phone’s OS has a large impact on it.

Finally, if consumers were really indifferent to the phone’s operating system, then the Commission should have dropped at least part of its case against Google. The Commission’s claim that Google’s anti-fragmentation agreements harmed consumers (by reducing OS competition) has no purchase if Android is provided free of charge and consumers are indifferent to non-price parameters, such as the quality of a phone’s OS. 

5. Google’s users were not “captured”

Finally, the Commission claimed that consumers are loyal to their smartphone brand and that competition for first time buyers was insufficient to constrain Google’s behavior against its “captured” installed base.

It notably found that 82% of Android users stick with Android when they change phones (compared to 78% for Apple), and that 75% of new smartphones are sold to existing users. 

The Commission asserted, without further evidence, that these numbers proved there was little competition between Android and iOS.

But is this really so? In almost all markets, consumers likely exhibit at least some loyalty to their preferred brand. At what point does this become an obstacle to interbrand competition? The Commission offered no benchmark against which to assess its claims.

And although inter-industry comparisons of churn rates should be taken with a pinch of salt, it is worth noting that the Commission’s implied 18% churn rate for Android is nothing out of the ordinary (see, e.g., here, here, and here), including for industries that could not remotely be called anticompetitive.

To make matters worse, the Commission’s own figures suggest that a large share of sales remained contestable (roughly 39%).

Imagine that, every year, 100 devices are sold in Europe (75 to existing users and 25 to new users, according to the Commission’s figures). Imagine further that the installed base of users is split 76–24 in favor of Android. Under the figures cited by the Commission, it follows that at least 39% of these sales are contestable.

According to the Commission’s figures, there would be 57 existing Android users (76% of 75) and 18 Apple users (24% of 75), of which roughly 10 (18%) and 4 (22%), respectively, switch brands in any given year. There would also be 25 new users who, even according to the Commission, do not display brand loyalty. The result is that out of 100 purchasers, 25 show no brand loyalty and 14 switch brands. And even this completely ignores the number of consumers who consider switching but choose not to after assessing the competitive options.
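For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the contestability estimate from the figures cited above. The variable names and the round 100-device market are my own illustrative choices, not anything that appears in the Commission’s decision:

```python
# Minimal sketch of the contestability arithmetic, using the figures
# cited in the Commission's decision (rounded as in the text above).

total_sales = 100            # hypothetical yearly device sales in Europe
new_user_share = 0.25        # 25 of 100 sales go to first-time buyers
android_base_share = 0.76    # installed base split 76-24 in favor of Android
android_churn = 1 - 0.82     # 18% of Android users switch when changing phones
ios_churn = 1 - 0.78         # 22% of Apple users switch when changing phones

new_sales = total_sales * new_user_share        # 25 buyers with no brand loyalty
repeat_sales = total_sales - new_sales          # 75 repeat buyers

android_repeat = repeat_sales * android_base_share       # ~57 existing Android users
ios_repeat = repeat_sales * (1 - android_base_share)     # ~18 existing Apple users

# ~10 Android switchers plus ~4 Apple switchers
switchers = android_repeat * android_churn + ios_repeat * ios_churn

contestable = new_sales + switchers             # ~39 of every 100 sales
print(f"Contestable sales: {contestable:.0f} out of {total_sales:.0f}")  # -> 39
```

Even on the Commission’s own numbers, then, roughly two in five yearly sales are up for grabs.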

Conclusion

In short, the preceding paragraphs argue that the Commission did not meet the requisite burden of proof to establish Google’s dominance. Of course, it is one thing to show that the Commission’s reasoning was unsound (it is) and another to establish that its overall conclusion was wrong.

At the very least, I hope these paragraphs will convey a sense that the Commission loaded the dice, so to speak. Throughout the first half of its lengthy decision, it interpreted every piece of evidence against Google, drew significant inferences from benign pieces of information, and often resorted to circular reasoning.

The following post in this blog series argues that these errors also permeate the Commission’s analysis of Google’s allegedly anticompetitive behavior.

Today, I filed a regulatory comment in the FTC’s COPPA Rule Review on behalf of the International Center for Law & Economics. Building on prior work, I argue the FTC’s 2013 amendments to the COPPA Rule should be repealed. 

The amendments ignored the purpose of COPPA by focusing on protecting children from online targeted advertising rather than from online predators, as the drafters had intended. The amendment that expanded the definition of personal information to cover “persistent identifiers” standing alone is inconsistent with the statute’s text. The legislative history explicitly identifies the protection of children from online predators as a purpose of COPPA, but nothing in the statute or the legislative history states that a purpose is to protect children from online targeted advertising.

The YouTube enforcement action and the resulting compliance efforts by YouTube will make the monetization of child-friendly content very difficult. Video game creators, family vloggers, toy reviewers, children’s apps, and educational technology providers will all be implicated by the changes on YouTube’s platform. The economic consequences are easy to predict: there will likely be less zero-priced family-friendly content available.

The 2013 amendments have uncertain benefits for children’s privacy. While some may feel there is a benefit to exposing children to less targeted advertising, there is also a cost in restricting the ability of children’s content creators to monetize their work. The FTC should not presume that parents fail to balance these costs and benefits; many parents weigh them and choose to allow their kids to use YouTube and apps on devices bought for them.

The full comments are here.

Antitrust populists have a long list of complaints about competition policy, including: laws aren’t broad enough or tough enough, enforcers are lax, and judges tend to favor defendants over plaintiffs or government agencies. The populist push got a bump with the New York Times coverage of Lina Khan’s “Amazon’s Antitrust Paradox” in which she advocated breaking up Amazon and applying public utility regulation to platforms. Khan’s ideas were picked up by Sen. Elizabeth Warren, who has a plan for similar public utility regulation and promised to unwind earlier acquisitions by Amazon (Whole Foods and Zappos), Facebook (WhatsApp and Instagram), and Google (Waze, Nest, and DoubleClick).

Khan, Warren, and the other Break Up Big Tech populists don’t clearly articulate how consumers, suppliers — or anyone for that matter — would be better off with their mandated spinoffs. The Khan/Warren plan, however, requires a unique alignment of many factors: Warren must win the White House, Democrats must control both houses of Congress, and judges must substantially shift their thinking. It’s like turning a supertanker on a dime in the middle of a storm. Instead of publishing manifestos and engaging in antitrust hashtag hipsterism, maybe — just maybe — the populists can do something.

The populists seem to have three main grievances:

  • Small firms cannot enter the market or cannot thrive once they enter;
  • Suppliers, including workers, are getting squeezed; and
  • Speculation that someday firms will wake up, realize they have a monopoly, and begin charging noncompetitive prices to consumers.

Each of these grievances can be, and has been, addressed by antitrust and competition litigation, in many cases through private antitrust suits.

In the US, private actions are available for a wide range of alleged anticompetitive conduct, including coordinated conduct (e.g., price-fixing), single-firm conduct (e.g., predatory pricing), and mergers that would substantially lessen competition. 

If the antitrust populists are so confident that concentration is rising and firms are behaving anticompetitively and consumers/suppliers/workers are being harmed, then why don’t they organize an antitrust lawsuit against the worst of the worst violators? If anticompetitive activity is so obvious and so pervasive, finding compelling cases should be easy.

For example, earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. Why not put Sussman’s theory to the test by building an antitrust case around it? The discovery process would unleash a treasure trove of cost data and probably more than a few “hot docs.”

Khan argues:

While predatory pricing technically remains illegal, it is extremely difficult to win predatory pricing claims because courts now require proof that the alleged predator would be able to raise prices and recoup its losses. 

However, in her criticism of the court in the Apple e-books litigation, she lays out a clear rationale for courts to revise their thinking on predatory pricing [emphasis added]:

Judge Cote, who presided over the district court trial, refrained from affirming the government’s conclusion. Still, the government’s argument illustrates the dominant framework that courts and enforcers use to analyze predation—and how it falls short. Specifically, the government erred by analyzing the profitability of Amazon’s e-book business in the aggregate and by characterizing the conduct as “loss leading” rather than potentially predatory pricing. These missteps suggest a failure to appreciate two critical aspects of Amazon’s practices: (1) how steep discounting by a firm on a platform-based product creates a higher risk that the firm will generate monopoly power than discounting on non-platform goods and (2) the multiple ways Amazon could recoup losses in ways other than raising the price of the same e-books that it discounted.

Why not put Khan’s cross-subsidy theory to the test by building an antitrust case around it? Surely there’d be a document explaining how the firm expects to recoup its losses. Or, maybe not. Maybe by the firm’s accounting, it’s not losing money on the discounted products. Without evidence, it’s just speculation.

In fairness, one can argue that recent court decisions have made pursuing private antitrust litigation more difficult. For example, the Supreme Court’s decision in Twombly requires an antitrust plaintiff to show more than mere speculation based on circumstantial evidence in order to move forward to discovery. Decisions in matters such as Ashcroft v. Iqbal have made it more difficult for plaintiffs to maintain antitrust claims. Wal-Mart v. Dukes and Comcast Corp. v. Behrend subject antitrust class actions to more rigorous analysis. In Ohio v. Amex, the court ruled antitrust plaintiffs can’t meet the burden of proof by showing only some effect on some part of a two-sided market.

At the same time, Jeld-Wen indicates that third-party plaintiffs can be awarded damages and obtain divestitures, even after mergers clear. In Jeld-Wen, a competitor filed suit to challenge the consummated Jeld-Wen/Craftmaster merger four years after the DOJ approved the merger without conditions. The challenge was lengthy but successful, and a district court ordered damages and the divestiture of one of the combined firm’s manufacturing facilities six years after the merger was closed.

Despite the possible challenges of pursuing a private antitrust suit, Daniel Crane’s review of US federal court workload statistics concludes that the incidence of private antitrust enforcement in the United States has been relatively stable since the mid-1980s — in the range of 600 to 900 new private antitrust filings a year. He also finds that resolution by trial has been relatively stable, at less than 1 percent of cases a year. Thus, it’s not clear that recent decisions have erected insurmountable barriers to antitrust plaintiffs.

In the US, third parties may fund private antitrust litigation and plaintiffs’ attorneys are allowed to work under a contingency fee arrangement, subject to court approval. A compelling case could be funded by deep-pocketed supporters of the populists’ agenda, big tech haters, or even investors. Perhaps the most well-known example is Peter Thiel’s bankrolling of Hulk Hogan’s takedown of Gawker. Before that, the savings and loan crisis led to a number of forced mergers which were later challenged in court, with the costs partially funded by the issuance of litigation tracking warrants.

The antitrust populist ranks are chock-a-block with economists, policy wonks, and go-getter attorneys. If they are so confident in their claims of rising concentration, bad behavior, and harm to consumers, suppliers, and workers, then they should put those ideas to the test with some slam-dunk litigation. The fact that they haven’t suggests they may not have a case.