Archives For politics

[TOTM: The following is the seventh in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Cento Veljanovski, Managing Partner, Case Associates and IEA Fellow in Law and Economics, Institute of Economic Affairs.

The concept of a “good” or “efficient” cartel is generally regarded by competition authorities as an oxymoron. A cartel is seen as the worst type of antitrust violation and one that warrants zero tolerance. Agreements between competitors to raise prices and share the market are assumed unambiguously to reduce economic welfare. As such, even if these agreements are ineffective, the law should come down hard on attempts to rig prices. In this post, I argue that this view goes too far and that even ‘hard core’ cartels that lower output and increase prices can be efficient and pro-competitive. I discuss three examples in which hard core cartels may be efficient.

Resuscitating the efficient cartel

Basic economic theory tells us that coordination can be efficient in many instances, and this is accepted in law – for example, joint ventures and agreements on industry standards. But where competitors agree on prices and the volume of sales – so-called “hard core” cartels – there is intolerance.

Nonetheless, there is a recognition that cartel-like arrangements can promote efficiency. For example, Article 101(3) TFEU exempts anticompetitive agreements or practices whose economic and/or technical benefits outweigh their restrictions on competition, provided a fair share of those benefits is passed on to consumers. However, this so-called ‘efficiency defence’ is highly unlikely to be accepted for hard core cartels, nor are wider economic or non-economic considerations likely to carry weight. But as will be shown, there are classes of hard core cartels and restrictive agreements which, while they reduce output, raise prices and foreclose entry, are nonetheless efficient and not anticompetitive.

Destructive competition and the empty core

The claim that cartels have beneficial effects precedes US antitrust law. Trusts were justified as necessary to prevent ‘ruinous’ or ‘destructive’ competition in industries with high fixed costs subject to frequent ‘price wars’. This was the unsuccessful defence in Trans-Missouri (166 U.S. 290 (1897)), where 18 US railroad companies formed a trust to set their rates, arguing that absent their agreement there would be ruinous competition, eventual monopoly and even higher prices. Since then, industries such as steel, cement, paper, shipping and airlines have at various times claimed that competition was unsustainable and wasteful.

These seem patently self-serving claims. But the idea that some industries are inherently unstable, with no stable competitive equilibrium, has long been appreciated by economists. Nearly a century after Trans-Missouri, economist Lester Telser (1996) refreshed the idea that cooperative arrangements among firms in some industries were not attempts to impose monopoly prices but a response to their inherent structural inefficiency. This was based on the concept of an ‘empty core’. While Telser’s article uses some hideously dense mathematical game theory, the idea is simple to state. A market is said to have a ‘core’ if there is a set of transactions between buyers and sellers such that no other transactions could make some of the buyers or sellers better off. Such a core will survive in a competitive market if all firms can make zero economic profits. In a market with an empty core, no coalition of firms will be able to earn zero profits; some firms will be able to earn a surplus and thereby attract entry, but because the core is empty the new entry will inflict losses on all firms. When firms exit due to their losses, the remaining firms again earn economic profits and attract entry. There is no stable long-run competitive equilibrium in such industries.
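To make the mechanism concrete, here is a stylised numerical illustration of an empty core (my own invented numbers, not taken from Telser). Suppose three buyers each want one unit and will pay at most 10, and each of two firms has capacity for two units, zero marginal cost, and an avoidable fixed cost of 14:

\begin{align*}
\text{Both firms operate:} &\quad \text{capacity } 4 > \text{demand } 3, \text{ so one firm sells at most one unit and earns at most } 10 - 14 < 0. \\
\text{One firm operates, } p < 10: &\quad \text{the excluded third buyer can bid the price up to displace a served buyer.} \\
\text{One firm operates, } p = 10: &\quad \text{the idle firm and any two buyers all gain by dealing at some } 7 < p' < 10 \\
&\quad \bigl(2p' - 14 > 0 \text{ for the firm}, \; 10 - p' > 0 \text{ for each buyer}\bigr).
\end{align*}

Every candidate set of contracts is blocked by some coalition (and with no firm operating, any firm and two buyers can deal profitably), so no stable outcome – no core – exists.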

The literature suggests that an industry is likely to have an empty core where: (1) firms have fixed production capacities; (2) those capacities are large relative to demand; (3) there are scale economies in production; (4) incremental costs are low; (5) demand is uncertain and fluctuates markedly; and (6) output cannot be stored cheaply. Industries which have frequently been cartelised share many of these features (see above).

In the 1980s several academic studies applied empty core theory to antitrust. Bittlingmayer (1982) claimed that the US iron pipe industry had an empty core, that the famous Addyston Pipe case was thus wrongly decided, and that the decision was responsible for the subsequent mergers in the industry.

Sjostrom (1989) and others have argued that liner conferences were not attempts to overcharge shippers but to counteract an empty core that led to volatile market shares and freight rates due to excess capacity and fixed schedules. This type of analysis formed the basis for their exemption from competition laws. Since the nineteenth century, liner conferences had been permitted to fix prices and regulate capacity on routes between Europe and both North America and the Far East. The EU block exemption (Council Regulation 4056/86) allowed them to set common freight rates, to take joint decisions on the limitation of supply and to coordinate timetables. However, the justifications for these exemptions have worn thin. From October 2008 the EU exemption was removed, based on scepticism that liner shipping is an empty core industry, particularly because, with the rise of modern leasing and chartering techniques to manage capacity, the addition of shipping capacity is no longer a lumpy process.

While the empty core argument may have merit, it is highly unlikely to persuade European competition authorities, and the experience with legal cartels that have been allowed in order to rationalise production and costs has not been good.

Where there are environmental problems

Cartels in industries with significant environmental problems – which produce economic ‘bads’ rather than goods – can have beneficial effects. Restricting the output of an economic bad is good. Take an extreme example. When most people hear the word cartel, they think of a Colombian drugs cartel. A drugs cartel reduces drug trafficking to keep its profits high. Competition in supply would lead to an over-supply of cheaper drugs, so a cartel charging higher prices for lower output is superior to the competitive outcome.

The same logic also applies to industries in which bads, such as pollution, are a by-product of otherwise legitimate and productive activities. An industry which generates pollution does not take the full costs of its activities into account, and hence output is over-expanded and prices are too low. Economic efficiency requires a reduction in the harmful activities and the associated output. It also requires the product’s price to increase to incorporate the pollution costs. A cartel that raises prices can move such an industry’s output and harm closer to the efficient level, although this would not be in response to higher pollution-inclusive costs – which makes it a second-best solution.
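In stylised terms (my notation, not the author’s): let MC be the industry’s private marginal cost and e the external pollution cost per unit of output. Then

\[
\underbrace{P = MC}_{\text{competition: output } q_c} \qquad
\underbrace{P = MC + e}_{\text{efficient: output } q^\ast < q_c} \qquad
\underbrace{P = MC + m}_{\text{cartel markup } m:\ \text{output } q_m < q_c}
\]

The cartel’s markup m pushes output from q_c back towards q^\ast, but nothing guarantees that m approximates the pollution cost e – which is why the cartel is only a second-best substitute for pricing the externality directly.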

There has been a fleeting recognition that competition in the presence of external costs is not efficient and that restricting output does not necessarily distort competition. In 1999, the European Commission almost uniquely exempted a cartel-like restrictive agreement among producers and importers of washing machines under Article 101(3) TFEU (Case IV.F.1/36.718 – CECED). The agreement not to produce or import the least energy-efficient washing machines – which represented 10-11% of then-EC sales – restricted competition and increased prices, since the most polluting machines were also the least expensive ones, yet it was exempted on the strength of its energy-saving and environmental benefits.

The Commission has since rowed back from its broad application of Article 101(3) TFEU in CECED. Its 2001 Guidelines on Horizontal Agreements devoted a chapter to environmental agreements, but that chapter was dropped from the revised 2011 Guidelines, which instead treat CECED as a standardisation agreement (para 329).

Common property industries

A more clear-cut case of an efficient cartel is where firms compete over a common property resource for which property rights are ill-defined or absent, as is often the case for fisheries. In these industries, competition leads to excessive entry, over-exploitation, and the dissipation of the economic returns (rents). A cartel – a ‘club’ of fishermen – having sole control of the fishing grounds would unambiguously increase efficiency even though it increased prices, reduced production and foreclosed entry.
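The underlying logic is the standard open-access result (a textbook sketch, not taken from the post): let E be total fishing effort, R(E) the value of the catch (concave in E), and c the cost per unit of effort. Then

\[
\text{open access: entry until } \frac{R(E_{oa})}{E_{oa}} = c \quad (\text{rents fully dissipated}), \qquad
\text{sole owner or cartel: } R'(E^\ast) = c, \quad E^\ast < E_{oa}.
\]

Because average returns exceed marginal returns for a concave catch function, open access draws in too much effort; a single controller restricts effort (and hence catch), raising price but preserving the resource rent.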

The benefits of such cartels have not been accepted in law by competition authorities. The Dutch competition authority’s (NMa Case No. 2269/330) and the European Commission’s (Case COMP/39633 Shrimps) shrimp decisions in 2013-14 imposed fines on Dutch shrimp fleet and wholesalers’ organisations for agreeing quotas and prices. One study showed that the Dutch agreement reduced the fishing catch by at least 12-16% during the cartel period and increased wholesale prices. However, this output reduction and increase in prices was not necessarily welfare-reducing if the competitive outcome would have resulted in over-fishing. This and subsequent cases have resulted in a vigorous policy debate in the Netherlands over the use of Article 101(3) TFEU to take the wider benefits into account (ACM Position Paper 2014).

Sustainability and Article 101(3)

There is a growing debate over the conflict between antitrust and other policy objectives, such as sustainability and industrial policy. One strand of this debate focuses on expanding the efficiency defence under Article 101(3) TFEU. As currently framed, it has not allowed reductions in pollution costs or in resource over-exploitation to justify exempting restrictive agreements that distort competition, even though those agreements may be efficient. In the pollution case the benefits are generalised ones that accrue to third parties rather than to consumers, and they are difficult to quantify. In the fisheries case, the short-term welfare of existing consumers is unambiguously reduced as they pay higher prices for less fish; the benefits are long term (a more sustainable fish stock which can continue to be consumed) and may not be realised at all by current consumers but rather will accrue to future generations.

To accommodate sustainability concerns and other efficiency factors, Article 101(3) TFEU would have to be expanded into a public interest defence based on a wider total welfare objective – not just consumers’ welfare, as it is now – which took into account the long-run interests of consumers and of third parties potentially affected by a restrictive agreement. This would mark a radical and open-ended expansion of the objectives of European antitrust and the grounds for exemption. It would put sustainability on the same footing as the clamour for industrial policy to be taken into account by antitrust authorities, which has so far been firmly resisted. This is not to belittle the economic and environmental grounds for a public interest defence; it is just to recognise that it is difficult to see how such a defence could be coherently incorporated into Article 101(3) TFEU while at the same time preserving the integrity and focus of European antitrust.

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Ramsi Woodcock, Assistant Professor, College of Law, and Assistant Professor, Department of Management at Gatton College of Business & Economics, University of Kentucky.

When in 2011 Paul Krugman attacked the press for bending over backwards to give equal billing to conservative experts on social security, even though the conservatives were plainly wrong, I celebrated. Social security isn’t the biggest part of the government’s budget, and calls to privatize it in order to save the country from bankruptcy were blatant fear mongering. Why should the press report those calls with a neutrality that could mislead readers into thinking the position reasonable?

Journalists’ ethic of balanced reporting looked, at the time, like gross negligence at best, and deceit at worst. But lost in the pathos of the moment was the rationale behind that ethic, which is not so much to ensure that the truth gets into print as to prevent the press from making policy. For if journalists do not practice balance, then they ultimately decide the angle to take.

And journalists, like the rest of us, will choose their own.

The dark underbelly of the engaged journalism unleashed by progressives like Krugman has nowhere been more starkly exposed than in the unfolding assault of journalists, operating as a special interest, on Google, Facebook, and Amazon, three companies that writers believe have decimated their earnings over the past decade.

In story after story, journalists have manufactured an antitrust movement aimed at breaking up these companies, even though virtually no expert in antitrust law or economics, on either the right or the left, can find an antitrust case against them, and virtually no expert would place any of these three companies at the top of the genuinely long list of monopolies in America that are due for an antitrust reckoning.

Bitter ledes

Headlines alone tell the story. We have: “What Happens After Amazon’s Domination Is Complete? Its Bookstore Offers Clues”; “Be Afraid, Jeff Bezos, Be Very Afraid”; “How Should Big Tech Be Reined In? Here Are 4 Prominent Ideas”;  “The Case Against Google”; and “Powerful Coalition Pushes Back on Anti-Tech Fervor.”

My favorite is: “It’s Time to Break Up Facebook.” Unlike the others, it belongs to an Op-Ed, so a bias is appropriate. Not appropriate, however, is the howler, contained in the article’s body, that “a host of legal scholars like Lina Khan, Barry Lynn and Ganesh Sitaraman are plotting a way forward” toward breakup. Lina Khan has never held an academic appointment. Barry Lynn does not even have a law degree. And Ganesh Sitaraman’s academic specialty is constitutional law, not antitrust. But editors let it through anyway.

As this unguarded moment shows, the press has treated these and other members of a small network of activists and legal scholars who operate on antitrust’s fringes as representative of scholarly sentiment regarding antitrust action. The only real antitrust scholar among them is Tim Wu, who, when you look closely at his public statements, has actually gone no further than to call for Facebook to unwind its acquisitions of Instagram and WhatsApp.

In more sober moments, the press has acknowledged that the law does not support antitrust attacks on the tech giants. But instead of helping readers to understand why, the press instead presents this as a failure of the law. “To Take Down Big Tech,” read one headline in The New York Times, “They First Need to Reinvent the Law.” I have documented further instances of unbalanced reporting here.

This is not to say that we don’t need more antitrust in America. Herbert Hovenkamp, whom the New York Times once recognized as “the dean of American antitrust law,” but has since downgraded to “an antitrust expert” after he came out against the breakup movement, has advocated stronger monopsony enforcement across labor markets. Einer Elhauge at Harvard is pushing to prevent index funds from inadvertently generating oligopolies in markets ranging from airlines to pharmacies. NYU economist Thomas Philippon has called for deconcentration of banking. Yale’s Fiona Scott Morton has pointed to rising markups across the economy as a sign of lax antitrust enforcement. Jonathan Baker has argued with great sophistication for more antitrust enforcement in general.

But no serious antitrust scholar has traced America’s concentration problem to the tech giants.

Advertising monopolies old and new

So why does the press have an axe to grind with the tech giants? The answer lies in the creative destruction wrought by Amazon on the publishing industry, and Google and Facebook upon the newspaper industry.

Newspapers were probably the most durable monopolies of the 20th century, so lucrative that Warren Buffett famously picked them as his preferred example of businesses with “moats” around them. But that wasn’t because readers were willing to pay top dollar for newspapers’ reporting. Instead, that was because, incongruously for organizations dedicated to exposing propaganda of all forms on their front pages, newspapers have long striven to fill every other available inch of newsprint with that particular kind of corporate propaganda known as commercial advertising.

It was a lucrative arrangement. Newspapers exhibit powerful network effects, meaning that the more people read a paper the more advertisers want to advertise in it. As a result, many American cities came to have but one major newspaper monopolizing the local advertising market.

One such local paper, the Lorain Journal of Lorain, Ohio, sparked a case that has since become part of the standard antitrust curriculum in law schools. The paper tried to leverage its monopoly to destroy a local radio station that was competing for its advertising business. The Supreme Court affirmed liability for monopolization.

In the event, neither radio nor television ultimately undermined newspapers’ advertising monopolies. But the internet is different. Radio, television, and newspaper advertising can coexist, because they can target only groups, and often not the same ones, minimizing competition between them. The internet, by contrast, reaches individuals, making it strictly superior to group-based advertising. The internet also lets at least some firms target virtually all individuals in the country, allowing those firms to compete with all comers.

You might think that newspapers, which quickly became an important web destination, were perfectly positioned to exploit the new functionality. But being a destination turned out to be a problem. Consumers reveal far more valuable information about themselves to web gateways, like search and social media, than to particular destinations, like newspaper websites. But consumer data is the key to targeted advertising.

That gave Google and Facebook a competitive advantage, and because these companies also enjoy network effects—search and social media get better the more people use them—they inherited the newspapers’ old advertising monopolies.

That was a catastrophe for journalists, whose earnings and employment prospects plummeted. It was also a catastrophe for the public, because newspapers have a tradition of plowing their monopoly profits into investigative journalism that protects democracy, whereas Google and Facebook have instead invested their profits in new technologies like self-driving cars and cryptocurrencies.

The catastrophe of countervailing power

Amazon has found itself in journalists’ crosshairs for disrupting another industry that feeds writers: publishing. Book distribution was Amazon’s first big market, and Amazon won it, driving most brick-and-mortar booksellers to bankruptcy. Publishing, long dominated by a few big houses that used their power to extract high wholesale prices from booksellers – some of the profit from which they passed on to authors as royalties – now faced a distribution industry that was even more concentrated and powerful than publishing itself. The Department of Justice stamped out a desperate attempt by publishers to cartelize in response, and publishers’ profits, and authors’ royalties, have continued to fall.

Journalists, of course, are writers, and the disruption of publishing, taken together with the disruption of news, has left journalists with the impression that they have nowhere to turn to escape the new economy.

The abuse of antitrust

Except antitrust.

Unschooled in the fine points of antitrust policy, journalists find it obvious that the Armageddon in newspapers and publishing is a problem of monopoly and that antitrust enforcers should do something about it.

Only it isn’t and they shouldn’t. The courts have gone to great lengths over the past 130 years to distinguish between doing harm to competition, which is prohibited by the antitrust laws, and doing harm to competitors, which is not.

Disrupting markets by introducing new technologies that make products better is no antitrust violation, even if doing so does drive legacy firms into bankruptcy, and throws their employees out of work and into the streets. Because disruption is really the only thing capitalism has going for it. Disruption is the mechanism by which market economies generate technological advances and improve living standards in the long run. The antitrust laws are not there to preserve old monopolies and oligopolies such as those long enjoyed by newspapers and publishers.

In fact, by tearing down barriers to market entry, the antitrust laws strive to do the opposite: to speed the destruction and replacement of legacy monopolies with new and more innovative ones.

That’s why the entire antitrust establishment has stayed on the sidelines regarding the tech fight. It’s hard to think of three companies that have more obviously risen to prominence over the past generation by disrupting markets using superior technologies than Amazon, Google, and Facebook. It may be possible to find an anticompetitive practice here or there—I certainly have—but no serious antitrust scholar thinks the heart of these firms’ continued dominance lies other than in their technical savvy. The nuclear option of breaking up these firms just makes no sense.

Indeed, the disruption inflicted by these firms on newspapers and publishing is a measure of the extent to which these firms have improved book distribution and advertising, just as the vast disruption created by the industrial revolution was a symptom of the extraordinary technological advances of that period. Few people, and not even Karl Marx, thought that the solution to those disruptions lay with Ned Ludd. The solution to the disruption wrought by Google, Amazon, and Facebook today similarly does not lie in using the antitrust laws to smash the machines.

Governments eventually learned to address the disruption created by the original industrial revolution not by breaking up the big firms that brought that revolution about, but by using tax and transfer, and rate regulation, to ensure that the winners share their gains with the losers. However the press’s campaign turns out, rate regulation, not antitrust, is ultimately the approach that government will take to Amazon, Google, and Facebook if these companies continue to grow in power. Because we don’t have to decide between social justice and technological advance. We can have both. And voters will demand it.

The anti-progress wing of the progressive movement

Alas, smashing the machines is precisely what journalists and their supporters are demanding in calling for the breakup of Amazon, Google, and Facebook. Zephyr Teachout, for example, recently told an audience at Columbia Law School that she would ban targeted advertising except for newspapers. That would restore newspapers’ old advertising monopolies, but also make targeted advertising less effective, for the same reason that Google and Facebook are the preferred choice of advertisers today. (Of course, making advertising more effective might not be a good thing. More on this below.)

This contempt for technological advance has been coupled with a broader anti-intellectualism, best captured by an extraordinary remark made by Barry Lynn, director of the pro-breakup Open Markets Institute and sometime advocate for the Authors Guild. The Times quotes him as saying that because the antitrust laws once contained a presumption against mergers producing market shares in excess of 25%, all policymakers have to do to get antitrust right is “be able to count to four. We don’t need economists to help us count to four.”

But size really is not a good measure of monopoly power. Ask Nokia, which controlled more than half the market for cell phones in 2007, on the eve of Apple’s introduction of the iPhone, but saw its share fall almost to zero by 2012. Or Walmart, the nation’s largest retailer and a monopolist in many smaller retail markets, which nevertheless saw its stock fall after Amazon announced one-day shipping.

Journalists themselves acknowledge that size does not always translate into power when they wring their hands about the Amazon-driven financial troubles of large retailers like Macy’s. Determining whether a market lacks competition really does require more than counting the number of big firms in the market.

I keep waiting for a devastating critique of arguments that Amazon operates in highly competitive markets to emerge from the big tech breakup movement. But that’s impossible for a movement that rejects economics as a corporate plot. Indeed, even an economist as pro-antitrust as Thomas Philippon, who advocates a return to antitrust’s mid-20th century golden age of massive breakups of firms like Alcoa and AT&T, affirms in a new book that American retail is actually a bright spot in an otherwise concentrated economy.

But you won’t find journalists highlighting that. The headline of a Times column promoting Philippon’s book? “Big Business Is Overcharging You $5,000 a Year.” I tend to agree. But given all the anti-tech fervor in the press, Philippon’s chapter on why the tech giants are probably not an antitrust problem ought to get a mention somewhere in the column. It doesn’t.

John Maynard Keynes famously observed that “though no one will believe it—economics is a technical and difficult subject.” So too antitrust. A failure to appreciate the field’s technical difficulty is manifest also in Democratic presidential candidate Elizabeth Warren’s antitrust proposals, which were heavily influenced by breakup advocates.

Warren has argued that no large firm should be able to compete on its own platforms, not seeming to realize that doing business means competing on your own platforms. To show up to work in the morning in your own office space is to compete on a platform, your office, from which you exclude competitors. The rule that large firms (defined by Warren as those with more than $25 billion in revenues) cannot compete on their own platforms would just make doing large amounts of business illegal, a result that Warren no doubt does not desire.

The power of the press

The press’s campaign against Amazon, Google, and Facebook is working. Because while they may not be as well financed as Amazon, Google, or Facebook, writers can offer their friends something more valuable than money: publicity.

That appears to have induced a slew of politicians, including both Senator Warren on the left and Senator Josh Hawley on the right, to pander to breakup advocates. The House antitrust investigation into the tech giants, led by a congressman who is simultaneously championing legislation advocated by the News Media Alliance, a newspaper trade group, to give newspapers an exemption from the antitrust laws, may also have similar roots. So too the investigations announced by dozens of elected state attorneys general.

The investigations recently opened by the FTC and Department of Justice may signal no more than a desire not to look idle while so many others act. Which is why the press has the power to turn fiction into reality. Moreover, under the current Administration, the Department of Justice has already undertaken two suspiciously partisan antitrust investigations, and President Trump has made clear his hatred for the liberal bastions that are Amazon, Google and Facebook. The fact that the press has made antitrust action against the tech giants a progressive cause provides convenient cover for the President to take down some enemies.

The future of the news

Rate regulation of Amazon, Google, or Facebook is the likely long-term resolution of concerns about these firms’ power. But that won’t bring back newspapers, which henceforth will always play the loom to Google and Facebook’s textile mills, at least in the advertising market.

Journalists and their defenders, like Teachout, have been pushing to restore newspapers’ old monopolies by government fiat. No doubt that would make existing newspapers, and their staffs, very happy. But what is good for Big News is not necessarily good for journalism in the long run.

The silver lining to the disruption of newspapers’ old advertising monopolies is that it has created an opportunity for newspapers to wean themselves off a funding source that has always made little sense for organizations dedicated to helping Americans make informed, independent decisions, free of the manipulation of others.

For advertising has always had a manipulative function, alongside its function of disseminating product information to consumers. And, as I have argued elsewhere, now that the vast amounts of product information available for free on the internet have made advertising obsolete as a source of product information, manipulation is now advertising’s only real remaining function.

Manipulation causes consumers to buy products they don’t really want, giving firms that advertise a competitive advantage that they don’t deserve. That makes for an antitrust problem, this time with real consequences not just for competitors, but also for technological advance, as manipulative advertising drives dollars away from superior products toward advertised products, and away from investment in innovation and toward investment in consumer seduction.

The solution is to ban all advertising, targeted or not, rather than to give newspapers an advertising monopoly. And to give journalism the state subsidies that, like all public goods, from defense to highways, are journalism’s genuine due. The BBC provides a model of how that can be done without fear of government influence.

Indeed, Teachout’s proposed newspaper advertising monopoly is itself just a government subsidy, but a subsidy extracted through an advertising medium that harms consumers. Direct government subsidization achieves the same result, without the collateral consumer harm.

The press’s brazen advocacy of antitrust action against the tech giants – without making clear how much the press itself has to gain from that action, and despite the utter absence of any expert support for this approach – represents an abdication by the press of its responsibility to create an informed citizenry that is every bit as profound as the press’s lapses on social security a decade ago.

I’m glad we still have social security. But I’m also starting to miss balanced journalism.

1/3/2020: Editor’s note – this post was edited for clarification and minor copy edits.

[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Valentin Mircea, a Senior Partner at Mircea and Partners Law Firm, Bucharest, Romania.

The enforcement of competition rules in the European Union is at historic heights. Competition enforcers at the European Commission seem to think that they have reached a point of perfect equilibrium, or perfection in enforcement. “Everything we do is right,” they seem to say, because for decades no significant competition decision by the Commission has been annulled on substance. Meanwhile, the objectives of EU competition law multiply continuously, as DG Competition assumes more and more public policy objectives. Indeed, so wide is DG Competition’s remit that it has become a kind of government in itself, charged with many areas and confronted with a host of problems looking for a cure.

The consumer welfare standard is merely affirmed, and rarely pursued, in the enforcement of the EU competition rules, where even the abuse of dominance tended to be treated as a per se infringement, at least until the European Court of Justice had its say in Intel. It helps that this standard has always been of secondary importance in the European Union, where the objective of market integration has prevailed over time.

Now other issues are catching the eye of the European Commission, and the easiest way to handle things such as the increasing power of the technology companies has been to reach for the EU competition enforcement toolkit. A technology giant such as Google has already been hit three times with significant fines; but beyond the transient glory of these decisions, nothing significant happened in the market, to other companies or to consumers. Or did it? I’m not sure, and nobody seems to check or even care. But the impetus to investigate and fine the technology companies is unshaken — and is likely to remain so at least until the European Court of Justice has its say on a new roster of cases, which will not happen very soon.

The EU competition rules look both over- and under-enforced. This seeming paradox is explained by the formalistic approach of the European Commission and its willingness to serve political purposes, often the result of lobbying from various industries. In the European Union, competition enforcement increasingly resembles a Swiss Army knife: good for quick fixes of various problems, while not entirely solving any of them.

The pursuit of political goals is not necessarily bad in itself; it seems obvious that competition enforcers should listen to the worries of the societies in which they live. Once objectives such as welfare seem to have been attained, it is thus not entirely surprising that enforcement should move towards fixing other societal problems. Take the case of the antitrust laws in the United States, the enactment of which was driven not by an overwhelming concern for consumer welfare or economic efficiency but by powerful lobbies that convinced Congress to act as a referee for their long-lasting disputes with other industries. In spite of this not-so-glorious origin, the resultant antitrust rules have generated many benefits throughout the world and are an essential part of the efforts to keep markets competitive and ensure a level playing field. So why worry that the European Commission – and, more recently, even certain national competition authorities (such as Germany’s) – have developed a tendency to use powerful competition rules to impose order in other areas, where public opinion, whether or not it is aware of the real causes of concern, demands it?

But in fact, what is happening today is bad and is setting precedents never seen before.  The speed at which new fronts are being opened, where the enforcement of the EU competition rules is an essential part of the weaponry, gives rise to two main areas of concern.

First, EU competition enforcers are generally ill-equipped to address sensitive technical issues that even leading experts in the field do not properly understand, such as the use of Big Data (itself a vague concept, open to various interpretations). While creating a different set of rules and a new toolkit for the digital economy does not seem to be warranted (debates are still raging on this subject), a dose of humility about the level of knowledge required for a proper understanding of these interactions, and for proper enforcement, would be most welcome. Venturing into territories where conventional economics does not apply to its full extent – such as markets in which there is no price, an essential element of competition – requires a prudent and diligent enforcer to hold back, advance cautiously, and act only where deemed necessary, in an appropriate and proportionate way. Doing so is more likely to have an observably beneficial impact than is the illusory glory of simply confronting the tech giants.

Second, given the limited resources of the European Commission and of the national competition authorities in the Member States, exaggerated attention to cases in the technology and digital economy sectors will result in less enforcement in the traditional economy, where cartels and other harmful behaviors still occur, often with more visible negative effects on consumers and the economy. It is no longer fashionable to tackle such cases, as they do not draw the same attention from the media and their outcomes are not likely to bring the same fame to the EU competition enforcers.

More recently, in an interesting move, the new European Commission unified the competition and digital economy portfolios under the astute supervision of Commissioner Margrethe Vestager. Beyond the anomaly of putting ex-ante and ex-post powers together, the move signals an even greater propensity to use competition enforcement tools to investigate, and try to rein in, the power of the behemoths of the digital economy. The change is a powerful political message that EU competition enforcement will be even more prone to cases and decisions motivated by the pursuit of various public policy goals.

I am not saying that the approach taken by the EU competition enforcers has no chance of generating benefits for European consumers. But I am worried that moving ahead with the same determination, and with the same limited expertise among case handlers as has so far been demonstrated, is unlikely to deliver such a beneficial outcome. Moreover, contrary to the stated intention of the policy, it is likely to further chill the prospects of EU technology ventures.

Last but not least, courageous enforcement of the EU competition rules is no cure for weaknesses at the evidentiary level, which could endanger the credibility of that enforcement – its most valuable feature. Indeed, EU competition enforcement may be at its height, but there is no certainty that it won’t fall from there — and the fall could be as spectacular as the cases that brought the European Commission to this point. I thus advocate for DG Competition to be wise and humble, to take one step at a time, to acknowledge that markets are generally able to self-correct, and to remember that the history of the economy is little more than a cemetery of forgotten giants that were once assumed to be unshakeable and unstoppable.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Steven J. Cernak, Partner at Bona Law and Adjunct Professor, University of Michigan Law School and Western Michigan University Thomas M. Cooley Law School. This paper represents the current views of the author alone and not necessarily the views of any past, present or future employer or client.

When some antitrust practitioners hear “the politicization of antitrust,” they cringe while imagining, say, merger approval hanging on the size of the bribe or closeness of the connection with the right politician.  Even a more benign interpretation of the phrase “politicization of antitrust” might drive some antitrust technocrats up the wall:  “Why must the mainstream media and, heaven forbid, politicians start weighing in on what antitrust interpretations, policy and law should be?  Don’t they know that we have it all figured out and, if we decide it needs any tweaks, we’ll make those over drinks at the ABA Antitrust Section Spring Meeting?”

While I agree with the reaction to the cringe-worthy interpretation of “politicization,” I think members of the antitrust community should be neither surprised by nor hostile to the second interpretation, that is, all the new attention from new people. Such attention is not unusual historically; more importantly, it provides an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect.

The Sherman Act itself, along with its state-level predecessors, was the product of a political reaction to perceived problems of the late 19th Century – hence all of today’s references to a “new gilded age” as echoes of the political arguments of 1890. Since then, the Sherman Act has not been immutable. The U.S. antitrust laws have changed – and new antitrust enforcers have even been added – when political debates convinced enough people that change was necessary. Today’s political discussion may be surprising to so many members of the antitrust community because they were not even alive when the last major change was debated and passed.

More generally, the U.S. political position on other government regulation of – or intervention or participation in – free markets has varied considerably over the years.  While controversial when they were passed, we now take Medicare and Medicaid for granted and debate “Medicare for all” – why shouldn’t an overhaul of the Sherman Act also be a legitimate political discussion?  The Interstate Commerce Commission might be gone and forgotten but at one time it garnered political support to regulate the most powerful industries of the late 19th and early 20th Century – why should a debate on new ways to regulate today’s powerful industries be out of the question? 

So today’s antitrust practitioners should avoid the temptation to proclaim an “end of history” in which all antitrust policy questions have been asked and answered and should instead, as some of us have been suggesting since at least the last election cycle, join the political debate. But now, for those of us who are generally supportive of the U.S. antitrust status quo, the question is how?

Some have been pushing back on the supposed evidence that a change in antitrust or other governmental policies is necessary.  For instance, in late 2015 the White House Council of Economic Advisers published a paper on increased concentration in many industries which others have used as evidence of a failure of antitrust law to protect competition.  Josh Wright has used several platforms to point out that the industry measurement was too broad and the concentration level too low to be useful in these discussions.  Also, he reminded readers that concentration and levels of competition are different concepts that are not necessarily linked.  On questions surrounding inequality and stagnation of standards of living, Russ Roberts has produced a series of videos that try to explain why any such questions are difficult to answer with the easy numbers available and why, perhaps, it is not correct that “the rich got all the gains.” 

Others, like Dan Crane, have advanced the debate by trying to get those commentators who are unhappy with the status quo to explain what they see as the problems and the proposed fixes. While it might be too much to ask for unanimity among a diverse group of commentators, the debate might be more productive now that some more specific complaints and solutions have begun to emerge.

Even if the problems are properly identified, we should not allow anyone to blithely assume that any – or any particular – increase in government oversight will solve them without creating different issues. The Federal Trade Commission tackled this issue in its final hearing on Competition and Consumer Protection in the 21st Century with a panel on Frank Easterbrook’s seminal “Limits of Antitrust” paper. I was fortunate enough to be on that panel and tried to summarize the ongoing importance of “Limits,” and advance the broader debate, by encouraging those who would change antitrust policy and increase supervision of the market to have appropriate “regulatory humility” (a term borrowed from former FTC Chairman Maureen Ohlhausen) about what can be accomplished.

I identified three varieties of humility present in “Limits” and pertinent here.  First, there is the humility to recognize that mastering anything as complex as an economy or any significant industry will require knowledge of innumerable items, some unseen or poorly understood, and so could be impossible.  Here, Easterbrook echoes Friedrich Hayek’s “Pretense of Knowledge” Nobel acceptance speech. 

Second, there is the humility to recognize that any judge or enforcer, like any other human being, is subject to her own biases and predilections, whether based on experience or the institutional framework within which she works.  While market participants might not be perfect, great thinkers from Madison to Kovacic have recognized that “men (or any agency leaders) are not angels” either.  As Thibault Schrepel has explained, it would be “romantic” to assume that any newly-empowered government enforcer will always act in the best interest of her constituents. 

Finally, there is the humility to recognize that humanity has been around a long time and has faced a number of issues, and that we might learn something from how our predecessors reacted to what appear to be similar issues in history. Given my personal history and current interests, I have focused on events from the automotive industry; however, the story of the unassailable power (until it wasn’t) of A&P and how it spawned the Robinson-Patman Act, ably told by Tim Muris and Jonathan Nuechterlein, might be more pertinent here. So challenging those advocating for big changes to explain why they are so confident this time around can be useful.

But while all those avenues of argument can be effective in explaining why greater government intervention in the form of new antitrust policies might be worse than the status quo, we also must do a better job at explaining why antitrust and the market forces it protects are actually good for society.  If democratic capitalism really has “lengthened the life span, made the elimination of poverty and famine thinkable, enlarged the range of human choice” as claimed by Michael Novak in The Spirit of Democratic Capitalism, we should do more to spread that good news. 

Maybe we need to spend more time telling and retelling the “I, Pencil” or “It’s a Wonderful Loaf” stories about how well markets can and do work at coordinating the self-interested behavior of many to the benefit of even more.  Then we can illustrate the limited role of antitrust in that complex effort – say, punishing any collusion among the mills or bakers in those two stories to ensure the process works as beautifully and simply displayed.  For the first time in decades, politicians and real people, like the consumers whose welfare we are supposed to be protecting, are paying attention to our wonderful world of antitrust.  We should seize the opportunity to explain what we do and why it matters and discuss if any improvements can be made.

The operative text of the Sherman Antitrust Act of 1890 is a scant 100 words:

Section 1:

Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony…

Section 2:

Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony…

Its short length and broad implications (“Every contract… in restraint of trade… is declared to be illegal”) didn’t give the courts much to go on in terms of textualism. As for originalism, the legislative history of the Sherman Act is mixed, and no consensus currently exists among experts. In practice, that means enforcement of the antitrust laws in the US has been a product of the evolutionary common law process (and has changed over time due to economic learning). 

Over the last fifty years, academics, judges, and practitioners have generally converged on the consumer welfare standard as the best approach for protecting market competition. Although some early supporters of aggressive enforcement (e.g., Brandeis and, more recently, Pitofsky) advocated for a more political conception of antitrust, that conception of the law has been decisively rejected by the courts as the contours of the law have evolved through judicial decisionmaking. 

In the last few years, however, a movement has reemerged to expand antitrust beyond consumer welfare to include political and social issues, ranging from broadly macroeconomic matters like rising income inequality and declining wages, to sociopolitical concerns like increasing political concentration, environmental degradation, a struggling traditional news industry, and declining localism. 

Although we at ICLE are decidedly in the consumer welfare camp, the contested “original intent” of the antitrust laws and the simple progress of evolving interpretation could conceivably support a broader, more-political interpretation. It is, at the very least, a timely and significant question whether and how political and social issues might be incorporated into antitrust law. Yet much of the discussion of politics and antitrust has been heavy on rhetoric and light on substance; it is dominated by non-expert, ideologically driven opinion. 

In this blog symposium we seek to offer a more substantive and balanced discussion of the issue. To that end, we invited a number of respected economists, legal scholars, and practitioners to offer their perspectives. 

The symposium comprises posts by Steve Cernak, Luigi Zingales and Filippo Maria Lancieri, Geoffrey A. Manne and Alec Stapp, Valentin Mircea, Ramsi Woodcock, Kristian Stout, and Cento Veljanovski.

Both Steve Cernak and Zingales and Lancieri offer big picture perspectives. Cernak sees the current debate as “an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect.” He then urges “regulatory humility” and outlines what this means in the context of antitrust.

Zingales and Lancieri note that “simply ‘politicizing’ the current antitrust regime would be very dangerous for the economic well-being of nations.” More specifically, they observe that “If used without clear and objective standards, antitrust remedies could easily add an extra layer of uncertainty or could even outright prohibit perfectly legitimate conduct, which would depress competition, investment, and growth.” Nonetheless, they argue that nuanced changes to the application of antitrust law may be justified because, “as markets become more concentrated, incumbent firms become better at distorting the political process in their favor.”

Manne and Stapp question the existence of a causal relationship between market concentration and political power, noting that there is little empirical support for such a claim.  Moreover, they warn that politicizing antitrust will inevitably result in more politicized antitrust enforcement actions to the detriment of consumers and democracy. 

Mircea argues that antitrust enforcement in the EU is already too political and that enforcement has been too focused on “Big Tech” companies. The result has been to chill investment in technology firms in the EU while failing to address legitimate antitrust violations in other sectors. 

Woodcock argues that the excessive focus on “Big Tech” companies as antitrust villains has come in no small part from a concerted effort by “Big Ink” (i.e. media companies), who resent the loss of advertising revenue that has resulted from the emergence of online advertising platforms. Woodcock suggests that the solution to this problem is to ban advertising. (We suspect that this cure would be worse than the disease but will leave substantive criticism to another blog post.)

Stout argues that while consumers may have legitimate grievances with Big Tech companies, these grievances do not justify widening the scope of antitrust, noting that “Concerns about privacy, hate speech, and, more broadly, the integrity of the democratic process are critical issues to wrestle with. But these aren’t antitrust problems.”

Finally, Veljanovski highlights potential problems with per se rules against cartels, noting that in some cases (most notably regulation of common pool resources such as fisheries), long-run consumer welfare may be improved by permitting certain kinds of cartel. However, he notes that in the case of polluting firms, a cartel that raises prices and lowers output is not likely to be the most efficient way to reduce the harms associated with pollution. This is of relevance given the DOJ’s case against certain automobile manufacturers, which are accused of colluding with California to set emission standards that are stricter than required under federal law.

It is tempting to conclude that U.S. antitrust law is not fundamentally broken and so does not require a major fix. Indeed, if any fix is needed, it is that the consumer welfare standard should be more widely applied, both in the U.S. and internationally.

FCC Commissioner Rosenworcel penned an article this week on the doublespeak coming out of the current administration with respect to trade and telecom policy. On one hand, she argues, the administration has proclaimed 5G to be an essential part of our future commercial and defense interests. But, she tells us, the administration has, on the other hand, imposed tariffs on Chinese products that are important for the development of 5G infrastructure, thereby raising the costs of roll-out. This is a sound critique: regardless where one stands on the reasonableness of tariffs, they unquestionably raise the prices of goods on which they are placed, and raising the price of inputs to the 5G ecosystem can only slow down the pace at which 5G technology is deployed.

Unfortunately, Commissioner Rosenworcel’s fervor for advocating the need to reduce the costs of 5G deployment seems animated by the courageous act of a Democratic commissioner decrying the policies of a Republican President and is limited to a context where her voice lacks any power to actually affect policy. Even as she decries trade barriers that would incrementally increase the costs of imported communications hardware, she staunchly opposes FCC proposals that would dramatically reduce the cost of deploying next generation networks.

Given the opportunity to reduce the costs of 5G deployment by a factor far more significant than that by which tariffs will increase them, her preferred role as Democratic commissioner is that of resistance fighter. She acknowledges that “we will need 800,000 of these small cells to stay competitive in 5G” — a number significantly above the “roughly 280,000 traditional cell towers needed to blanket the nation with 4G”. Yet, when she has had the opportunity to join the Commission in speeding deployment, she has instead dissented. Party over policy.

In this year’s “Historical Preservation” Order, for example, the Commission voted to expedite deployment on non-Tribal lands and to exempt small cell deployments from certain onerous review processes under both the National Historic Preservation Act and the National Environmental Policy Act of 1969. Commissioner Rosenworcel dissented from the Order, claiming that the FCC has “long-standing duties to consult with Tribes before implementing any regulation or policy that will significantly or uniquely affect Tribal governments, their land, or their resources.” Never mind that the FCC engaged in extensive consultation with Tribal governments prior to enacting this Order.

Indeed, in adopting the Order, the Commission found that the Order did nothing to disturb deployment on Tribal lands at all, and affected only the ability of Tribal authorities to reach beyond their borders to require fees and lengthy reviews for small cells on lands in which Tribes could claim merely an “interest.”

According to the Order, the average number of Tribal authorities seeking to review wireless deployments in a given geographic area nearly doubled between 2008 and 2017. During the same period, commenters consistently noted that the fees charged by Tribal authorities for review of deployments increased dramatically.

One environmental consultant noted that fees for projects he was involved with increased from an average of $2,000.00 in 2011 to $11,450.00 in 2017. Verizon’s fees are $2,500.00 per small cell site just for Tribal review. Of the 8,100 requests that Verizon submitted for Tribal review between 2012 and 2015, just 29 (0.3%) resulted in a finding that there would be an adverse effect on Tribal historic properties. That means that Verizon paid over $20 million to Tribal authorities over that period for historic reviews that resulted in statistically nil action. Along the same lines, the fees Sprint pays are so high that it estimates “it could construct 13,408 new sites for what 10,000 sites currently cost.”
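As a back-of-the-envelope check using only the figures quoted above, the “over $20 million” follows directly:

\[
8{,}100 \ \text{reviews} \times \$2{,}500 \ \text{per review} = \$20{,}250{,}000 \approx \$20 \ \text{million.}
\]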

In other words, Tribal review practices — of deployments not on Tribal land — impose a substantial tariff upon 5G deployment, increasing its cost and slowing its pace.

There is a similar story in the Commission’s adoption of, and Commissioner Rosenworcel’s partial dissent from, the recent Wireless Infrastructure Order.  Although Commissioner Rosenworcel offered many helpful suggestions (for instance, endorsing the OTARD proposal that Brent Skorup has championed) and nodded to the power of the market to solve many problems, she also dissented on central parts of the Order. Her dissent shows an unfortunate concern for provincial, political interests and places those interests above the Commission’s mission of ensuring timely deployment of advanced wireless communication capabilities to all Americans.

Commissioner Rosenworcel’s concern about the Wireless Infrastructure Order is that it would prevent state and local governments from imposing fees sufficient to recover costs incurred by the government to support wireless deployments by private enterprise, or from imposing aesthetic requirements on those deployments. Stated this way, her objections seem almost reasonable: surely local government should be able to recover the costs they incur in facilitating private enterprise; and surely local government has an interest in ensuring that private actors respect the aesthetic interests of the communities in which they build infrastructure.

The problem for Commissioner Rosenworcel is that the Order explicitly takes these concerns into account:

[W]e provide guidance on whether and in what circumstances aesthetic requirements violate the Act. This will help localities develop and implement lawful rules, enable providers to comply with these requirements, and facilitate the resolution of disputes. We conclude that aesthetics requirements are not preempted if they are (1) reasonable, (2) no more burdensome than those applied to other types of infrastructure deployments, and (3) objective and published in advance

The Order neither prohibits localities from recovering costs nor bars them from imposing aesthetic requirements. Rather, it requires merely that those costs and requirements be reasonable. The purpose of the Order isn’t to restrict localities from engaging in reasonable conduct; it is to prohibit them from engaging in unreasonable, costly conduct, while providing guidance as to what cost recovery and aesthetic considerations are reasonable (and therefore permissible).

The reality is that localities have a long history of using cost recovery — and especially “soft” or subjective requirements such as aesthetics — to extract significant rents from communications providers. In the 1980s this slowed the deployment and increased the costs of cable television. In the 2000s this slowed the deployment and increased the cost of fiber-based Internet service. Today this is slowing the deployment and increasing the costs of advanced wireless services. And like any tax — or tariff — the cost is ultimately borne by consumers.

Although we are broadly sympathetic to arguments about local control (and other 10th Amendment-related concerns), the FCC’s goal in the Wireless Infrastructure Order was not to trample upon the autonomy of small municipalities; it was to implement a reasonably predictable permitting process that would facilitate 5G deployment. Those affected would not be the small, local towns attempting to maintain a desirable aesthetic for their downtowns, but large and politically powerful cities like New York City, where fees for a single small cell installation can exceed $5,000.00. Such extortionate fees are effectively a tax on smartphone users and others who will utilize 5G for communications. According to the Order, capping these fees is estimated to stimulate over $2.4 billion in additional infrastructure buildout, with widespread benefits to consumers and the economy.

Meanwhile, Commissioner Rosenworcel cries “overreach!” “I do not believe the law permits Washington to run roughshod over state and local authority like this,” she said. Her federalist bent is welcome — or it would be, if it weren’t in such stark contrast to her anti-federalist preference for preempting states from establishing rules governing their own internal political institutions when it suits her preferred political objective. We are referring, of course, to Rosenworcel’s support for the previous administration’s FCC’s decision to preempt state laws prohibiting the extension of municipal governments’ broadband systems. The order doing so was plainly illegal from the moment it was passed, as every court that has looked at it has held. That she was ok with. But imposing reasonable federal limits on states’ and localities’ ability to extract political rents by abusing their franchising process is apparently beyond the pale.

Commissioner Rosenworcel is right that the FCC should try to promote market solutions like Brent’s OTARD proposal. And she is also correct in opposing dangerous and destructive tariffs that will increase the cost of telecommunications equipment. Unfortunately, she gets it dead wrong when she supports a stifling regulatory status quo that will surely make it unduly difficult and expensive to deploy next generation networks — not least for those most in need of them. As Chairman Pai noted in his Statement on the Order: “When you raise the cost of deploying wireless infrastructure, it is those who live in areas where the investment case is the most marginal — rural areas or lower-income urban areas — who are most at risk of losing out.”

Reconciling those two positions entails nothing more than pointing to the time-honored Washington tradition of Politics Over Policy. The point is not (entirely) to call out Commissioner Rosenworcel; she’s far from the only person in Washington to make this kind of crass political calculation. In fact, she’s far from the only FCC Commissioner ever to have done so.

One need look no further than the previous FCC Chairman, Tom Wheeler, to see the hypocritical politics of telecommunications policy in action. (And one need look no further than Tom Hazlett’s masterful book, The Political Spectrum: The Tumultuous Liberation of Wireless Technology, from Herbert Hoover to the Smartphone to find a catalogue of its long, sordid history).

Indeed, Larry Downes has characterized Wheeler’s reign at the FCC (following a lengthy recounting of all its misadventures) as having left the agency “more partisan than ever”:

The lesson of the spectrum auctions—one right, one wrong, one hanging in the balance—is the lesson writ large for Tom Wheeler’s tenure at the helm of the FCC. While repeating, with decreasing credibility, that his lodestone as Chairman was simply to encourage “competition, competition, competition” and let market forces do the agency’s work for it, the reality, as these examples demonstrate, has been something quite different.

The Wheeler FCC has instead been driven by a dangerous combination of traditional rent-seeking behavior by favored industry clients, potent pressure from radical advocacy groups and their friends in the White House, and a sincere if misguided desire by Wheeler to father the next generation of network technologies, which quickly mutated from sound policy to empty populism even as technology continued on its own unpredictable path.

* * *

And the Chairman’s increasingly autocratic management style has left the agency more political and more partisan than ever, quick to abandon policies based on sound legal, economic and engineering principles in favor of bait-and-switch proceedings almost certain to do more harm than good, if only unintentionally.

The great irony is that, while Commissioner Rosenworcel’s complaints are backed by a legitimate concern that the Commission has waited far too long to take action on spectrum issues, the criticism should properly fall not upon the current Chair, but — you guessed it — his predecessor, Chairman Wheeler (and his predecessor, Julius Genachowski). Of course, in true partisan fashion, Rosenworcel was fawning in her praise for her political ally’s spectrum agenda, lauding it on more than one occasion as going “to infinity and beyond!”

Meanwhile, Rosenworcel has taken virtually every opportunity to chide and castigate Chairman Pai’s efforts to get more spectrum into the marketplace, most often criticizing them as too little, too slow, and too late. Yet from any objective perspective, the current FCC has been addressing spectrum issues at a breakneck pace, as fast as, or faster than, any prior Commission. As with spectrum, there is an upper limit to the speed at which federal bureaucracy can work, and Chairman Pai has kept the Commission pushed right up against that limit.

It’s a shame Commissioner Rosenworcel prefers to blame Chairman Pai for the problems she had a hand in creating, and President Trump for problems she has no ability to correct. It’s even more a shame that, having an opportunity to address the problems she so often decries — by working to get more spectrum deployed and put into service more quickly and at lower cost to industry and consumers alike — she prefers to dutifully wear the hat of resistance, instead.

But that’s just politics, we suppose. And like any tariff, it makes us all poorer.

A recent exchange between Chris Walker and Philip Hamburger about Walker’s ongoing empirical work on the Chevron doctrine (the idea that judges must defer to reasonable agency interpretations of ambiguous statutes) gives me a long-sought opportunity to discuss what I view as the greatest practical problem with the Chevron doctrine: it increases both politicization and polarization of law and policy. In the interest of being provocative, I will frame the discussion below by saying that both Walker & Hamburger are wrong (though actually I believe both are quite correct in their respective critiques). In particular, I argue that Walker is wrong that Chevron decreases politicization (it actually increases it, vice his empirics); and I argue Hamburger is wrong that judicial independence is, on its own, a virtue that demands preservation. Rather, I argue, Chevron increases overall politicization across the government; and judicial independence can and should play an important role in checking legislative abdication of its role as a politically-accountable legislature in a way that would moderate that overall politicization.

Walker, along with co-authors Kent Barnett and Christina Boyd, has done some of the most important and interesting work on Chevron in recent years, empirically studying how the Chevron doctrine has affected judicial behavior (see here and here) as well as that of agencies (and, I would argue, through them the Executive) (see here). But the more important question, in my mind, is how it affects the behavior of Congress. (Walker has explored this somewhat in his own work, albeit focusing less on Chevron than on how the role agencies play in the legislative process implicitly transfers Congress’s legislative functions to the Executive).

My intuition is that Chevron dramatically exacerbates Congress’s worst tendencies, encouraging Congress to push its legislative functions to the executive and to do so in a way that increases the politicization and polarization of American law and policy. I fear that Chevron effectively allows, and indeed encourages, Congress to abdicate its role as the most politically-accountable branch by deferring politically difficult questions to agencies in ambiguous terms.

One of, and possibly the, best ways to remedy this situation is to reestablish the role of judge as independent decisionmaker, as Hamburger argues. But the virtue of judicial independence is not endogenous to the judiciary. Rather, judicial independence has an instrumental virtue, at least in the context of Chevron. Where Congress has problematically abdicated its role as a politically-accountable decisionmaker by deferring important political decisions to the executive, judicial refusal to defer to executive and agency interpretations of ambiguous statutes can force Congress to remedy problematic ambiguities. This, in turn, can return the responsibility for making politically-important decisions to the most politically-accountable branch, as envisioned by the Constitution’s framers.

A refresher on the Chevron debate

Chevron is one of the defining doctrines of administrative law, both as a central concept and focal debate. It stands generally for the proposition that when Congress gives agencies ambiguous statutory instructions, it falls to the agencies, not the courts, to resolve those ambiguities. Thus, if a statute is ambiguous (the question at “step one” of the standard Chevron analysis) and the agency offers a reasonable interpretation of that ambiguity (“step two”), courts are to defer to the agency’s interpretation of the statute instead of supplying their own.

This judicially-crafted doctrine of deference is typically justified on several grounds. For instance, agencies generally have greater subject-matter expertise than courts so are more likely to offer substantively better constructions of ambiguous statutes. They have more resources that they can dedicate to evaluating alternative constructions. They generally have a longer history of implementing relevant Congressional instructions so are more likely to be attuned to Congressional intent – both of the statute’s enacting and present Congresses. And they are subject to more direct Congressional oversight in their day-to-day operations and exercise of statutory authority than the courts so are more likely concerned with and responsive to Congressional direction.

Chief among the justifications for Chevron deference is, as Walker says, “the need to reserve political (or policy) judgments for the more politically accountable agencies.” This is at core a separation-of-powers justification: the legislative process is fundamentally a political process, so the Constitution assigns responsibility for it to the most politically-accountable branch (the legislature) instead of the least politically-accountable branch (the judiciary). In turn, the act of interpreting statutory ambiguity is an inherently legislative process – the underlying theory being that Congress intended to leave such ambiguity in the statute in order to empower the agency to interpret it in a quasi-legislative manner. Thus, under this view, courts should defer both to this Congressional intent that the agency be empowered to interpret its statute (and, should this prove problematic, it is up to Congress to change the statute or to face political ramifications), and the courts should defer to the agency interpretation of that statute because agencies, like Congress, are more politically accountable than the courts.

Chevron has always been an intensively studied and debated doctrine. This debate has grown more heated in recent years, to the point that there is regularly scholarly discussion about whether Chevron should be repealed or narrowed and what would replace it if it were somehow curtailed – and discussion of the ongoing vitality of Chevron has entered into Supreme Court opinions and the appointments process with increasing frequency. These debates generally focus on a few issues. A first issue is that Chevron amounts to a transfer of the legislature’s Constitutional powers and responsibilities over creating the law to the executive, where the law ordinarily is only meant to be carried out. The underlying concern is that this has contributed to the increase in the power of the executive relative to the legislature. A second, related, issue is that Chevron contributes to the (over)empowerment of independent agencies – agencies that are already out of favor with many of Chevron’s critics as Constitutionally-infirm entities whose already-specious power is dramatically increased when Chevron limits the judiciary’s ability to check their use of already-broad Congressionally-delegated authority.

A third concern about Chevron, following on these first two, is that it strips the judiciary of its role as independent arbiter of judicial questions. That is, it has historically been the purview of judges to answer statutory ambiguities and fill in legislative interstices.

Chevron is also a focal point for more generalized concerns about the power of the modern administrative state. In this context, Chevron stands as a representative of a broader class of cases – State Farm, Auer, Seminole Rock, Fox v. FCC, and the like – that have been criticized as centralizing legislative, executive, and judicial powers in agencies, allowing Congress to abdicate its role as politically-accountable legislator, displacing the judiciary’s role in interpreting the law, and raising due process concerns for those subject to rules promulgated by federal agencies.

Walker and his co-authors have empirically explored the effects of Chevron in recent years, using robust surveys of federal agencies and judicial decisions to understand how the doctrine has affected the work of agencies and the courts. His most recent work (with Kent Barnett and Christina Boyd) has explored how Chevron affects judicial decisionmaking. Framing the question by explaining that “Chevron deference strives to remove politics from judicial decisionmaking,” they ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?” They find that, empirically speaking, “the Chevron Court’s objective to reduce partisan judicial decision-making has been quite effective.” By instructing judges to defer to the political judgments (or just statutory interpretations) of agencies, judges are less political in their own decisionmaking.

Hamburger responds to this finding somewhat dismissively – and, indeed, the finding is almost tautological: “of course, judges disagree less when the Supreme Court bars them from exercising their independent judgment about what the law is.” (While a fair critique, I would temper it by arguing that it is nonetheless an important empirical finding – empirics that confirm important theory are as important as empirics that refute it, and are too often dismissed.)

Rather than focus on concerns about politicized decisionmaking by judges, Hamburger focuses instead on the importance of judicial independence – on it being “emphatically the duty of the Judicial Department to say what the law is” (quoting Marbury v. Madison). He reframes Walker’s results, arguing that “deference” to agencies is really “bias” in favor of the executive. “Rather than reveal diminished politicization, Walker’s numbers provide strong evidence of diminished judicial independence and even of institutionalized judicial bias.”

So which is it? Does Chevron reduce bias by de-politicizing judicial decisionmaking? Or does it introduce new bias in favor of the (inherently political) executive? The answer is probably that it does both. The more important answer, however, is that neither is the right question to ask.

What’s the correct measure of politicization? (or, You get what you measure)

Walker frames his study of the effects of Chevron on judicial decisionmaking by explaining that “Chevron deference strives to remove politics from judicial decisionmaking. Such deference to the political branches has long been a bedrock principle for at least some judicial conservatives.” Based on this understanding, his project is to ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?”

This framing, that one of Chevron’s goals is to remove politics from judicial decisionmaking, is not wrong. But this goal may be more accurately stated as being to prevent the judiciary from encroaching upon the political purposes assigned to the executive and legislative branches. This restatement offers an important change in focus. It emphasizes the concern about politicizing judicial decisionmaking as a separation of powers issue. This stands in contrast to the concern that, on consequentialist grounds, judges should not make politicized decisions – that is, that judges should avoid political decisions because such decisions lead to substantively worse outcomes.

It is of course true that, as unelected officials with lifetime appointments, judges are the least politically accountable to the polity of any government officials. Judges’ decisions, therefore, can reasonably be expected to be less representative of, or responsive to, the concerns of the voting public than decisions of other government officials. But not all political decisions need to be directly politically accountable in order to be effectively politically accountable. A judicial interpretation of an ambiguous law, for instance, can be interpreted as a request, or even a demand, that Congress be held to political account. And where Congress is failing to perform its constitutionally-defined role as a politically-accountable decisionmaker, it may do less harm to the separation of powers for the judiciary to make political decisions that force politically-accountable responses by Congress than for the judiciary to respect its constitutional role while the Congress ignores its role.

Before going too far down this road, I should pause to label the reframing of the debate that I have impliedly proposed. To my mind, the question isn’t whether Chevron reduces political decisionmaking by judges; the question is how Chevron affects the politicization of, and ultimately accountability to the people for, the law. Critically, there is no “conservation of politicization” principle. Institutional design matters. One could imagine a model of government where Congress exercises very direct oversight over what the law is and how it is implemented, with frequent elections and a Constitutional prohibition on all but the most express and limited forms of delegation. One can also imagine a more complicated form of government in which responsibilities for making law, executing law, and interpreting law, are spread across multiple branches (possibly including myriad agencies governed by rules that even many members of those agencies do not understand). And one can reasonably expect greater politicization of decisions in the latter compared to the former – because there are more opportunities for saying that the responsibility for any decision lies with someone else (and therefore for politicization) in the latter than in the “the buck stops here” model of the former.

In the common-law tradition, judges exercised an important degree of independence because their job was, necessarily and largely, to “say what the law is.” For better or worse, we no longer live in a world where judges are expected to routinely exercise that level of discretion, and therefore to have that level of independence. Nor do I believe that “independence” is necessarily or inherently a criterion for the judiciary, at least in principle. I therefore somewhat disagree with Hamburger’s assertion that Chevron necessarily amounts to a problematic diminution in judicial independence.

Again, I return to a consequentialist understanding of the purposes of judicial independence. In my mind, we should consider the need for judicial independence in terms of whether “independent” judicial decisionmaking tends to lead to better or worse social outcomes. And here I do find myself sympathetic to Hamburger’s concerns about judicial independence. The judiciary is intended to serve as a check on the other branches. Hamburger’s concern about judicial independence is, in my mind, driven by an overwhelmingly correct intuition that the structure envisioned by the Constitution is one in which the independence of judges is an important check on the other branches. With respect to the Congress, this means, in part, ensuring that Congress is held to political account when it does legislative tasks poorly or fails to do them at all.

The courts abdicate this role when they allow agencies to save poorly drafted statutes through interpretation of ambiguity.

Judicial independence moderates politicization

Hamburger tells us that “Judges (and academics) need to wrestle with the realities of how Chevron bias and other administrative power is rapidly delegitimizing our government and creating a profound alienation.” Huzzah. Amen. I couldn’t agree more. Preach! Hear-hear!

Allow me to present my personal theory of how Chevron affects our political discourse. In the vernacular, I call this Chevron Step Three. At Step Three, Congress corrects any mistakes made by the executive or independent agencies in implementing the law or made by the courts in interpreting it. The subtle thing about Step Three is that it doesn’t exist – and, knowing this, Congress never bothers with the politically costly and practically difficult process of clarifying legislation.

To the contrary, Chevron encourages the legislature expressly not to legislate. The more expedient approach for a legislator who disagrees with a Chevron-backed agency action is to campaign on the disagreement – that is, to politicize it. If the EPA interprets the Clean Air Act too broadly, we need to retake the White House to get a new administrator in there to straighten out the EPA’s interpretation of the law. If the FCC interprets the Communications Act too narrowly, we need to retake the White House to change the chair so that we can straighten out that mess! And on the other side, we need to keep the White House so that we can protect these right-thinking agency interpretations from reversal by the loons on the other side that want to throw out all of our accomplishments. The campaign slogans write themselves.

So long as most agencies’ governing statutes are broad enough that those agencies can keep the ship of state afloat, even if drifting rudderless, legislators have little incentive to turn inward to engage in the business of government with their legislative peers. Rather, they are freed to turn outward towards their next campaign, vilifying or deifying the administrative decisions of the current government as best suits their electoral prospects.

The sharp-eyed observer will note that I’ve added a piece to the Chevron puzzle: the process described above assumes that a new administration can come in after an election and simply rewrite all of the rules adopted by the previous administration. Not to put too fine a point on the matter, but this is exactly what administrative law allows (see Fox v. FCC and State Farm). The underlying logic, which is really nothing more than an expansion of Chevron, is that statutory ambiguity delegates to agencies a “policy space” within which they are free to operate. So long as agency action stays within that space – which often allows for diametrically-opposed substantive interpretations – the courts say that it is up to Congress, not the Judiciary, to provide course corrections. Anything else would amount to politically unaccountable judges substituting their policy judgments (that is, acting independently) for those of politically-accountable legislators and administrators.

In other words, the politicization of law seen in our current political moment is largely a function of deference and a lack of stare decisis combined. A virtue of stare decisis is that it forces Congress to act to directly address politically undesirable opinions. Because agencies are not bound by stare decisis, an alternative, and politically preferable, way for Congress to remedy problematic agency decisions is to politicize the issue – instead of addressing the substantive policy issue through legislation, individual members of Congress can campaign on it. (Regular readers of this blog will be familiar with one contemporary example of this: the recent net neutrality CRA vote, which is widely recognized as having very little chance of ultimate success but is being championed by its proponents as a way to influence the 2018 elections.) This is more directly aligned with the individual member of Congress’s own incentives, because, by keeping and placing more members of her party in Congress, her party will be able to control the leadership of the agency, which will in turn control the shape of that agency’s policy. In other words, instead of channeling the attention of individual Congressional actors inwards to work together to develop law and policy, it channels it outwards towards campaigning on the ills and evils of the opposing administration and party vice the virtues of their own party.

The virtue of judicial independence, of judges saying what they think the law is – or even what they think the law should be – is that it forces a politically-accountable decision. Congress can either agree, or disagree; but Congress must do something. Merely waiting for the next administration to come along will not be sufficient to alter the course set by the judicial interpretation of the law. Where Congress has abdicated its responsibility to make politically-accountable decisions by deferring those decisions to the executive or agencies, the political-accountability justification for Chevron deference fails. In such cases, the better course for the courts may well be to enforce Congress’s role under the separation of powers by refusing deference and returning the question to Congress.

 

Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulating rates for interconnection and extending the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC to not only apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but to also start a rulemaking to require edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. In fact, it is a plausible scenario that overregulating data collection online could lead to the greater use of paywalls to access content. This may be a greater threat to Internet Openness than anything ISPs have done.

And then yesterday, the first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to really avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it is clear that it has the ability to impose de facto rate regulation through case-by-case adjudication. Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

In sum, while neither of these actions was contemplated by the FCC (they claim), such abstract rules are going to lead to random complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service as Title II has had the (supposedly) unintended consequence of sweeping in far more (both in scope of application and rules) than was supposedly bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.

Like most libertarians I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information gathering programs is worrying. But we can’t and shouldn’t pretend like there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something,” without evincing any nuanced understanding of the issues involved.

Frankly, I see this as of a piece with much of the privacy debate that points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think its important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies — is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.

I am disappointed but not surprised to see that my former employer filed an official antitrust complaint against Google in the EU.  The blog post by Microsoft’s GC, Brad Smith, summarizing its complaint is here.

Most obviously, there is a tragic irony to the most antitrust-beleaguered company ever filing an antitrust complaint against its successful competitor.  Of course the specifics are not identical, but all of the atmospheric and general points that Microsoft itself made in response to the claims against it are applicable here.  It smacks of competitors competing not in the marketplace but in the regulators’ offices.  It promotes a kind of weird protectionism, directing the EU’s enforcement powers against a successful US company . . . at the behest of another US competitor.  Regulators will always be fighting last year’s battles to the great detriment of the industry.  Competition and potential competition abound, even where it may not be obvious (Linux for Microsoft; Facebook for Google, for example).  Etc.  Microsoft was once the world’s most powerful advocate for more sensible, restrained, error-cost-based competition policy.  That it now finds itself on the opposite side of this debate is unfortunate for all of us.

Brad’s blog post is eloquent (as he always is) and forceful.  And he acknowledges the irony.  And of course he may be right on the facts.  Unfortunately we’ll have to resort to a terribly-costly, irretrievably-flawed and error-prone process to find out–not that the process is likely to result in a very reliable answer anyway.  Where I think he is most off base is where he draws–and asks regulators to draw–conclusions about the competitive effects of the actions he describes.  It is certain that Google has another story and will dispute most or all of the facts.  But even without that information we can dispute the conclusions that Google’s actions, if true, are necessarily anticompetitive.  In fact, as Josh and I have detailed at length here and here, these sorts of actions–necessitated by the realities of complex, innovative and vulnerable markets and in many cases undertaken by the largest and the smallest competitors alike–are more likely pro-competitive.  More important, efforts to ferret out the anti-competitive among them will almost certainly harm welfare rather than help it–particularly when competitors are welcomed in to the regulators’ and politicians’ offices in the process.

As I said, disappointing.  It is not inherently inappropriate for Microsoft to resort to this simply because it has been the victim of such unfortunate “competition” in the past, nor is Microsoft obligated or expected to demonstrate intellectual or any other sort of consistency.  But knowing what it does about the irretrievable defects of the process and the inevitable costliness of its consequences, it is disingenuous or naive (the Nirvana fallacy) for it to claim that it is simply engaging in a reliable effort to smooth over a bumpy competitive landscape.  That may be the ideal of antitrust enforcement, but no one knows as well as Microsoft that the reality is far from that ideal.  To claim implicitly that, in this case, things will be different is, as I said, disingenuous.  And likely really costly in the end for all of us.

Josh has recently discussed his thoughts about the intellectual trajectory of the newly-minted CFPB and how that intellectual trajectory might influence the selection of the Bureau’s first director–presumed to be either Michael Barr or Elizabeth Warren.  His is a brief, dispassionate and intellectually-honest assessment.  But given Simon Johnson’s brief, intemperate and intellectually-devoid assessment of the issue, I’m afraid Josh may be a bit naive.

Johnson’s concerns are, as he presents them, just political.  After pointing out his own bottom line (“it would be a complete travesty not to put the strongest possible regulator in charge of protecting consumers” [that means Elizabeth Warren, by the way]), he assesses the implications of the decision:

This can now go only one of two ways.

  1. Elizabeth Warren gets the job.  Bridges are mended and the White House regains some political capital.  Secretary Geithner is weakened slightly but he’ll recover.
  2. Someone else gets the job, despite Treasury’s claims that Elizabeth Warren was not blocked.  The deception in this scenario would be nauseating – and completely blatant.  “Everyone was considered on their merits” and “the best candidate won” will convince who [sic] exactly?

Despite the growing public reaction, outcome #2 is the most likely and the White House needs to understand this, plain and clear – there will be complete and utter revulsion at its handling of financial regulatory reform both on this specific issue and much more broadly.  The administration’s position in this area is already weak, its achievements remain minimal, its speaking points are lame, and the patience of even well-inclined people is wearing thin.

Failing to appoint Elizabeth Warren would be the straw that breaks the camel’s back.  It will go down in the history books as a turning point – downwards – for this administration.

What galls me about this kind of assessment is that it is, well, “nauseating – and completely blatant.”  It’s not an assessment, really.  It’s a threat.  It’s an effort to paint the politics of the situation in a way that makes the speaker’s preferred outcome (admittedly possibly arrived at in an intellectually-honest and sincere fashion) the only politically-viable outcome, in the process stripping all of the intellectual content out of the discussion and forcing intellectually-honest opponents of the speaker’s view to choose between intellectual honesty and, for example, the willful destruction of the entire Democratic agenda.  Hardly an environment for honest debate, but then I suppose that’s not really the goal.

Congratulations (or is it condolences?) to my friend, colleague and former dean at Lewis & Clark Law School, Jim Huffman, who has secured the Oregon Republican nomination for US Senate.  Jim now faces an arduous uphill battle against Ron Wyden in the general election.  As a point of reference: Wyden has more than $3 million in his coffers; Huffman has about $300,000.  But Jim is an appealing candidate in a state like Oregon that has traditionally sent moderates (both Republican and Democrat) to the senate.  Wyden is no moderate, and Jim’s strong libertarianism could place him in that sweet spot in the center that appeals to both the overwhelming Republican majority living in the Eastern part of the state, as well as a good portion of the Democratic base living in the larger cities in the Western part.  I’m not optimistic, but Jim is the first candidate I’ve ever supported, in any political race.  Maybe we’ll get him over for a guest post so we can grill him on the essential antitrust and financial regulatory issues of the day . . . .