Archives For political economy

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.  

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year-long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, without a license from Yelp and asserting fair use, Google displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt out of having even snippets displayed in local search results by committing to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.
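
To make the mechanics concrete, here is a minimal sketch, using Python’s standard-library robots.txt parser, of how a generic crawler opt-out works: the site publishes directives telling a named crawler which pages to stay away from, and the crawler checks them before fetching or displaying content. The directives and URLs below are hypothetical, and note that the FTC commitments themselves operated through a dedicated web-based notice form rather than robots.txt:

```python
from urllib import robotparser

# Hypothetical robots.txt a review site could serve to keep a named
# crawler away from its review pages (illustrative only; the actual
# FTC commitments used a separate web-based opt-out form).
robots_txt = """
User-agent: Googlebot
Disallow: /biz/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching:
print(rp.can_fetch("Googlebot", "https://example.com/biz/some-restaurant"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/about"))                # True
```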

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to (1) address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide to the FTC several of the items Hawley demands in his CID (civil investigative demand); others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order, the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.  

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of consumer protection acts (CPAs) is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

Last week, the Internet Association (“IA”) — a trade group representing some of America’s most dynamic and fastest growing tech companies, including the likes of Google, Facebook, Amazon, and eBay — presented the incoming Trump Administration with a ten-page policy paper entitled “Policy Roadmap for New Administration, Congress.”

The document’s content is not surprising, given its source: It is, in essence, a summary of the trade association’s members’ preferred policy positions, none of which is new or newly relevant. Which is fine, in principle; lobbying on behalf of members is what trade associations do — although we should be somewhat skeptical of a policy document that purports to represent the broader social welfare while it advocates for members’ preferred policies.

Indeed, despite being labeled a “roadmap,” the paper is backward-looking in certain key respects — a fact that leads to some strange syntax: “[the document is a] roadmap of key policy areas that have allowed the internet to grow, thrive, and ensure its continued success and ability to create jobs throughout our economy” (emphasis added). Since when is a “roadmap” needed to identify past policies? Indeed, as Bloomberg News reporter Joshua Brustein wrote:

The document released Monday is notable in that the same list of priorities could have been sent to a President-elect Hillary Clinton, or written two years ago.

As a wishlist of industry preferences, this would also be fine, in principle. But as an ostensibly forward-looking document, aimed at guiding policy transition, the IA paper is disappointingly un-self-aware. Rather than delineating an agenda aimed at improving policies to promote productivity, economic development and social cohesion throughout the economy, the document is overly focused on preserving certain regulations adopted at the dawn of the Internet age (when the internet was capitalized). Even more disappointing given the IA member companies’ central role in our contemporary lives, the document evinces no consideration of how Internet platforms themselves should strive to balance rights and responsibilities in new ways that promote meaningful internet freedom.

In short, the IA’s Roadmap constitutes a policy framework dutifully constructed to enable its members to maintain the status quo. While that might also serve to further some broader social aims, it’s difficult to see in the approach anything other than a defense of what got us here — not where we go from here.

To take one important example, the document reiterates the IA’s longstanding advocacy for the preservation of the online-intermediary safe harbors of the 20-year-old Digital Millennium Copyright Act (“DMCA”) — which were adopted during the era of dial-up, and before any of the principal members of the Internet Association even existed. At the same time, however, it proposes to reform one piece of legislation — the Electronic Communications Privacy Act (“ECPA”) — precisely because, at 30 years old, it has long since become hopelessly out of date. But surely if outdatedness is a justification for asserting the inappropriateness of existing privacy/surveillance legislation — as seems proper, given the massive technological and social changes surrounding privacy — the same concern should apply to copyright legislation with equal force, given the arguably even-more-substantial upheavals in the economic and social role of creative content in society today.

Of course there “is more certainty in reselling the past, than inventing the future,” but a truly valuable roadmap for the future from some of the most powerful and visionary companies in America should begin to tackle some of the most complicated and nuanced questions facing our country. It would be nice to see a Roadmap premised upon a well-articulated theory of accountability across all of the Internet ecosystem in ways that protect property, integrity, choice and other essential aspects of modern civil society.

Each of IA’s companies was principally founded on a vision of improving some aspect of the human condition; in many respects they have succeeded. But as society changes, even past successes may later become inconsistent with evolving social mores and economic conditions, necessitating thoughtful introspection and, often, policy revision. The IA can do better than pick and choose from among existing policies based on unilateral advantage and a convenient repudiation of responsibility.

Netflix’s latest net neutrality hypocrisy (yes, there have been others. See here and here, for example) involves its long-term, undisclosed throttling of its video traffic on AT&T’s and Verizon’s wireless networks, while it lobbied heavily for net neutrality rules from the FCC that would prevent just such throttling by ISPs.

It was Netflix that coined the term “strong net neutrality,” in an effort to import interconnection (the connections between ISPs and edge provider networks) into the net neutrality fold. That alone was a bastardization of what net neutrality purportedly stood for, as I previously noted:

There is a reason every iteration of the FCC’s net neutrality rules, including the latest, have explicitly not applied to backbone interconnection agreements: Interconnection over the backbone has always been open and competitive, and it simply doesn’t give rise to the kind of discrimination concerns net neutrality is meant to address.

That Netflix would prefer not to pay for delivery of its content isn’t surprising. But net neutrality regulations don’t — and shouldn’t — have anything to do with it.

But Netflix did something else with “strong net neutrality.” It tied it to consumer choice:

This weak net neutrality isn’t enough to protect an open, competitive Internet; a stronger form of net neutrality is required. Strong net neutrality additionally prevents ISPs from charging a toll for interconnection to services like Netflix, YouTube, or Skype, or intermediaries such as Cogent, Akamai or Level 3, to deliver the services and data requested by ISP residential subscribers. Instead, they must provide sufficient access to their network without charge. (Emphasis added).

A focus on consumers is laudable, of course, but when the focus is on consumers there’s no reason to differentiate between ISPs (to whom net neutrality rules apply) and content providers entering into contracts with ISPs to deliver their content (to whom net neutrality rules don’t apply).

And Netflix has just shown us exactly why that’s the case.

Netflix can and does engage in management of its streams in order (presumably) to optimize consumer experience as users move between networks, devices and viewers (e.g., native apps vs Internet browser windows) with very different characteristics and limitations. That’s all well and good. But as we noted in our Policy Comments in the FCC’s Open Internet Order proceeding,

In this circumstance, particularly when the content in question is Netflix, with 30% of network traffic, both the network’s and the content provider’s transmission decisions may be determinative of network quality, as may the users’ device and application choices.

As a 2011 paper by a group of network engineers studying the network characteristics of video streaming data from Netflix and YouTube noted:

This is a concern as it means that a sudden change of application or container in a large population might have a significant impact on the network traffic. Considering the very fast changes in trends this is a real possibility, the most likely being a change from Flash to HTML5 along with an increase in the use of mobile devices…. [S]treaming videos at high resolutions can result in smoother aggregate traffic while at the same time linearly increase the aggregate data rate due to video streaming.

Again, a concern with consumers is admirable, but Netflix isn’t concerned with consumers. It’s concerned at most with consumers of Netflix, while they are consuming Netflix. But the reality is that Netflix’s content management decisions can adversely affect consumers overall, including its own subscribers when they aren’t watching Netflix.

And here’s the huge irony. The FCC’s net neutrality rules are tailor-made to guarantee that Netflix will never have any incentive to take these externalities into account in its own decisions. What’s more, they ensure that ISPs are severely hamstrung in managing their networks for the benefit of all consumers, not least because their interconnection deals with large content providers like Netflix are now being closely scrutinized.

It’s great that Netflix thinks it should manage its video delivery to optimize viewing under different network conditions. But net neutrality rules ensure that Netflix bears no cost for overwhelming the network in the process. Essentially, short of building new capacity — at great expense to all ISP subscribers, of course — ISPs can’t do much about it, either, under the rules. And, of course, the rules also make it impossible for ISPs to negotiate for financial help from Netflix (or its heaviest users) in paying for those upgrades.

On top of this, net neutrality advocates have taken aim at usage-based billing and other pricing practices that would help with the problem by enabling ISPs to charge their heaviest users more in order to alleviate the inherent subsidy by normal users that flat-rate billing entails. (Netflix itself, as one of the articles linked above discusses at length, is hypocritically inconsistent on this score).
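
To see the cross-subsidy in the simplest possible terms, consider a toy calculation with entirely invented numbers: when usage-sensitive network costs are recovered through a uniform flat rate, light users pay more than the costs they impose and heavy users pay less. A sketch in Python (every figure here is hypothetical):

```python
# Toy illustration of the flat-rate cross-subsidy (all numbers invented).
cost_per_gb = 0.05  # hypothetical usage-sensitive network cost per GB
usage_gb = {"light_user_1": 50, "light_user_2": 50, "heavy_user": 500}

total_cost = cost_per_gb * sum(usage_gb.values())  # $30.00 in total
flat_rate = total_cost / len(usage_gb)             # $10.00 charged to each user

for user, gb in usage_gb.items():
    cost_caused = cost_per_gb * gb
    print(f"{user}: imposes ${cost_caused:.2f} in cost, pays ${flat_rate:.2f}, "
          f"net transfer received: ${cost_caused - flat_rate:+.2f}")
```

Under usage-based pricing, each subscriber would instead pay something close to the cost he actually imposes — precisely the incentive the flat-rate-only approach forecloses.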

As we also noted in our OIO Policy Comments:

The idea that consumers and competition generally are better off when content providers face no incentive to take account of congestion externalities in their pricing (or when users have no incentive to take account of their own usage) runs counter to basic economic logic and is unsupported by the evidence. In fact, contrary to such claims, usage-based pricing, congestion pricing and sponsored content, among other nonlinear pricing models, would, in many circumstances, further incentivize networks to expand capacity (not create artificial scarcity).

Some concern for consumers. Under Netflix’s approach consumers get it coming and going: Either their non-Netflix traffic is compromised for the sake of Netflix’s traffic, or they have to pay higher subscription fees to ISPs for the privilege of accommodating Netflix’s ever-expanding traffic loads (4K videos, anyone?) — whether they ever use Netflix or not.

Sometimes, apparently, Netflix throttles its own traffic in order to “help” a few consumers. (That it does so without disclosing the practice is pretty galling, especially given the enhanced transparency rules in the Open Internet Order — something Netflix also advocated for, and which also apply only to ISPs and not to content providers). But its self-aggrandizing advocacy for the FCC’s latest net neutrality rules reveals that its first priority is to screw over consumers, so long as it can shift the blame and the cost to others.

I received word today that Douglass North passed away yesterday at the age of 95 (obit here). Professor North shared the Nobel Prize in Economics with Robert Fogel in 1993 for his work in economic history on the role of institutions in shaping economic development and performance.

Doug was one of my first professors in graduate school at Washington University. Many of us in our first year crammed into Doug’s economic history class for fear that he might retire and we not get the chance to study under him. Little did we expect that he would continue teaching into his 80s. The text for our class was the pre-publication manuscript of his book, Institutions, Institutional Change and Economic Performance. Doug’s course offered an interesting juxtaposition to the traditional neoclassical microeconomics course for first-year PhD students. His work challenged the simplifying assumptions of the neoclassical system and shed a whole new light on understanding economic history, development and performance. I still remember that day in October 1993 when the department was abuzz with the announcement that Doug had received the Nobel Prize. It was affirming and inspiring.

As I started work on my dissertation, I had hoped to incorporate a historical component on the early development of crude oil futures trading in the 1930s so I could get Doug involved on my committee. Unfortunately, there was not enough information still available to provide any analysis (there was one news reference to a new crude futures exchange, but nothing more — and the historical records of the NY Mercantile Exchange had been lost in a fire), and I had to focus solely on the deregulatory period of the late 1970s and early 1980s. I remember joking at one of our economic history workshops that I wasn’t sure if it counted as economic history since it happened during Doug’s lifetime.

Doug was one of the founding conspirators for the International Society for New Institutional Economics (now the Society for Institutional & Organizational Economics) in 1997, along with Ronald Coase and Oliver Williamson. Although the three had strong differences of opinion concerning certain aspects of their respective theoretical approaches, they understood the generally complementary nature of their work and its importance not just for the economics profession, but for understanding how societies and organizations perform and evolve and the role institutions play in that process.

The opportunity to work around these individuals, particularly with North and Coase, strongly shaped and influenced my understanding not only of economics, but of why a broader perspective of economics is so important for understanding the world around us. That experience profoundly affected my own research interests and my teaching of economics. Some of Doug’s papers continue to play an important role in courses I teach on economic policy. Students, especially international students, continue to be inspired by his explanation of the roles of institutions, how they affect markets and societies, and the forces that lead to institutional change.

As we prepare to celebrate Thanksgiving in the States, Doug’s passing is a reminder of how much I have to be thankful for over my career. I’m grateful for having had the opportunity to know and to work with Doug. I’m grateful that we had an opportunity to bring him to Mizzou in 2003 for our CORI Seminar series, at which he spoke on Understanding the Process of Economic Change (the title of his next book at the time). And I’m especially thankful for the influence he had on my understanding of economics and that his ideas will continue to shape economic thinking and economic policy for years to come.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business and social norms counsel skepticism as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have grown. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC (unfair methods of competition) enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulating rates for interconnection and extending the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC to not only apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but to also start a rulemaking to require edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. In fact, it is plausible that overregulating data collection online could lead to greater use of paywalls to access content. That may be a greater threat to Internet openness than anything ISPs have done.
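
Mechanically, Do Not Track is nothing more than an HTTP request header: a browser with the setting enabled sends “DNT: 1” with each request, and honoring it simply means checking for that header before engaging in tracking. A minimal, purely illustrative sketch in Python (the function name and surrounding logic are hypothetical, not drawn from any rule or petition):

```python
# Browsers with Do Not Track enabled send the header "DNT: 1".
# A cooperating server checks for it before, e.g., setting tracking
# cookies. (Illustrative only; no regulation prescribes this code.)
def request_opted_out(headers: dict) -> bool:
    """Return True if the request carries a Do Not Track signal."""
    return headers.get("DNT") == "1"

incoming = {"Host": "example.com", "User-Agent": "ExampleBrowser/1.0", "DNT": "1"}
if request_opted_out(incoming):
    print("Honor DNT: skip behavioral tracking for this request.")
else:
    print("No DNT signal present.")
```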

And then yesterday, the first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to really avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication. Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

In sum, while neither of these actions was contemplated by the FCC (they claim), such abstract rules are going to lead to random complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service as Title II has had the (supposedly) unintended consequence of sweeping in far more (both in scope of application and rules) than was supposedly bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.

Today, the International Center for Law & Economics released a white paper, co-authored by Executive Director Geoffrey Manne and Senior Fellow Julian Morris, entitled Dangerous Exception: The detrimental effects of including “fair use” copyright exceptions in free trade agreements.

Dangerous Exception explores the relationship between copyright, creativity and economic development in a networked global marketplace. In particular, it examines the evidence for and against mandating a U.S.-style fair use exception to copyright via free trade agreements like the Trans-Pacific Partnership (TPP), and through “fast-track” trade promotion authority (TPA).

In the context of these ongoing trade negotiations, some organizations have been advocating for the inclusion of dramatically expanded copyright exceptions in place of more limited language requiring that such exceptions conform to the “three-step test” implemented by the 1994 TRIPs Agreement.

The paper argues that if broad fair use exceptions are infused into trade agreements they could increase piracy and discourage artistic creation and innovation — especially in nations without a strong legal tradition implementing such provisions.

The expansion of digital networks across borders, combined with historically weak copyright enforcement in many nations, poses a major challenge to a broadened fair use exception. The modern digital economy calls for appropriate, but limited, copyright exceptions — not their expansion.

The white paper is available here.

In its February 25 North Carolina Dental decision, the U.S. Supreme Court, per Justice Anthony Kennedy, held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state.  In so ruling, the Court struck a significant blow against protectionist rent-seeking and for economic liberty.  (As I stated in a recent Heritage Foundation legal memorandum, “[a] Supreme Court decision accepting this [active supervision] principle might help to curb special-interest favoritism conferred through state law.  At the very least, it could complicate the efforts of special interests to protect themselves from competition through regulation.”)

A North Carolina law subjects the licensing of dentistry to a North Carolina State Board of Dental Examiners (Board), six of whose eight members must be licensed dentists.  After dentists complained to the Board that non-dentists were charging lower prices than dentists for teeth whitening, the Board sent cease-and-desist letters to non-dentist teeth whitening providers, warning that the unlicensed practice of dentistry is a crime.  This led non-dentists to cease teeth whitening services in North Carolina.  The Federal Trade Commission (FTC) held that the Board’s actions violated Section 5 of the FTC Act, which prohibits unfair methods of competition; the Fourth Circuit agreed; and the Court affirmed the Fourth Circuit’s decision.

In its decision, the Court rejected the claim that state action immunity, which confers immunity on the anticompetitive conduct of states acting in their sovereign capacity, applied to the Board’s actions.  The Court stressed that where a state delegates control over a market to a non-sovereign actor, immunity applies only if the state accepts political accountability by actively supervising that actor’s decisions.  The Court applied its Midcal test, which requires (1) clear state articulation and (2) active state supervision of decisions by non-sovereign actors for immunity to attach.  The Court held that entities designated as state agencies are not exempt from active supervision when they are controlled by market participants, because allowing an exemption in such circumstances would pose the risk of self-dealing that the second prong of Midcal was created to address.

Here, the Board did not contend that the state exercised any (let alone active) supervision over its anticompetitive conduct.  The Court closed by summarizing “a few constant requirements of active supervision,” namely, (1) the supervisor must review the substance of the anticompetitive decision, (2) the supervisor must have the power to veto or modify particular decisions for consistency with state policy, (3) “the mere potential for state supervision is not an adequate substitute for a decision by the State,” and (4) “the state supervisor may not itself be an active market participant.”  The Court cautioned, however, that “the adequacy of supervision otherwise will depend on all the circumstances of a case.”

Justice Samuel Alito, joined by Justices Antonin Scalia and Clarence Thomas, dissented, arguing that the Court ignored precedent that state agencies created by the state legislature (“[t]he Board is not a private or ‘nonsovereign’ entity”) are shielded by the state action doctrine.  “By straying from this simple path” and assessing instead whether individual agencies are subject to regulatory capture, the Court spawned confusion, according to the dissenters.  Midcal was inapposite, because it involved a private trade association.  The dissenters feared that the majority’s decision may require states “to change the composition of medical, dental, and other boards, but it is not clear what sort of changes are needed to satisfy the test that the Court now adopts.”  The dissenters concluded “that determining when regulatory capture has occurred is no simple task.  That answer provides a reason for relieving courts from the obligation to make such determinations at all.  It does not explain why it is appropriate for the Court to adopt the rather crude test for capture that constitutes the holding of today’s decision.”

The Court’s holding in North Carolina Dental helpfully limits the scope of the Court’s infamous Parker v. Brown decision (which shielded from federal antitrust attack a California raisin producers’ cartel overseen by a state body), without excessively interfering in sovereign state prerogatives.  State legislatures may still choose to create self-interested professional regulatory bodies – their sovereignty is not compromised.  Now, however, they will have to (1) make it clearer up front that they intend to allow those bodies to displace competition, and (2) subject those bodies to disinterested third party review.  These changes should make it far easier for competition advocates (including competition agencies) to spot and publicize welfare-inimical regulatory schemes, and weaken the incentive and ability of rent-seekers to undermine competition through state regulatory processes.  All told, the burden these new judicially-imposed constraints will impose on the states appears relatively modest, and should be far outweighed by the substantial welfare benefits they are likely to generate.

U.S. antitrust law focuses primarily on private anticompetitive restraints, leaving the most serious impediments to a vibrant competitive process – government-initiated restraints – relatively free to flourish.  Thus the Federal Trade Commission (FTC) should be commended for its July 16 congressional testimony that spotlights a fast-growing and particularly pernicious species of (largely state) government restriction on competition – occupational licensing requirements.  Today such practitioners (to name just a few) as cat groomers, flower arrangers, music therapists, tree trimmers, frozen dessert retailers, eyebrow threaders, massage therapists (human and equine), and “shampoo specialists,” in addition to the traditional categories of doctors, lawyers, and accountants, are subject to professional licensure.  Indeed, since the 1950s, the coverage of such rules has risen dramatically, as the percentage of Americans requiring government authorization to do their jobs has risen from less than five percent to roughly 30 percent.

Even though some degree of licensing responds to legitimate health and safety concerns (e.g., no fly-by-night heart surgeons), much occupational regulation creates unnecessary barriers to entry into a host of jobs.  Excessive licensing confers unwarranted benefits on fortunate incumbents, while effectively barring large numbers of capable individuals from the workforce.  (For example, many individuals skilled in natural hair braiding simply cannot afford the 2,100 hours required to obtain a license in Iowa, Nebraska, and South Dakota.)  It also imposes additional economic harms, as the FTC’s testimony explains:  “[Occupational licensure] regulations may lead to higher prices, lower quality services and products, and less convenience for consumers.  In the long term, they can cause lasting damage to competition and the competitive process by rendering markets less responsive to consumer demand and by dampening incentives for innovation in products, services, and business models.”  Licensing requirements are often enacted in tandem with other occupational regulations that unjustifiably limit the scope of beneficial services particular professionals can supply – for instance, a ban on tooth cleaning by dental hygienists not acting under a dentist’s supervision that boosts dentists’ income but denies treatment to poor children who have no access to dentists.

What legal and policy tools are available to chip away at these pernicious and costly laws and regulations, which largely are the fruit of successful special interest lobbying?  The FTC’s competition advocacy program, which responds to requests from legislators and regulators to assess the economic merits of proposed laws and regulations, has focused on unwarranted restrictions on such licensed professionals as real estate brokers, electricians, accountants, lawyers, dentists, dental hygienists, nurses, eye doctors, opticians, and veterinarians.  Retrospective reviews of FTC advocacy efforts suggest the program may have helped achieve some notable reforms (for example, 74% of requestors, regulators, and bill sponsors surveyed responded that FTC advocacy initiatives influenced outcomes).  Nevertheless, advocacy’s reach and effectiveness inherently are limited by FTC resource constraints, by the need to obtain “invitations” to submit comments, and by the incentive and ability of licensing scheme beneficiaries to oppose regulatory and legislative reforms.

Former FTC Chairman William Kovacic and James Cooper (currently at George Mason University’s Law and Economics Center) have suggested that federal and state antitrust experts could be authorized to provide ex ante input into regulatory policymaking.  As the authors recognize, however, several factors sharply limit the effectiveness of such an initiative.  In particular, “the political feasibility of this approach at the legislative level is slight,” federal mandates requiring ex ante reviews would raise serious federalism concerns, and resource constraints would loom large.

Antitrust challenges to anticompetitive licensing schemes likewise offer little solace.  They are limited by the antitrust “state action” doctrine, which shields conduct undertaken pursuant to “clearly articulated” state legislative language that displaces competition – a category that generally will cover anticompetitive licensing requirements.  Even a Supreme Court decision next term (in North Carolina Dental v. FTC) holding that state regulatory boards dominated by self-interested market participants must be actively supervised to enjoy state action immunity would have relatively little bite.  It would not prevent states from issuing simple statutory commands that create unwarranted occupational barriers, nor would it stop them from implementing “adequate” supervisory schemes designed to approve anticompetitive state board rules.

What, then, is to be done?

Constitutional challenges to unjustifiable licensing strictures may offer the best long-term solution to curbing this regulatory epidemic.  As Clark Neily points out in Terms of Engagement, there is a venerable constitutional tradition of protecting the liberty interest in earning a living, reflected in well-reasoned late 19th and early 20th century “Lochner-era” Supreme Court opinions.  Even if Lochner is not rehabilitated, however, there are a few recent jurisprudential “straws in the wind” that support efforts to rein in “irrational” occupational licensure barriers.  Perhaps acting under divine inspiration, the Fifth Circuit in St. Joseph Abbey (2013) ruled that Louisiana statutes requiring all casket manufacturers to be licensed funeral directors – laws that prevented monks from earning a living by making simple wooden caskets – served no purpose other than to protect the funeral industry and, as such, violated the 14th Amendment’s Equal Protection and Due Process Clauses.  In particular, the Fifth Circuit held that protectionism, standing alone, is not a legitimate state interest sufficient to establish a “rational basis” for a state statute, and that absent other legitimate state interests, the law must fall.  Because the Sixth and Ninth Circuits also have held that intrastate protectionism standing alone is not a legitimate purpose under rational basis review, while the Tenth Circuit has held to the contrary, the time may soon be ripe for the Supreme Court to resolve the circuit split and, hopefully, delegitimize pure economic protectionism.  Such a development would place added pressure on defenders of protectionist occupational licensing schemes.  Other possible avenues for constitutional challenges to protectionist licensing regimes (perhaps, for example, under the Dormant Commerce Clause) also merit exploration, of course.  The Institute for Justice already is performing yeoman’s work in litigating numerous cases involving unjustified licensing and other encroachments on economic liberty; perhaps its example can inspire pro bono efforts by others.

Eliminating anticompetitive occupational licensing rules – and, more generally, vindicating economic liberties that have too long been neglected – is obviously a long-term project, and far-reaching reform will not happen in the near term.  Nevertheless, while we, the currently living, may in the long run be dead (pace Keynes), our posterity will be alive, and we owe it to them to pursue the vindication of economic liberties under the Constitution.

Paul H. Rubin and Joseph S. Rubin advance the provocative position that some crony capitalism may be welfare-enhancing.  With all due respect, I am not convinced by their defense of government-business cronyism.  “Second-best correction” arguments can be made with respect to ANY inefficient government rule.  In reality, it is almost impossible to calibrate the degree of the distortion created by the initial regulation, so there is no way of stating credibly that the “counter-distortion” is on net favorable to society.  More fundamentally, such counter-distortions are the products of rent-seeking by firms and other interest groups, which care nothing about the net social surplus effects of either the initial distortion or the counter-distortion.  The problem with allowing counter-distortions is that firms harmed by them (think of less politically connected companies that are hurt when a big player takes advantage of Export-Import Bank subsidies) will either suffer or lobby (using scarce resources) for “third-line” or “tertiary” distortions to alleviate the harmful effects of the initial counter-distortions.  Those new distortions will in turn spawn a continuing series of responses, causing additional unanticipated consequences and attendant welfare losses.

It follows that the best policy is not to defend counter-distortions, which very seldom if ever (and then only through sheer chance) appropriately offset the initial distortions.  (Since the counter-distortions will be rife with new regulatory complexities, they are bound to be costly to implement and highly likely to be destructive of social surplus.)  Rather, the best, simplest, and cleanest policy is to work to get rid of the initial distortions.  If companies complain about other policies that hurt them (generated, for instance, by the Foreign Corrupt Practices Act, or by Food and Drug Administration regulatory delays), the answer is to reform or repeal those bad policies, not to retain inherently welfare-distortive laws such as the Ex-Im Bank authorization.  The alternative approach would devolve into a justification for a web of ever more complex and intrusive federal regulations and interest group-generated “carve-outs.”

This logic applies generally.  For example, the best solution to the welfare-reducing effect of particular Obamacare mandates is not to create a patchwork of exceptions for certain politically favored businesses and labor groups, but, rather, to repeal counterproductive government-induced health care market distortions.  Similarly, the answer to an economically damaging tax code is not to create a patchwork of credits for politically favored industries, but, rather, to simplify the code and apply it neutrally, thereby promoting economic growth across industry sectors.

The argument that Ex-Im Bank activities are an example of a “welfare-enhancing” counter-distortion is particularly strained, given that most U.S. exporters gain no benefits from Ex-Im Bank funding while the American taxpayer foots the bill.  Indeed, capital is diverted away from “unlucky” exporters to the politically connected few who know how to play the Washington game (well-capitalized companies that are least in need of the taxpayer’s largesse).  As stated by Doug Bandow in Forbes, “[n]o doubt, Exim financing makes some deals work.  But others die because ExIm diverts credit from firms without agency backing.  Unfortunately, it is easier to see the benefits of the former than the costs of the latter.”  In short, the recitation of Ex-Im Bank’s alleged “benefits” to American exporters who are “seen” ignores the harm imposed on other “unseen” American companies and taxpayers.  (What’s more, Ex-Im Bank’s subsidies give foreign governments an incentive to respond with subsidy programs of their own.)  Thus, the case for retaining Ex-Im Bank is nothing more than another example of Bastiat’s “broken window” fallacy.

In sum, the goal should be to simplify legal structures and repeal welfare-inimical laws and regulations, not to try to correct them through new, inherently flawed regulatory intrusions.  In my view, the only examples of rent-seeking that might yield net social benefits are those associated with regulatory reform (such as the expiration of the Ex-Im Bank authorization) or with the creation of new markets (as Gordon Brady and I have argued).