
In recent years, a diverse cross-section of advocates and politicians has leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress will amend Section 230 or the courts will reinterpret it, but any change is bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on the insurable content risks that most frequently result in litigation, and it considers the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held as responsible as the content’s creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from disputes over what constitutes an “interactive computer service,” what qualifies as “good faith” restriction of content, and the scope of the grant of legal immunity, which applies regardless of whether the restricted material is constitutionally protected.

While Congress and other stakeholders debate alternate statutory frameworks, several test cases have simultaneously been working their way through the judicial system, and some states have either passed or are considering legislation to address complaints about Section 230. Some have suggested new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which both sophisticated and unsophisticated platforms rely on extensively. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. When combined with the potential to reach consumers far beyond the original platform and target audience, the lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts, a process to periodically review and update those wordings, and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but such policies generally provide at least basic coverage for the three primary content risks: defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollar of loss up to a specific threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss and for 10% of every dollar of loss above that threshold. Such coinsurance structures are often used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.
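For readers unfamiliar with how a retention and coinsurance interact, here is a minimal arithmetic sketch using the hypothetical $25,000 retention and 10% coinsurance figures above; the claim amount is invented for illustration:

```python
def split_loss(loss, retention=25_000, coinsurance=0.10):
    """Split a loss between insured and insurer under a retention-plus-coinsurance
    structure. Figures mirror the hypothetical example in the text."""
    if loss <= retention:
        return loss, 0.0
    excess = loss - retention
    insured_share = retention + coinsurance * excess
    insurer_share = (1 - coinsurance) * excess
    return insured_share, insurer_share

# A $225,000 claim: the insured bears $25,000 + 10% of $200,000 = $45,000,
# and the insurer pays the remaining $180,000.
print(split_loss(225_000))
```

Because the insured keeps a slice of every additional dollar of loss, including defense spend, it retains a direct financial incentive to oversee counsel’s billing, which is the control effect described above.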

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
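To make the layering concrete, the sketch below allocates a single hypothetical loss across a small tower with one quota-share layer; the attachment points, limits, shares, and insurer names are all invented for illustration:

```python
def allocate_loss(loss, layers):
    """Allocate a loss across a tower of layers.

    Each layer is (attachment, limit, participants), where `participants`
    maps an insurer to its quota share of that layer.
    """
    payments = {}
    for attachment, limit, participants in layers:
        # The portion of the loss falling within this layer.
        layer_loss = min(max(loss - attachment, 0), limit)
        for insurer, share in participants.items():
            payments[insurer] = payments.get(insurer, 0) + share * layer_loss
    return payments

# A $12M loss against a $5M primary layer and a $10M excess layer
# written 60/40 on a quota-share basis by two insurers.
tower = [
    (0, 5_000_000, {"Primary Co": 1.0}),
    (5_000_000, 10_000_000, {"Excess A": 0.6, "Excess B": 0.4}),
]
print(allocate_loss(12_000_000, tower))
# {'Primary Co': 5000000.0, 'Excess A': 4200000.0, 'Excess B': 2800000.0}
```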

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds, who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or of specific content lines. Underwriters are not all created equal; underwriting is a challenging discipline that requires predicting future losses, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 would generally make every platform responsible for all the content it disseminates, regardless of who created the material, requiring at least some additional editorial review. This would particularly disadvantage platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint-response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and it would cause as many problems as it solves, or more. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on legislation classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

An increase in the frequency of litigation is almost inevitable in the wake of any change to the status quo, whether made by Congress or the courts. Major litigation would likely focus on the social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, their deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.
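As a purely illustrative sketch of that sensitivity (every figure below is hypothetical), consider how a small shift in an assumed claim frequency moves expected losses across a book of policies:

```python
policies = 10_000        # policies in a hypothetical book
avg_severity = 150_000   # assumed average cost per claim, in dollars

for frequency in (0.010, 0.012):  # 1.0% vs. 1.2% assumed annual claim frequency
    expected_loss = policies * frequency * avg_severity
    print(f"frequency {frequency:.1%}: expected losses ${expected_loss:,.0f}")

# Moving the frequency assumption from 1.0% to 1.2% adds $3 million in expected
# losses on this book -- a 20% jump produced by a 0.2-point change in one input.
```

The asbestos experience described next shows what happens when such a shift arrives retroactively, after premiums have long since been collected.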

For example, in the 1980s, certain courts adopted the triple-trigger and continuous-trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when the link between asbestos and mesothelioma was unknown and therefore not incorporated into policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge adequately increased premiums if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, e.g., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would manifest, and how much pain they would cause for all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application while minimizing any resulting litigation. Regardless of whether it is accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Include an express statement in the definition of “interactive computer service,” or infer one judicially, clarifying that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies. This would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review, and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis, and clarify that it bears the burden of proving that its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

Over at the Federalist Society’s blog, there has been an ongoing debate about what to do about Section 230. While there has long been variety in what we call conservatism in the United States, the most prominent strains have agreed on at least the following: constitutionally limited government, free markets, and prudence in policy-making. You would think all of these values would be important in the Section 230 debate. It seems, however, that some are willing to throw these principles away in pursuit of a temporary political victory over perceived “Big Tech censorship.”

Constitutionally Limited Government: Congress Shall Make No Law

The First Amendment of the United States Constitution states: “Congress shall make no law… abridging the freedom of speech.” Originalists on the Supreme Court have noted that this makes clear that the Constitution protects against state action, not private action. In other words, the Constitution protects a negative conception of free speech, not a positive conception.

Despite this, some conservatives believe that Section 230 should be about promoting First Amendment values by mandating that private entities be held to the same standards as the government.

For instance, in his essay “Big Tech and the Whole First Amendment,” Craig Parshall of the American Center for Law and Justice (ACLJ) stated:

What better example of objective free speech standards could we have than those First Amendment principles decided by justices appointed by an elected president and confirmed by elected members of the Senate, applying the ideals laid down by our Founders? I will take those over the preferences of brilliant computer engineers any day.

In other words, he thinks Section 230 should be amended to give Big Tech the “subsidy” of immunity only if it commits to a First Amendment-like editorial regime. To defend the constitutionality of such “restrictions on Big Tech,” he points to the Turner intermediate-scrutiny standard, under which the Supreme Court upheld must-carry provisions imposed on cable operators. In particular, Parshall latches on to the “bottleneck monopoly” language from the case to argue that Big Tech is similarly situated to cable providers at the time of the case.

Turner, however, turned more on the “special characteristics of the cable medium” that gave it bottleneck power than on that market power itself. As stated by the Supreme Court:

When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home. Hence, simply by virtue of its ownership of the essential pathway for cable speech, a cable operator can prevent its subscribers from obtaining access to programming it chooses to exclude. A cable operator, unlike speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.

Turner v. FCC, 512 U.S. 622, 656 (1994).

None of the Big Tech companies has a comparable ability to silence competing speakers with a flick of the switch. In fact, the relationship goes the other way on the Internet. Users can (and do) use multiple Big Tech companies’ services, as well as those of competitors that are not quite as big. Users are the ones who can switch with a click or a swipe. There is no basis for treating Big Tech companies any differently than other First Amendment speakers.

Like newspapers, Big Tech companies must use their editorial discretion to determine what is displayed and where. Just like those newspapers, Big Tech has the First Amendment right to editorial discretion. This, not Section 230, is the bedrock law that gives Big Tech companies the right to remove content.

Thus, when Rachel Bovard of the Internet Accountability Project argues that the FCC should remove the ability of tech platforms to engage in viewpoint discrimination, she makes a serious error in arguing it is Section 230 that gives them the right to remove content.

Immediately upon noting that the NTIA petition seeks clarification on the relationship between (c)(1) and (c)(2), Bovard moves right to concern over the removal of content. “Unfortunately, embedded in that section [(c)(2)] is a catch-all phrase, ‘otherwise objectionable,’ that gives tech platforms discretion to censor anything that they deem ‘otherwise objectionable.’ Such broad language lends itself in practice to arbitrariness.” 

In order for CDA 230 to “give[] tech platforms discretion to censor,” those platforms would have to lack that discretion absent CDA 230. Bovard totally misses the point of the First Amendment argument, stating:

Yet DC’s tech establishment frequently rejects this argument, choosing instead to focus on the First Amendment right of corporations to suppress whatever content they so choose, never acknowledging that these choices, when made at scale, have enormous ramifications. . . . 

But this argument intentionally sidesteps the fact that Sec. 230 is not required by the First Amendment, and that its application to tech platforms privileges their First Amendment behavior in a unique way among other kinds of media corporations. Newspapers also have a First Amendment right to publish what they choose—but they are subject to defamation and libel laws for content they write, or merely publish. Media companies also make First Amendment decisions subject to a thicket of laws and regulations that do not similarly encumber tech platforms.

There is the merest kernel of truth in the lines quoted above. Newspapers are indeed subject to defamation and libel laws for what they publish. But, as should be obvious, liability for publication entails actually publishing something. And what some conservatives are concerned about is platforms’ ability to not publish something: to take down conservative content.

It might be simpler if the First Amendment treated published speech and unpublished speech the same way. But it doesn’t. One can be liable for what one speaks, writes, or publishes on behalf of others. Indeed, even with the full protection of the First Amendment, there is no question that newspapers can be held responsible for delicts caused by content they publish. But no newspaper has ever been held responsible for anything it didn’t publish.

Free Markets: Competition as the Bulwark Against Abuses, not Regulation

Conservatives have long believed in the importance of property rights, exchange, and the power of the free market to promote economic growth. Competition is seen as the protector of the consumer, not big-government regulators. From the latter half of the twentieth century into the twenty-first, conservatives have fought for capitalism over socialism, free markets over regulation, and competition over cronyism. But in the name of combating anti-conservative bias online, they are willing to throw these principles away.

The bedrock belief in the right of property owners to decide the terms of how they want to engage with others is fundamental to American conservatism. As stated by none other than Bovard (along with co-author Jim DeMint) in their book Conservative: Knowing What to Keep:

Capitalism is nothing more or less than the extension of individual freedom from the political and cultural realms to the economy. Just as government isn’t supposed to tell you how to pray, or what to think, or what sports teams to follow or books to read, it’s not supposed to tell you what to do with your own money and property.

Conservatives normally believe that it is the free choices of consumers and producers in the marketplace that maximize consumer welfare, rather than the choices of politicians and bureaucrats. Competition, in other words, is what protects us from abuses in the marketplace. Again as Bovard and Demint rightly put it:

Under the free enterprise system, money is not redistributed by a central government bureau. It goes wherever people see value. Those who create value are rewarded which then signals to the rest of the economy to up their game. It’s continuous democracy.

To get around this, both Parshall and Bovard make much of the “market dominance” of tech platforms. The essays take the position that tech platforms have nearly unassailable monopoly power which makes them unaccountable. Bovard claims that “mega-corporations have as much power as the government itself—and in some ways, more power, because theirs is unchecked and unaccountable.” Parshall even connects this to antitrust law, stating:  

This brings us to another kind of innovation, one that’s hidden from the public view. It has to do with how Big Tech companies use both algorithms plus human review during content moderation. This review process has resulted in the targeting, suppression, or down-ranking of primarily conservative content. As such, this process, should it continue, should be considered a kind of suppressive “innovation” in a quasi-antitrust analysis.

How the process harms “consumer welfare” is obvious. A more competitive market could produce social media platforms designing more innovational content moderation systems that honor traditional free speech and First Amendment norms while still offering features and connectivity akin to the huge players.

Antitrust law, in theory, would be a good way to handle issues of market power and consumer harm that results from non-price effects. But it is difficult to see how antitrust could handle the issue of political bias well:

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality is added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences need to come at the expense of another’s in moderation decisions.

Neither antitrust nor quasi-antitrust regimes are well-suited to dealing with the perceived harm of anti-conservative bias. However unfulfilling this is to some conservatives, competition and choice are better answers to perceived political bias than the heavy hand of government. 

Prudence: Awareness of Unintended Consequences

Another bedrock principle of conservatism is to be aware of unintended consequences when making changes to long-standing laws and policies. In regulatory matters, cost-benefit analysis is employed to evaluate whether policies are improving societal outcomes. Using economic thinking to understand the likely responses to changes in regulation is fundamental to American conservatism. Or as Bovard and Demint’s book title suggests, conservatism is about knowing what to keep. 

Bovard has argued that since conservatism is a set of principles, not a dogmatic ideology, it can be in favor of fighting against the collectivism of Big Tech companies imposing their political vision upon the world. Conservatism, in this Kirkian sense, doesn’t require particular policy solutions. But this analysis misses what has worked about Section 230 and how the very tech platforms she decries have greatly benefited society. Prudence means understanding what has worked and only changing what has worked in a way that will improve upon it.

The benefits of Section 230 immunity in promoting platforms for third-party speech are clear. It is not an overstatement to say that Section 230 contains “The Twenty-Six Words that Created the Internet.” It is important to note that Section 230 is not only available to Big Tech companies. It is available to all online platforms that host third-party speech. Any reform efforts at Section 230 must know what to keep.

In a sense, subsection (c)(1) of Section 230 does, indeed, provide greater protection for published content online than the First Amendment on its own would offer: it extends the scope of published content for which an online service cannot be held liable beyond what the First Amendment alone would protect, reaching otherwise actionable third-party content.

But let’s be clear about the extent of this protection. It doesn’t protect anything a platform itself publishes, or even anything in which it has a significant hand in producing. Why don’t offline newspapers enjoy this “handout” (though the online versions clearly do for comments)? Because they don’t need it, and because — yes, it’s true — it comes at a cost. How much third-party content would newspapers publish without significant input from the paper itself if only they were freed from the risk of liability for such content? None? Not much? The New York Times didn’t build and sustain its reputation on the slapdash publication of unedited ramblings by random commentators. But what about classifieds? Sure. There would be more classified ads, presumably. More to the point, newspapers would exert far less oversight over the classified ads, saving themselves the expense of moderating this one, small corner of their output.

There is a cost to traditional newspapers from being denied the extended protections of Section 230. But the effect is less third-party content in the parts of the paper over which they didn’t wish to exercise the same level of editorial control. If Section 230 is a “subsidy,” as critics put it, then what it is subsidizing is the hosting of third-party speech.

The Internet would look vastly different if it were just the online reproduction of the offline world. If tech platforms were responsible for all third-party speech to the degree that newspapers are for op-eds, then they would likely moderate it to the same degree, making sure there is nothing that could expose them to liability before publishing. This means there would be far less third-party speech on the Internet.

In fact, it could be argued that it is smaller platforms that would be most affected by the repeal of Section 230 immunity. Without it, it is likely that only the biggest tech platforms would have the necessary resources to dedicate to content moderation in order to avoid liability.

Proposed Section 230 reforms will likely have unintended consequences in reducing third-party speech altogether, including conservative speech. For instance, a few bills have proposed only allowing moderation for reasons defined by statute if the platform has an “objectively reasonable belief” that the speech fits under such categories. This would likely open up tech platforms to lawsuits over the meaning of “objectively reasonable belief” that could deter them from wanting to host third-party speech altogether. Similarly, lawsuits for “selective enforcement” of a tech platform’s terms of service could lead them to either host less speech or change their terms of service.

This could actually exacerbate the issue of political bias. Allegedly anti-conservative tech platforms could respond to a “good faith” requirement for enforcing their terms of service by becoming explicitly biased. If a tech platform’s terms of service state grounds that would exclude conservative speech, a requirement of “good faith” enforcement of those terms will do nothing to prevent the bias.

Conclusion

Conservatives would do well to return to their first principles in the Section 230 debate. The Constitution’s First Amendment, respect for free markets and property rights, and appreciation for unintended consequences in changing tech platform incentives all caution against the current proposals to condition Section 230 immunity on platforms giving up editorial discretion. Whether or not tech platforms engage in anti-conservative bias, there’s nothing conservative about abdicating these principles for the sake of political expediency.

An oft-repeated claim at conferences, in the media, and among left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US, and that this has caused economic harm, has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that look at the data on concentration in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled “The Industrial Revolution in Services”:

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing.

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 
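A toy numerical example (all market shares invented) illustrates how national and local concentration can move in opposite directions when a more efficient firm expands into additional cities:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares, in points."""
    return sum((100 * s) ** 2 for s in shares)

# Before: two equal-sized cities, each split 50/50 by two purely local firms.
local_before = [0.50, 0.50]                       # within either city
national_before = [0.25, 0.25, 0.25, 0.25]        # four firms nationally

# After: a chain enters both cities and takes 50% in each; each local firm keeps 25%.
local_after = [0.50, 0.25, 0.25]
national_after = [0.50, 0.125, 0.125, 0.125, 0.125]

print(hhi(local_before), hhi(local_after))        # 5000.0 -> 3750.0: local HHI falls
print(hhi(national_before), hhi(national_after))  # 2500.0 -> 3125.0: national HHI rises
```

Each city gains a competitor and becomes less concentrated even though the national market looks more concentrated; since the local market is typically the relevant antitrust market for these services, the national figure is the misleading one.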

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what actually appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration.

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may be important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Last week a group of startup investors wrote a letter to protest what they assume FCC Chairman Tom Wheeler’s proposed, revised Open Internet NPRM will say.

Bear in mind that an NPRM is a proposal, not a final rule, and its issuance starts a public comment period. Bear in mind, as well, that the proposal isn’t public yet, presumably none of the signatories to this letter has seen it, and the devil is usually in the details. That said, the letter has been getting a lot of press.

I found the letter seriously wanting, and seriously disappointing. But it’s a perfect example of what’s so wrong with this interminable debate on net neutrality.

Below I reproduce the letter in full, in quotes, with my comments interspersed. The key take-away: Neutrality (or non-discrimination) isn’t what’s at stake here. What’s at stake is zero-cost access by content providers to broadband networks. One can surely understand why content providers and those who fund them want their costs of doing business to be lower. But the rhetoric of net neutrality is mismatched with this goal. It’s no wonder they don’t just come out and say it – it’s quite a remarkable claim.

Open Internet Investors Letter

The Honorable Tom Wheeler, Chairman
Federal Communications Commission
445 12th Street, SW
Washington D.C. 20554

May 8, 2014

Dear Chairman Wheeler:

We write to express our support for a free and open Internet.

We invest in entrepreneurs, investing our own funds and those of our investors (who are individuals, pension funds, endowments, and financial institutions).  We often invest at the earliest stages, when companies include just a handful of founders with largely unproven ideas. But, without lawyers, large teams or major revenues, these small startups have had the opportunity to experiment, adapt, and grow, thanks to equal access to the global market.

“Equal” access has nothing to do with it. No startup is inherently benefitted by being “equal” to others. Maybe this is just careless drafting. But frankly, as I’ll discuss, there are good reasons to think (contra the pro-net neutrality narrative) that startups will be helped by inequality (just like contra the (totally wrong) accepted narrative, payola helps new artists). It says more than they would like about what these investors really want that they advocate “equality” despite the harm it may impose on startups (more on this later).

Presumably what “equal” really means here is “zero cost”: “As long as my startup pays nothing for access to ISPs’ subscribers, it’s fine that we’re all on equal footing.” Wheeler has stated his intent that his proposal would require any prioritization to be available to any who want it, on equivalent, commercially reasonable terms. That’s “equal,” too, so what’s to complain about? But it isn’t really inequality that’s gotten everyone so upset.

Of course, access is never really “zero cost;” start-ups wouldn’t need investors if their costs were zero. In that sense, why is equality of ISP access any more important than other forms of potential equality? Why not mandate price controls on rent? Why not mandate equal rent? A cost is a cost. What’s really going on here is that, like Netflix, these investors want to lower their costs and raise their returns as much as possible, and they want the government to do it for them.

As a result, some of the startups we have invested in have managed to become among the most admired, successful, and influential companies in the world.

No startup became successful as a result of “equality” or even zero-cost access to broadband. No doubt some of their business models were predicated on that assumption. But it didn’t cause their success.

We have made our investment decisions based on the certainty of a level playing field and of assurances against discrimination and access fees from Internet access providers.

And they would make investment decisions based on the possibility of an un-level playing field if that were the status quo. More importantly, the businesses vying for investment dollars might be different ones if they built their business models in a different legal/economic environment. So what? This says nothing about the amount of investment, the types of businesses, the quality of businesses that would arise under a different set of rules. It says only that past specific investments might not have been made.

Unless the contention is that businesses would be systematically worse under a different rule, this is irrelevant. I have seen that claim made, and it’s implicit here, of course, but I’ve seen no evidence to actually support it. Businesses thrive in unequal, cost-laden environments all the time. It costs about $4 million/30 seconds to advertise during the Super Bowl. Budweiser and PepsiCo paid multiple millions this year to do so; many of their competitors didn’t. With inequality like that, it’s a wonder Sierra Nevada and Dr. Pepper haven’t gone bankrupt.

Indeed, our investment decisions in Internet companies are dependent upon the certainty of an equal-opportunity marketplace.

Again, no they’re not. Equal opportunity is a euphemism for zero cost, or else this is simply absurd on its face. Are these investors so lacking in creativity and ability that they can invest only when there is certainty of equal opportunity? Don’t investors thrive – aren’t they most needed – in environments where arbitrage is possible, where a creative entrepreneur can come up with a risky, novel way to take advantage of differential conditions better than his competitors? Moreover, the implicit equating of “equal-opportunity marketplace” with net neutrality rules is far-fetched. Is that really all that matters?

This is a good time to make a point that is so often missed: The loudest voices for net neutrality are the biggest companies – Google, Netflix, Amazon, etc. That fact should give these investors and everyone else serious pause. Their claim rests on the idea that “equality” is needed, so big companies can’t use an Internet “fast lane” to squash them. Google is decidedly a big company. So why do the big boys want this so much?

The battle is often pitched as one of ISPs vs. (small) content providers. But content providers have far less to worry about and face far less competition from broadband providers than from big, incumbent competitors. It is often claimed that “Netflix was able to pay Comcast’s toll, but a small startup won’t have that luxury.” But Comcast won’t even notice or care about a small startup; its traffic demands will be inconsequential. Netflix can afford to pay for Internet access for precisely the same reason it came to Comcast’s attention: It’s hugely successful, and thus creates a huge amount of traffic.

Based on news reports and your own statements, we are worried that your proposed rules will not provide the necessary certainty that we need to make investment decisions and that these rules will stifle innovation in the Internet sector.

Now, there’s little doubt that legal certainty aids investment decisions. But “certainty” is not in danger here. The rules have to change because the court said so – with pretty clear certainty. And a new rule is not inherently any more or less likely to offer certainty than the previous Open Internet Order, which itself was subject to intense litigation (obviously) and would have been subject to interpretation and inconsistent enforcement (and would have allowed all kinds of paid prioritization, too!). Certainty would be good, but Wheeler’s proposed rule won’t likely do anything about the amount of certainty one way or the other.

If established companies are able to pay for better access speeds or lower latency, the Internet will no longer be a level playing field. Start-ups with applications that are advantaged by speed (such as games, video, or payment systems) will be unlikely to overcome that deficit no matter how innovative their service.

Again, it’s notable that some of the strongest advocates for net neutrality are established companies. Another letter sent out last week included signatures from a bunch of startups, but also Google, Microsoft, Facebook and Yahoo!, among others.

In truth it’s hard to see why startup investors would think this helps them. Non-neutrality offers the prospect that a startup might be able to buy priority access to overcome the inherent disadvantage of newness, and to better compete with an established company. Neutrality means that that competitive advantage is impossible, and the baseline relative advantages and disadvantages remain – which helps incumbents, not startups. With a neutral Internet, the advantages of the incumbent competitor can’t be dissipated by a startup buying a favorable leg-up in speed, and the Netflixes of the world will be more likely to continue to dominate.

Of course the claim is that incumbents will use their huge resources to gain even more advantage with prioritized access. Implicit in this must be the assumption that the advantage that could be gained by a startup buying priority offers less return for the startup than the cost imposed on it by the inherent disadvantages of reputation, brand awareness, customer base, etc. But that’s not plausible for all or even most startups. And investors exist precisely because they are able to provide funds for which there is a likelihood of a good return – so if paying for priority would help overcome inherent disadvantages, there would be money for it.

Also implicit is the claim that the benefits to incumbents (over and above their natural advantages) from paying for priority, in terms of hamstringing new entrants, will outweigh the cost. This, too, is unlikely to be true in general. They already have advantages. Sure, sometimes they might want to pay for more, but in precisely the cases where it would be worth it to do so, the new entrant would also be most benefited by doing so itself – ensuring, again, that investment funds will be available.

Of course if both incumbents and startups decide paying for priority is better, we’re back to a world of “equality,” so what’s to complain about, based on this letter? This puts into stark relief that what these investors really want is government-mandated, subsidized broadband access, not “equality.”

Now, it’s conceivable that that is the optimal state of affairs, but if it is, it isn’t for the reasons given here, nor has anyone actually demonstrated that it is the case.

Entrepreneurs will need to raise money to buy fast lane services before they have proven that consumers want their product. Investors will extract more equity from entrepreneurs to compensate for the risk.

Internet applications will not be able to afford to create a relationship with millions of consumers by making their service freely available and then build a business over time as they better understand the value consumers find in their service (which is what Facebook, Twitter, Tumblr, Pinterest, Reddit, Dropbox and virtually every other consumer Internet service did to achieve scale).

In other words: “Subsidize us. We’re worth it.” Maybe. But this is probably more revealing than intended. The Internet cost something to someone to build. (Actually, it cost broadband providers more than a trillion dollars.) This just says “we shouldn’t have to pay them for it now.” Fine, but who, then, and how do you know that forcing someone else to subsidize these startup companies will actually lead to better results? Mightn’t we get less broadband investment, such that there is little Internet available for these companies to take advantage of in the first place? If broadband consumers instead of content consumers foot the bill, is that clearly preferable, either from a social welfare perspective or even from the self-interest of these investors who, after all, ultimately rely on consumer spending to earn their return?

Moreover, why is this “build for free, then learn how to monetize over time” business model necessarily better than any other? These startup investors know better than anyone that enshrining existing business models just because they exist is the antithesis of innovation and progress. But that’s exactly what they’re saying – “the successful companies of the past did it this way, so we should get a government guarantee to preserve our ability to do it, too!”

This is the most depressing implication of this letter. These investors and others like them have been responsible for financing enormously valuable innovations. If even they can’t see the hypocrisy of these claims for net neutrality – and worse, choose to propagate it further – then we really have come to a sad place. When innovators argue passionately for stagnation, we’re in trouble.

Instead, creators will have to ask permission of an investor or corporate hierarchy before they can launch. Ideas will be vetted by committees and quirky passion projects will not get a chance. An individual in a dorm room or a design studio will not be able to experiment out loud on the Internet. The result will be greater conformity, fewer surprises, and less innovation.

This is just a little too much protest. Creators already have to ask “permission” – or are these investors just opening up their bank accounts to whoever wants their money? The ones that are able to do it on a shoestring, with money saved up from babysitting gigs, may find higher costs, and the need to do more babysitting. But again, there is nothing special about the Internet in this. Let’s mandate zero-cost office space and office supplies and developer services and design services and . . . etc. for all – then we’ll have way more “permission-less” startups. If it’s just a handout they want, they should say so, instead of pretending there is a moral or economic welfare basis for their claims.

Further, investors like us will be wary of investing in anything that access providers might consider part of their future product plans for fear they will use the same technical infrastructure to advantage their own services or use network management as an excuse to disadvantage competitive offerings.

This is crazy. For the same reasons I mentioned above, the big access provider (and big incumbent competitor, for that matter) already has huge advantages. If these investors aren’t already wary of investing in anything that Google or Comcast or Apple or… might plan to compete with, they must be terrible at their jobs.

What’s more, Wheeler’s much-reviled proposal (what we know about it, that is), to say nothing of antitrust law, clearly contemplates exactly this sort of foreclosure and addresses it. “Pure” net neutrality doesn’t add much, if anything, to the limits those laws already do or would provide.

Policing this will be almost impossible (even using a standard of “commercial reasonableness”) and access providers do not need to successfully disadvantage their competition; they just need to create a credible threat so that investors like us will be less inclined to back those companies.

You think policing the world of non-neutrality is hard – try policing neutrality. It’s not as easy as proponents make it out to be. It’s simply never been the case that all bits at all times have been treated “neutrally” on the Internet. Any version of an Open Internet Order (just like the last one, for example) will have to recognize this.

Larry Downes compiled a list of the exceptions included in the last Open Internet Order when he testified before the House Judiciary Committee on the rules in 2011. There are 16 categories of exemption, covering a wide range of fundamental components of broadband connectivity, from CDNs to free Wi-Fi at Starbucks. His testimony is a tour de force, and should be required reading for everyone involved in this debate.

But think about how the manifest advantages of these non-neutral aspects of broadband networks would be squared with “real” neutrality. On their face, if these investors are to be taken at their word, these arguments would preclude all of the Open Internet Order’s exemptions, too. And if any sort of inequality is going to be deemed ok, how accurately would regulators distinguish between “illegitimate” inequality and the acceptable kind that lets coffee shops subsidize broadband? How does the simplistic logic of net equality distinguish between, say, Netflix’s colocated servers and a startup like Uber being integrated into Google Maps? The simple answer is that it doesn’t, and the claims and arguments of this letter are woefully inadequate to the task.

We need simple, strong, enforceable rules against discrimination and access fees, not merely against blocking.

No, we don’t. Or, at least, no one has made that case. These investors want a handout; that is the only case this letter makes.

We encourage the Commission to consider all available jurisdictional tools at its disposal in ensuring a free and open Internet that rewards, not disadvantages, investment and entrepreneurship.

… But not investment in broadband, and not entrepreneurship that breaks with the business models of the past. In reality, this letter is simple rent-seeking: “We want to invest in what we know, in what’s been done before, and we don’t want you to do anything to make that any more costly for us. If that entails impairing broadband investment or imposing costs on others, so be it – we’ll still make our outsized returns, and they can write their own letter complaining about ‘inequality.’”

A final point I have to make. Although the investors don’t come right out and say it, many others have, and it’s implicit in the investors’ letter: “Content providers shouldn’t have to pay for broadband. Users already pay for the service, so making content providers pay would just let ISPs double dip.” The claim is deeply problematic.

For starters, it’s another form of the status quo mentality: “Users have always paid and content hasn’t, so we object to any deviation from that.” But it needn’t be that way. And of course models frequently coexist where different parties pay for the same or similar services. Some periodicals are paid for by readers and offer little or no advertising; others charge a subscription and offer paid ads; and still others are offered for free, funded entirely by ads. All of these models work. None is “better” than the other. There is no reason the same isn’t true for broadband and content.

Net neutrality claims that the only proper price to charge on the content side of the market is zero. (Congratulations: You’re in the same club as that cutting-edge, innovative technology, the check, which is cleared at par by government fiat – a subsidy that no doubt explains why checks have managed to last this long.) As an economic matter, that’s possible; it could be that zero is the right price. But it most certainly needn’t be, and issues revolving around Netflix’s traffic and the ability of ISPs and Netflix to handle it cost-effectively are evidence that zero may well not be the right price.

The reality is that these sorts of claims are devoid of economic logic — which is presumably why they, like the whole net neutrality “movement” generally, appeal so gratuitously to emotion rather than reason. But it doesn’t seem unreasonable to hope for more from a bunch of savvy financiers.

 

Tom Barnett (Covington & Burling) represents Expedia in, among other things, its efforts to persuade a US antitrust agency to bring a case against Google involving the alleged use of its search engine results to harm competition.  In that role, in a recent piece in Bloomberg, Barnett wrote the following things:

  • “The U.S. Justice Department stood up for consumers last month by requiring Google Inc. to submit to significant conditions on its takeover of ITA Software Inc., a company that specializes in organizing airline data.”
  • “According to the department, without the judicially monitored restrictions, Google’s control over this key asset “would have substantially lessened competition among providers of comparative flight search websites in the United States, resulting in reduced choice and less innovation for consumers.”
  • “Now Google also offers services that compete with other sites to provide specialized “vertical” search services in particular segments (such as books, videos, maps and, soon, travel) and information sought by users (such as hotel and restaurant reviews in Google Places).  So Google now has an incentive to use its control over search traffic to steer users to its own services and to foreclose the visibility of competing websites.”
  • “Search Display: Google has led users to expect that the top results it displays are those that its search algorithm indicates are most likely to be relevant to their query. This is why the vast majority of user clicks are on the top three or four results.  Google now steers users to its own pages by inserting links to its services at the top of the search results page, often without disclosing what it has done. If you search for hotels in a particular city, for example, Google frequently inserts links to its Places pages.”
  • “All of these activities by Google warrant serious antitrust scrutiny. … It’s important for consumers that antitrust enforcers thoroughly investigate Google’s activities to ensure that competition and innovation on the Internet remain vibrant. The ITA decision is a great win for consumers; even bigger issues and threats remain.”

The themes are fairly straightforward: (1) Google is a dominant search engine, and its size and share of the search market warrants concern, (2) Google is becoming vertically integrated, which also warrants concern, (3) Google uses its search engine results in manner that harms rivals through actions that “warrant serious antitrust scrutiny,” and (4) Barnett appears to applaud judicial monitoring of Google’s contracts involving one of its “key assets.”   Sigh.

The notion of firms “coming full circle” in antitrust, a la Microsoft’s journey from antitrust defendant to complainant, is nothing new.  Neither is it too surprising or noteworthy when antitrust lawyers, including very good ones like Barnett, say things when representing a client that are in tension with prior statements made when representing other clients.  By itself, that is not really worth a post.  What I think is interesting here is that the prior statements from Barnett about the appropriate scope of antitrust enforcement generally, and monopolization specifically, were made as Assistant Attorney General for the Antitrust Division — and thus, I think, are more likely to reflect Barnett’s actual views on the law, economics, and competition policy than the statements that appear in Bloomberg.  The comments also expose some shortcomings in the current debate over competition policy and the search market.

But let’s get to it.  Here is a list of statements that Barnett made in a variety of contexts while at the Antitrust Division.

  • “Mere size does not demonstrate competitive harm.”  (Section 2 of the Sherman Act Presentation, June 20, 2006)
  • “…if the government is too willing to step in as a regulator, rivals will devote their resources to legal challenges rather than business innovation. This is entirely rational from an individual rival’s perspective: seeking government help to grab a share of your competitor’s profit is likely to be low cost and low risk, whereas innovating on your own is a risky, expensive proposition. But it is entirely irrational as a matter of antitrust policy to encourage such efforts.”
    (Interoperability Between Antitrust and Intellectual Property, George Mason University School of Law Symposium, September 13, 2006)
  • “Rather, rivals should be encouraged to innovate on their own – to engage in leapfrog or Schumpeterian competition. New innovation expands the pie for rivals and consumers alike. We would do well to heed Justice Scalia’s observation in Trinko, that creating a legal avenue for such challenges can ‘distort investment’ of both the dominant and the rival firms.” (emphasis added)
    (Interoperability Between Antitrust and Intellectual Property, George Mason University School of Law Symposium, September 13, 2006)
  • “Because a Section 2 violation hurts competitors, they are often the focus of section 2 remedial efforts.  But competitor well-being, in itself, is not the purpose of our antitrust laws.  The Darwinian process of natural selection described by Judge Easterbrook and Professor Schumpeter cannot drive growth and innovation unless tigers and other denizens of the jungle are forced to survive the crucible of competition.”  (Cite).
  • “Implementing a remedy that is too broad runs the risk of distorting markets, impairing competition, and prohibiting perfectly legal and efficient conduct.” (same)
  • “Access remedies also raise efficiency and innovation concerns.  By forcing a firm to share the benefits of its investments and relieving its rivals of the incentive to develop comparable assets of their own, access remedies can reduce the competitive vitality of an industry.” (same)
  • “The extensively discussed problems with behavioral remedies need not be repeated in detail here.  Suffice it to say that agencies and courts lack the resources and expertise to run businesses in an efficient manner. … [R]emedies that require government entities to make business decisions or that require extensive monitoring or other government activity should be avoided wherever possible.”  (Cite).
  • “We need to recognize the incentive created by imposing a duty on a defendant to provide competitors access to its assets.  Such a remedy can undermine the incentive of those other competitors to develop their own assets as well as undermine the incentive for the defendant competitor to develop the assets in the first instance.  If, for example, you compel access to the single bridge across the Missouri River, you might improve competitive options in the short term but harm competition in the longer term by ending up with only one bridge as opposed to two or three.” (same)
  • “There seems to be consensus that we should prohibit unilateral conduct only where it is demonstrated through rigorous economic analysis to harm competition and thereby harm consumer welfare.” (same)

I’ll take Barnett (2006-08) over Barnett (2011) in a technical knockout.  Concerns about administrable antitrust remedies, unintended consequences of those remedies, error costs, helping consumers and restoring competition rather than merely giving a handout to rivals, and maintaining the incentive to compete and innovate are all serious issues in the Section 2 context.  Antitrust scholars from Epstein and Posner to Areeda and Hovenkamp and others have all recognized these issues — as did Barnett when he was at the DOJ (and no doubt still).  I do not fault him for the inconsistency.  But on the merits, the current claims about the role of Section 2 in altering competition in the search engine space, and the applause for judicially monitored business activities, run afoul of the well-grounded views on Section 2 and remedies that Barnett espoused while at the DOJ.

Let me end with one illustration that I think drives the point home.  When one compares Barnett’s column in Bloomberg to his speeches at DOJ, there is one difference that jumps off the page and that I think is illustrative of a real problem in the search engine antitrust debate.  Barnett’s focus in the Bloomberg piece, as counsel for Expedia, is largely harm to rivals.  Google is big.  Google has engaged in practices that might harm various Internet businesses.  The focus is not consumers, i.e. the users.  They are mentioned here and there — but in the context of Google’s practices that might “steer” users toward its own sites.  As Barnett (2006-08) well knew, and no doubt continues to know, vertical integration and vertical contracts with preferential placement of this sort can well be (and often are) pro-competitive.  This is precisely why Barnett (2006-08) counseled requiring hard proof of harm to consumers before he would recommend, much less applaud, an antitrust remedy tinkering with the way the search business is conducted and running the risk of violating the “do no harm” principle.  By way of contrast, Barnett’s speeches at the DOJ frequently made clear that the notion that the antitrust laws “protect competition, not competitors” was not just a mantra, but a serious core of sensible Section 2 enforcement.

The focus can and should remain upon consumers rather than rivals.  The economic question is whether, when and if Google uses search results to favor its own content, that conduct is efficient and pro-consumer or can plausibly cause antitrust injury.  Those leaping from “harm to rivals” to harm to consumers should proceed with caution.  Neither economic theory nor empirical evidence indicate that the leap is an easy one.  Quite the contrary, the evidence suggests these arrangements are generally pro-consumer and efficient.  On a case-by-case analysis, the facts might suggest a competitive problem in any given case.

Barnett (2006-08) has got Expedia’s antitrust lawyer dead to rights on this one.  Consumers would be better off if the antitrust agencies took the advice of the former and ignored the latter.

Tomorrow I will be attending a symposium on small business financing sponsored by the Entrepreneurial Business Law Journal at the Moritz College of Law at the Ohio State University. I’m on a panel entitled “Recessionary Impacts on Equity Capital,” which is a bit misleading–or at least a bit different from the topic I offered to speak on, which is the effect of the recession and recent financial crisis on small business financing more generally. The rest of the day includes presentations on governmental and policy responses to the crisis and the practical implications of constricted capital. A copy of the schedule and list of speakers is available. I’m not very familiar with any of the other panelists, but the luncheon address will be given by Al Martinez-Fonts, Executive Vice President, U.S. Chamber of Commerce.

I’m going to focus on a few basic points and highlight some of the myths around small businesses and small business financing that drive poor policy. My first objective is to lay out a simple framework for thinking about financing deals, or any deal for that matter. Namely, the idea that every transaction involves allocations of value, uncertainty and decision rights; and the deal itself provides structure on those allocations by specifying the incentive systems, performance measures and decision rights that address both parties’ interests. How those structures are designed determines the nature of the risk exposure and incentive conflicts that may affect the ex post value and performance of the deal.

In a sense, there is nothing new in small business financing post-crisis.  The fundamentals are the same. There is a multitude of contractual terms to address the various kinds of incentive issues and uncertainties that exist in the current market environment. To the extent there is anything truly unique about the current context, it is less about the financial market itself than about broader regulatory and economic issues. For example, much of the uncertainty affecting credit-worthiness has to do with economic and cash flow uncertainties stemming from upheavals in the regulatory landscape for small businesses, including health care. Uncertainty concerning implementation of the financial market reforms passed in July 2010 creates uncertainty for lenders. These uncertainties exacerbate the usual economic uncertainties of new and small businesses during an economic recovery period.

During the recession itself, “stimulus” spending distorted the credit-worthiness of small businesses in industries that were more directly benefited by government handouts and by the security provided to small businesses that supply large, publicly administered and guaranteed businesses (such as in the auto industry).  Thus, federal and state economic policy to “create jobs” in some sectors distorted the incentives to lend to different groups of small businesses, likely reducing employment in other sectors.

Finally, I’m going to suggest that talking about “small business” financing is a misnomer if we are truly motivated by a concern for job creation. A recent paper by John Haltiwanger, Ron Jarmin, and Javier Miranda illustrates that business size is not the key determinant of job creation in the US, as is often argued in the media and policy circles. (HT: Peter Klein at O&M) They find that it is young firms, which happen to be small, not small firms in general, that provide the job creation. Ironically, these young firms are also the ones for whom financing is most difficult, due to their nascent stage of development and the attendant uncertainty. Thus, policies directed to firms based on size alone further divert capital away from other (larger) companies that are equally likely to create jobs. Since this distortion is not costless, the policies are not welfare-neutral – they do not simply switch where jobs are created, but likely reduce welfare overall.

So now you don’t need to rush to Columbus, Ohio, to hear what I’ll have to say–unless you want to see the fireworks in person. But now you’ll know what’s going on in case there is news of more upset around the Horseshoe in Columbus.

The biggest and most important issue for the next few months won’t be immigration, the New Black Panthers, or even the war in Afghanistan. Huge tax increases are headed our way, and they raise tough questions. On the one hand, signaling we are serious about deficits is likely a good thing. But, since politicians haven’t been able to resist spending additional income (assuming more taxes bring more income), cutting spending would be a better course. Having locked ourselves into huge spending, though, maybe some tax increases are inevitable.

On the other hand, raising taxes in a recession is not a good idea. Taking money out of the economy, routing it through Washington, and then hoping politicians can spend it wisely sounds like the triumph of hope over experience.

Take me. I just calculated my family’s tax increase at a useful website from the Tax Foundation. If the president and Congress raise taxes as expected, my family will have to pay an additional $10,000 in taxes next year.

To come up with the extra $800 per month, we can cut back on some things. The recent (legal) immigrant from Mexico who cuts our grass will suffer, as will the recent (legal) immigrant from Poland who cleans our house a few times a month. We can cancel our cell phones and some cable channels, as well as take our daughter out of her art class at the community art center, but these are only a few hundred dollars per month in total. We’ll have to skip going to the movies, eat out less, and think about skipping that vacation to Disney World. What is the theory under which collecting this money in taxes and deciding in Washington how to spend it is superior to our decisions? Maybe we should ask the entrepreneurs we employ and the new arrivals they employ in turn whether they prefer to work for us or get a government handout.

Have you ever been tempted to buy a beggar a cup of coffee or a sandwich instead of giving money? If so, you have, like a young Anakin Skywalker, taken your first step to the dark side of altruism. Don’t get me wrong, I’ve been there too. The reason I offered food instead of (money for) vodka is that I wanted to “help” the beggar. From my lofty perch (that is, sober, housed, and employed), I wanted to impose my values on him. Like a father choosing broccoli instead of ice cream for his kids, I thought I knew better what was good for the beggar — what he really wanted if only his thought processes were rational.

At some level, this is sensible. If I am paying, either directly in the form of the handout or indirectly in the form of the obvious externalities from the beggar (e.g., crime, stink, etc.), then it makes sense for me to try to reduce these costs.

But the dark side of caring is the perversity of this control. Once we start thinking this way, the creep towards totalitarian nannyism is hard to resist. Once I am paying for your health insurance, I suddenly care a lot whether you get an abortion, have that gender-switching surgery you’ve always wanted, or, most ominously, eat that Big Mac for lunch instead of the salmon salad. In today’s new America, I suddenly really care how much junk food the people making less than $88,000 eat — I pay for every Dorito that crosses their lips. And, for the record, I hate this about me and about New America. (Evidence this is our future comes from the UK, where 75% of people in a recent survey supported greater government control over individuals’ food choices.)

The problems with altruism are well documented. The IMF for years tried to control the internal policies of countries that it bailed out or loaned money to. These attempts were failures, both because the experts don’t always know what they think they know and because the meddling inevitably involves backlash, power grabs, corruption, and so on. (The IMF has abandoned these policies.) This instinct was also a source of the eugenics movement. Once we think of people as cost centers instead of autonomous individuals, the cost-benefit calculations can lead to some disturbing results. German posters from the eugenics era provide a nice example.

The battles ahead for New America are likely to be just as dirty. The battle over abortion in the Stupak Incident is just a preview of what is to come as every interest group wanting to feed at the trough, remake America, press for rights it holds dear, and so on, heads for Washington to convince our dear leaders why the rest of the country should or should not pay for its pet project. Whatever the negative impact of my acting as a control freak on my neighbor the beggar, it will be dwarfed by the impact of the nation acting as control freak. At the individual level, the control we try to exercise might actually be a good thing. But multiply it by 300 million, centralize it in Washington, and unleash the forces of public choice on it, and watch the beginning of the end of our freedom.


Mr. Obama, go to "China"

Todd Henderson —  22 February 2010

The president revealed his last-ditch plan to reform our healthcare system today. (Funny that the plan is revealed before the “bipartisan” meeting about health care being trumpeted for political reasons.) One thing I was hoping to see in the proposal is missing — an increase in the eligibility age for Medicare (and, while we are at it, Social Security). Although I would prefer to see us do away with these entitlement programs, if we have them, why not make them solvent and sensible? When these programs were passed, people lived a lot shorter lives than they do today, and a simple indexing to life expectancy would go a long way toward reducing our national fiscal crisis. Not only would this reduce our government-funded health care expenses, it would encourage 65-year-olds to stay in the work force. Take my Dad. He retired to a life of reading history books when he hit that magic number, even though he was still energetic, capable, and earning a good living at the time. Our perverse entitlement programs encouraged him to do this, to accept government handouts even though he doesn’t need them, and mandated that he go onto a government-run insurance program, even though he could easily afford his own health care bills or insurance. This makes absolutely no sense. Any system that takes people like this out of the work force and bestows upon them welfare without regard to need is not just stupid, it is immoral.

Faced with a similar set of existing incentives in the 1990s, President Clinton and a Republican Congress ended welfare as we knew it. No longer would we pay people not to work, but instead we would make government handouts instrumental toward a productive life. President Clinton had the cachet and credibility with the opponents of welfare reform to get this obviously beneficial change enacted, just as President Nixon did with foreign policy hawks when he went to China. Since Democrats largely stand in the way of entitlement reform, the same must be true of President Obama. President Bush wasn’t able to reform Social Security in part because his proposal to let people invest their own money for retirement sounded to some like a plan to make Bush’s banker friends rich at the expense of Joe the Plumber. But Obama could do this. If he proposed to raise the eligibility age for Medicare (and the other entitlement programs) gradually but dramatically, perhaps in return for some Republican concessions on insurance reform and subsidies for the poor to buy insurance, there might be a deal. The Republicans might even be able to get some tort reform as part of the deal — again, who better than a Democrat lawyer to stand up to the trial lawyers?

In his best moments, the president has seemed to play against type and stand up for good ideas that are not favored by his core special-interests constituency. There have been, for instance, some nods for school choice and performance pay that have irritated the teachers’ unions. He has also continued our assault on Muslim terrorists. We need more of this from the president. (And, he needs more of it if he hopes to be reelected. Just ask President Clinton.) In short, the best hope for reform is compromise, and compromise in ways in which Mr. Obama has a comparative advantage. Anyone could ram through a one-sided agenda; it takes real leadership to go to China. Book your ticket, Mr. Obama. I hear the Great Wall shouldn’t be missed.

The WSJ describes how Chairman Bernanke is going on the offensive in advance of his confirmation hearings, using them as an opportunity to oppose those elements of the Dodd Bill that would strip the Fed of some of its powers.  However you feel about the policy debate, you’ve got to give him some credit for using his confirmation hearings to defend his agency; the safer course would be to secure his helmet, dive into a foxhole, and wait until post-confirmation to enter the fray.

One of Bernanke’s concerns is that Ron Paul’s new bill to audit the Fed might compromise its ability to manage monetary policy.  I am one of those people conditioned to worry about the Federal Reserve’s independence, but Ron Paul’s bill to require the Comptroller General to audit the Fed’s books doesn’t seem to me to be all that big a deal.  The GAO is one of the most non-partisan, capable organizations inside the Beltway, and the Fed’s deliberations would stay confidential despite Ron Paul’s bill.  The GAO would, however, get a chance to examine transactions taking place at the discount window that have nothing to do with monetary policy.

The Fed is in part to blame for this controversy: during the crisis it used the discount window to lend to non-bank companies in an unprecedented manner that has since put its balance sheet under severe stress.  The discount window is supposed to be a vehicle for monetary policy, but the Fed was the one that chose to blur that distinction by using the discount window as a vehicle for bailouts.  Bernanke is concerned in part that if institutions thought their loans from the Fed would become public knowledge, they would think twice about taking them.  That doesn’t sound like a problem to me; I want firms to think long and hard before they seek handouts from the discount window when they are in trouble.

Some also want to give the Fed resolution authority over systemically risky institutions, an equally bad idea.  As long as the Fed maintains supervisory authority over banks or other institutions, and the power to lend to them, it should not have the power to decide when to liquidate them.  Otherwise, the decision facing the Fed will be either to admit it failed as a regulator and place the institution into receivership, or to lend more to the institution and hope to vindicate its record.  Call it a regulatory reverse moral hazard.

The Dodd Bill also seeks to strip the Fed of its supervisory powers over banks.  I am predisposed to favor regulatory competition, both horizontally among federal agencies and vertically between states and the feds.  But it is not clear to me why the Fed has a competitive advantage as a supervisory regulator.  Activities not related to monetary policy, like banking supervision or – even worse – consumer protection regulation, threaten to distract the Fed from its primary mission.  Managing interest rates to tame inflation by buying and selling Treasury bonds is a herculean task, and giving the Fed too many extracurricular activities threatens the very things we like about the Fed.

When I was growing up my father told me that a man only needs three things in life: a good bartender, a good priest, and a good tailor.  I think Dad intended that different people wear those three hats; otherwise it just doesn’t work.  But wearing conflicting hats is the unfortunate mission we have given to the Fed at this point.  Some regulatory re-gearing, and an enhanced audit capability for non-monetary-policy activities at the discount window, could be what we need to get the Fed on the right track.

Stanford economist Michael Boskin thinks so. He set forth his argument in the WSJ. After observing that President Obama’s budget “more than doubles the national debt held by the public, adding more to the debt than all previous presidents — from George Washington to George W. Bush — combined” and “would raise taxes to historically high levels” relative to GDP, Boskin states:

The pervasive government subsidies and mandates — in health, pharmaceuticals, energy and the like — will do a poor job of picking winners and losers (ask the Japanese or Europeans) and will be difficult to unwind as recipients lobby for continuation and expansion. Expanding the scale and scope of government largess means that more and more of our best entrepreneurs, managers and workers will spend their time and talent chasing handouts subject to bureaucratic diktats, not the marketplace needs and wants of consumers. …

New and expanded refundable tax credits would raise the fraction of taxpayers paying no income taxes to almost 50% from 38%. This is potentially the most pernicious feature of the president’s budget, because it would cement a permanent voting majority with no stake in controlling the cost of general government.

From the poorly designed stimulus bill and vague new financial rescue plan, to the enormous expansion of government spending, taxes and debt somehow permanently strengthening economic growth, the assumptions underlying the president’s economic program seem bereft of rigorous analysis and a careful reading of history.

Unfortunately, our history suggests new government programs, however noble the intent, more often wind up delivering less, more slowly, at far higher cost than projected, with potentially damaging unintended consequences. The most recent case, of course, was the government’s meddling in the housing market to bring home ownership to low-income families, which became a prime cause of the current economic and financial disaster.

On the growth effects of a large expansion of government, the European social welfare states present a window on our potential future: standards of living permanently 30% lower than ours. Rounding off perceived rough edges of our economic system may well be called for, but a major, perhaps irreversible, step toward a European-style social welfare state with its concomitant long-run economic stagnation is not.

There are some pretty scary devils in the details of this Detroit bailout legislation. This WSJ article provides some specifics.

Under the terms of the draft legislation, “the government would receive warrants for stock equivalent to at least 20% of the loans any company receives.” Let’s put that in perspective. General Motors is seeking around $10 billion in short-term loans, so the legislation would give the government the option to buy a $2 billion stake in GM. GM’s market capitalization — the market value of its outstanding stock — is currently around $3 billion. If the government were to exercise its option today, it would pay GM $2 billion (thereby enhancing GM’s value by that amount) and would receive $2 billion worth of newly issued stock in a (now) $5 billion company. Thus, the government would end up owning 40% of GM.
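For readers who want the dilution arithmetic laid out explicitly, here is a minimal sketch of the calculation walked through above. It assumes, as the post does, a $10 billion loan, warrants covering stock worth 20% of the loans, a $3 billion pre-exercise market capitalization, and exercise at full value today; the figures are the post’s approximations, not market data.

```python
# A rough sketch of the warrant/dilution arithmetic described above.
# Assumptions (taken from the post, not from market data): GM seeks ~$10B
# in loans, the warrants cover stock worth 20% of the loans, GM's market
# cap is ~$3B, and the warrants are exercised today at full value.

loan_amount = 10e9              # short-term loans GM is seeking
warrant_fraction = 0.20         # warrants for stock equal to 20% of the loans
pre_exercise_market_cap = 3e9   # GM's current market capitalization

exercise_payment = warrant_fraction * loan_amount                  # $2B paid to GM
post_exercise_value = pre_exercise_market_cap + exercise_payment   # $5B firm
government_stake = exercise_payment / post_exercise_value          # new shares / total value

print(f"Government ownership after exercise: {government_stake:.0%}")  # prints 40%
```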

A 40% holder of a corporation’s voting securities usually has effective control of the corporation. Indeed, the American Law Institute’s Principles of Corporate Governance presume that a holder of at least 25% of voting securities is a controlling shareholder. While the draft bailout legislation doesn’t specify whether the stock taken by the government will be voting or non-voting, it’s likely that the warrants provided in the legislation will give the feds a tremendous amount of control over the automakers.

Besides the control that comes with equity, the draft bailout legislation provides for the feds to take direct day-to-day control of the companies via an “auto czar,” who would “act as a kind of trustee with authority to bring together labor, management, creditors and parts suppliers to negotiate a restructuring plan” and would have authority “to review any transaction or contract valued at more than $25 million.” (UPDATE: This figure has apparently been raised to $100 million in the current draft of the legislation.)

The bailout legislation also includes some specific dictates. While the loans are outstanding, recipients may not pay dividends to stockholders or bonuses to senior executives, and they’re barred from participating in legal challenges to state laws designed to impose limits on greenhouse-gas emissions. (UPDATE: The bar on legal challenges may be lifted.) In addition, they must “analyze whether excess production capacity could be used to make trains and buses for public transit authorities.”

A few comments on this Faustian bargain…er, bailout deal:

The equity stake. Anyone who believes that giving the feds a large equity interest in the automakers will improve the business lot of those companies is on glue. Politicians are experts at winning popularity contests. They are not experts at creating shareholder value. If they have an equity stake that gives them a high degree of control over the automakers, they’ll push those companies to do popular things, regardless of the bottom line effect.

If anyone doubts this prediction, I’d point him or her to Fannie Mae and Freddie Mac. Consider these remarks from bailout negotiator Barney Frank (from a 2003 hearing on the financial soundness of Fannie and Freddie):

[I]n my view, the two government sponsored enterprises we are talking about here, Fannie Mae and Freddie Mac, are not in a crisis. … [W]e have got a system that I think has worked very well to help housing. The high cost of housing is one of the great social bombs of this country. I would rank it second to the inadequacy of our health delivery system as a problem that afflicts many, many Americans. We have gotten recent reports about the difficulty here. Fannie Mae and Freddie Mac have played a very useful role in helping make housing more affordable, both in general through leveraging the mortgage market, and in particular, they have a mission that this Congress has given them in return for some of the arrangements which are of some benefit to them to focus on affordable housing, and that is what I am concerned about here. I believe that we, as the Federal Government, have probably done too little rather than too much to push them to meet the goals of affordable housing and to set reasonable goals.

As I explained here, Congress used various carrots to push “private” Fannie and Freddie to ignore business considerations in favor of pursuing a social agenda. In doing so, it created a guaranteed buyer for unwise subprime mortgages and Alt-A (or “liars'”) loans, thereby encouraging private lenders to make them, knowing they could quickly sell them to Fan/Fred, a “greater fool.” The result has been disastrous. Do we really want to turn GM and Ford into Genny Mae and Fordy Mac? I think not.

The auto czar. Today’s WSJ describes this person’s role as follows: “He or she would bring together labor, management, creditors and parts suppliers to negotiate a long-term restructuring plan. The czar could pull back the loans if the companies don’t negotiate in good faith. And if a company and its stakeholders can’t agree on a plan, the czar would be required to recommend one, including the possibility of a Chapter 11 bankruptcy reorganization.”

This looks a heckuva lot like the judge in a bankruptcy reorganization, no? Oh wait. There’s one crucial difference. In bankruptcy, labor contracts would be on the table as a matter of right. Under this plan, the automakers would presumably have to get the unions to agree to open up the contracts for renegotiation, and it’s the automakers, not the unions, that get smacked if they fail to renegotiate. The automakers would have far less ability to procure the sorts of concessions they need to bring their costs down to the level of competing foreign car manufacturers who operate in right-to-work states.

Of course, there are a couple of other arguments in favor of “bankruptcy lite” under an auto czar: (1) it’ll be hard to get debtor-in-possession loans in the current credit crunch, and (2) people won’t buy cars from a car company in bankruptcy. I don’t find these arguments all that persuasive. With respect to the first, why not have the government withhold its bailout until it’s clear that debtor-in-possession loans are impossible to attain? With respect to the second, are consumers really less likely to buy cars from companies in bankruptcy reorganization–doing the housecleaning that will return them to viability–than to buy from companies that keep running to Congress, complaining about their dire straits and begging for a handout lest they fail? C’mon.

The specific dictates. Now here’s where I get a little cynical. If an automaker has a good faith, legally plausible argument that the Clean Air Act (or other federal statute) bars states from adopting certain emission standards, and the automaker’s business (and thus its shareholders) will be benefited by abrogation of those state standards, then shouldn’t the automaker be permitted to ensure that the states follow the law that Congress passed? If Congress wants the state standards implemented, then it should amend the Clean Air Act to ensure that that outcome is achieved. It can certainly do so if it wants, but it’ll have to take a little political heat. It would rather have a free lunch.

More generally, if Congress believes that there are significant externalities created by fuel consumption and that we should restructure the automotive industry to reduce fuel demands, then it should grow a pair and do the honest thing: tax fuel consumption at a level reflecting the external costs, and let industry adjust. Congress doesn’t want to do that, of course, because it wants to mask the true cost of its social agenda. (By the way, this sort of effort to get a political free lunch is exactly what damned Fannie and Freddie. Congress wanted cheap housing, but it didn’t want to pay for subsidies, so it pushed Fannie and Freddie to make disastrous business decisions.)

The bailout legislation’s ban on dividends and bonuses is what we might call a legal Spencer Pratt — it looks good at first glance, but it’s ultimately a dumb and pointless annoyance. The dividend ban greatly hampers Detroit’s ability to raise private capital. It will be pretty tough to raise money by selling stock that pays no dividend. The bonus ban prevents the firms from adopting performance-based incentive compensation, precisely the sort of compensation scheme Detroit may need to attract the best executive talent and to encourage its current executives to do the hard work necessary to turn the companies around.