Recently, Commissioner Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB), et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if their combined share of the audience or of advertising is below the level that could conceivably give rise to any inference of market power.
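To see just how mechanical the screen is, here is a minimal sketch of the test as described above (Python purely for illustration; the actual rule’s attribution and waiver details are omitted):

    def eight_voices_allows_combination(independent_stations: int) -> bool:
        """The FCC's 'eight voices' screen, reduced to its mechanics.

        A combination passes only if at least eight independently owned
        stations would remain in the local market. Audience and
        advertising shares never enter the calculation.
        """
        return independent_stations >= 8

    # A combination in a market with seven independent voices fails,
    # even if the merging stations' combined shares are trivial:
    assert not eight_voices_allows_combination(7)
    assert eight_voices_allows_combination(8)

Nothing in the test asks about market power; it simply counts heads.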

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8 to 7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Merger Guidelines, which look at competitive effects rather than just counting competitors.

Not only did the FCC fail to analyze the marketplace to understand how much competition there is between local broadcasters, cable, and online video, but, on top of that, the FCC applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri to use $3.5 million in cost savings to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with duopolies have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if these arrangements harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where such programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Order is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.

And:

[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.

Also:

The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently, in Judge Silberman’s concurrence/dissent in Verizon v. FCC, the challenge to the FCC’s 2010 Open Internet Order:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be interesting to see what the DC Circuit does with these arguments given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in that case, as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean)
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Last year, Microsoft’s new CEO, Satya Nadella, seemed to break with the company’s longstanding “complain instead of compete” strategy to acknowledge that:

We’re going to innovate with a challenger mindset…. We’re not coming at this as some incumbent.

Among the first items on his agenda? Treating competing platforms like opportunities for innovation and expansion rather than obstacles to be torn down by any means possible:

We are absolutely committed to making our applications run what most people describe as cross platform…. There is no holding back of anything.

Earlier this week, at its Build Developer Conference, Microsoft announced its most significant initiative yet to bring about this reality: code built into its Windows 10 OS that will enable Android and iOS developers to port apps into the Windows ecosystem more easily.

To make this possible… Windows phones “will include an Android subsystem” meant to play nice with the Java and C++ code developers have already crafted to run on a rival’s operating system…. iOS developers can compile their Objective C code right from Microsoft’s Visual Studio, and turn it into a full-fledged Windows 10 app.

Microsoft also announced that its new browser, rebranded as “Edge,” will run Chrome and Firefox extensions, and that its Office suite would enable a range of third-party services to integrate with Office on Windows, iOS, Android and Mac.

Consumers, developers and Microsoft itself should all benefit from the increased competition that these moves are certain to facilitate.

Most obviously, more consumers may be willing to switch to phones and tablets with the Windows 10 operating system if they can continue to enjoy the apps and extensions they’ve come to rely on when using Google and Apple products. As one commenter said of the move:

I left Windows phone due to the lack of apps. I love the OS though, so if this means all my favorite apps will be on the platform I’ll jump back onto the WP bandwagon in a heartbeat.

And developers should invest more in development when they can expect additional revenue from yet another platform running their apps and extensions, with minimal additional development required.

It’s win-win-win. Except perhaps for Microsoft’s lingering regulatory strategy to hobble Google.

That strategy is built primarily on antitrust claims, most recently rooted in arguments that consumers, developers and competitors alike are harmed by Google’s conduct around Android, which, it is alleged, makes it difficult for OS makers (like Cyanogen) and app developers (like Microsoft Bing) to compete.

But Microsoft’s interoperability announcements (along with a host of other rapidly evolving market characteristics) actually serve to undermine the antitrust arguments that Microsoft, through groups like FairSearch and ICOMP, has largely been responsible for pushing in the EU against Google/Android.

The reality is that, with innovations like the one Microsoft announced this week, Microsoft, Google and Apple (and Samsung, Nokia, Tizen, Cyanogen…) are competing more vigorously on several fronts. Such competition is evidence of a vibrant marketplace that is simply not in need of antitrust intervention.

The supreme irony in this is that such a move represents a (further) nail in the coffin of the supposed “applications barrier to entry” that was central to the US DOJ’s antitrust suit against Microsoft and that factors into the contemporary Android antitrust arguments against Google.

Frankly, the argument was never very convincing. Absent unjustified and anticompetitive efforts to prop up such a barrier, the “applications barrier to entry” is just a synonym for “big.” Admittedly, the DC Court of Appeals in Microsoft was careful — far more careful than the district court — to locate specific, narrow conduct beyond the mere existence of the alleged barrier that it believed amounted to anticompetitive monopoly maintenance. But central to the imposition of liability was the finding that some of Microsoft’s conduct deterred application developers from effectively accessing other platforms, without procompetitive justification.

With the implementation of initiatives like the one Microsoft has now undertaken in Windows 10, however, it appears that such concerns regarding Google and mobile app developers are unsupportable.

Of greatest significance to the current Android-related accusations against Google, the appeals court in Microsoft also reversed the district court’s finding of liability based on tying, noting in particular that:

If OS vendors without market power also sell their software bundled with a browser, the natural inference is that sale of the items as a bundle serves consumer demand and that unbundled sale would not.

Of course this is exactly what Microsoft Windows Phone (which decidedly does not have market power) does, suggesting that the bundling of mobile OS’s with proprietary apps is procompetitive.

Similarly, in reviewing the eventual consent decree in Microsoft, the appeals court upheld the conditions that allowed the integration of OS and browser code, and rejected the plaintiff’s assertion that a prohibition on such technological commingling was required by law.

The appeals court praised the district court’s recognition that an appropriate remedy “must place paramount significance upon addressing the exclusionary effect of the commingling, rather than the mere conduct which gives rise to the effect,” as well as the district court’s acknowledgement that “it is not a proper task for the Court to undertake to redesign products.”  Said the appeals court, “addressing the applications barrier to entry in a manner likely to harm consumers is not self-evidently an appropriate way to remedy an antitrust violation.”

Today, claims that the integration of Google Mobile Services (GMS) into Google’s version of the Android OS is anticompetitive are misplaced for the same reason:

But making Android competitive with its tightly controlled competitors [e.g., Apple iOS and Windows Phone] requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

In fact, some commenters have even suggested that, by effectively making the OS more “open,” Microsoft’s new Windows 10 initiative might undermine the Windows experience in exactly this fashion:

As a Windows Phone developer, I think this could easily turn into a horrible idea…. [I]t might break the whole Windows user experience Microsoft has been building in the past few years. Modern UI design is a different approach from both Android and iOS. We risk having a very unhomogenic [sic] store with lots of apps using different design patterns, and Modern UI is in my opinion, one of the strongest points of Windows Phone.

But just because Microsoft may be willing to take this risk doesn’t mean that any sensible conception of competition law and economics should require Google (or anyone else) to do so, as well.

Most significantly, Microsoft’s recent announcement is further evidence that both technological and contractual innovations can (potentially — the initiative is too new to know its effect) transform competition, undermine static market definitions and weaken theories of anticompetitive harm.

When apps and their functionality are routinely built into some OS’s or set as defaults; when mobile apps are also available for the desktop and are seamlessly integrated to permit identical functions to be performed on multiple platforms; and when new form factors like the Apple MacBook Air and the Microsoft Surface blur the lines between mobile and desktop, traditional, static anticompetitive theories are out the window (no pun intended).

Of course, it’s always been possible for new entrants to overcome network effects and scale impediments by a range of means. Microsoft itself has in the past offered to pay app developers to write for its mobile platform. Similarly, it offers inducements to attract users to its Bing search engine and it has devised several creative mechanisms to overcome its claimed scale inferiority in search.

A further irony (and market complication) is that now some of these apps — the ones with network effects of their own — threaten in turn to challenge the reigning mobile operating systems, exactly as Netscape was purported to threaten Microsoft’s OS (and lead to its anticompetitive conduct) back in the day. Facebook, for example, now offers not only its core social media function, but also search, messaging, video calls, mobile payments, photo editing and sharing, and other functionality that compete with many of the core functions built into mobile OS’s.

But the desire by apps like Facebook to expand their networks by being on multiple platforms, and the desire by these platforms to offer popular apps in order to attract users, ensure that Facebook is ubiquitous, even without any antitrust intervention. As Timothy Bresnahan, Joe Orsini and Pai-Ling Yin demonstrate:

(1) The distribution of app attractiveness to consumers is skewed, with a small minority of apps drawing the vast majority of consumer demand. (2) Apps which are highly demanded on one platform tend also to be highly demanded on the other platform. (3) These highly demanded apps have a strong tendency to multihome, writing for both platforms. As a result, the presence or absence of apps offers little reason for consumers to choose a platform. A consumer can choose either platform and have access to the most attractive apps.

Of course, even before Microsoft’s announcement, cross-platform app development was common, and third-party platforms like Xamarin facilitated cross-platform development. As Daniel O’Connor noted last year:

Even if one ecosystem has a majority of the market share, software developers will release versions for different operating systems if it is cheap/easy enough to do so…. As [Torsten] Körber documents [here], building mobile applications is much easier and cheaper than building PC software. Therefore, it is more common for programmers to write programs for multiple OSes…. 73 percent of apps developers design apps for at least two different mobile OSes, while 62 percent support 3 or more.

Whether Microsoft’s interoperability efforts prove to be “perfect” or not (and some commenters are skeptical), they seem destined to at least further decrease the cost of cross-platform development, thus reducing any “applications barrier to entry” that might impede Microsoft’s ability to compete with its much larger rivals.

Moreover, one of the most interesting things about the announcement is that it will enable Android and iOS apps to run not only on Windows phones, but also on Windows computers. Some 1.3 billion PCs run Windows. Forget Windows’ tiny share of mobile phone OS’s; that massive potential PC market (of which Microsoft still has 91 percent) presents an enormous ready-made market for mobile app developers that won’t be ignored.

It also points up the increasing absurdity of compartmentalizing these markets for antitrust purposes. As the relevant distinctions between mobile and desktop markets break down, the idea of Google (or any other company) “leveraging its dominance” in one market to monopolize a “neighboring” or “related” market is increasingly unsustainable. As I wrote earlier this week:

Mobile and social media have transformed search, too…. This revolution has migrated to the computer, which has itself become “app-ified.” Now there are desktop apps and browser extensions that take users directly to Google competitors such as Kayak, eBay and Amazon, or that pull and present information from these sites.

In the end, intentionally or not, Microsoft is (again) undermining its own case. And it is doing so by innovating and competing — those Schumpeterian concepts that were always destined to undermine antitrust cases in the high-tech sector.

If we’re lucky, Microsoft’s new initiatives are the leading edge of a sea change for Microsoft — a different and welcome mindset built on competing in the marketplace rather than at regulators’ doors.

Last week, the FTC announced its complaint and consent decree with Nomi Technologies for failing to allow consumers to opt out of cell phone tracking while shopping in retail stores. Whatever one thinks about Nomi itself, the FTC’s enforcement action represents another step in the dubious application of its enforcement authority against deceptive statements.

In response, Geoffrey Manne, Ben Sperry, and Berin Szoka have written a new ICLE White Paper, titled In the Matter of Nomi Technologies, Inc.: The Dark Side of the FTC’s Latest Feel-Good Case.

Nomi Technologies offers retailers an innovative way to observe how customers move through their stores, how often they return, what products they browse and for how long (among other things) by tracking the Wi-Fi addresses broadcast by customers’ mobile phones. This allows stores to do what websites do all the time: tweak their configuration, pricing, purchasing and the like in response to real-time analytics — instead of just eyeballing what works. Nomi anonymized the data it collected so that retailers couldn’t track specific individuals. Recognizing that some customers might still object, even to “anonymized” tracking, Nomi allowed anyone to opt out of all Nomi tracking on its website.
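For a concrete sense of the mechanism, here is a minimal sketch of how this sort of Wi-Fi tracking can be anonymized. The salted one-way hash below (and the salt, function name, and all other details) are illustrative assumptions, not a description of Nomi’s actual pipeline:

    import hashlib

    # Illustrative only: one way a tracking vendor might anonymize
    # MAC addresses. Nomi's actual method is not described here.
    SALT = b"per-deployment-secret"  # hypothetical salt

    def anonymize_mac(mac: str) -> str:
        """One-way hash of a phone's broadcast Wi-Fi MAC address.

        The retailer sees only the digest, so it can count repeat
        visits (the same device always yields the same digest)
        without ever holding the raw hardware address.
        """
        normalized = mac.lower().replace("-", ":").encode("utf-8")
        return hashlib.sha256(SALT + normalized).hexdigest()

    # Two sightings of the same phone collapse to one identifier:
    assert anonymize_mac("AA:BB:CC:DD:EE:FF") == anonymize_mac("aa:bb:cc:dd:ee:ff")

Because the hash is one-way, the digest supports the store-level analytics described above without identifying any particular customer.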

The FTC, though, seized upon a promise made within Nomi’s privacy policy to provide an additional, in-store opt-out and argued that Nomi’s failure to make good on this promise — and/or to notify customers of which stores used the technology — made its privacy policy deceptive. Commissioner Wright dissented, noting that the majority failed to consider evidence showing that the promise was not material; that is, the inaccurate statement was not important enough to actually affect consumers’ behavior, because they could opt out on the website anyway. Both Commissioner Wright’s and Commissioner Ohlhausen’s dissents argued that the FTC majority’s enforcement decision in Nomi amounted to prosecutorial overreach, imposing an overly stringent standard of review without any actual indication of consumer harm.

The FTC’s deception authority is supposed to provide the agency with the authority to remedy consumer harms not effectively handled by common law torts and contracts — but it’s not a blank check. The 1983 Deception Policy Statement requires the FTC to demonstrate:

  1. There is a representation, omission or practice that is likely to mislead the consumer;
  2. A consumer’s interpretation of the representation, omission, or practice is considered reasonable under the circumstances; and
  3. The misleading representation, omission, or practice is material (meaning the inaccurate statement was important enough to actually affect consumers’ behavior).

Under the DPS, certain types of claims are treated as presumptively material, although the FTC is always supposed to “consider relevant and competent evidence offered to rebut presumptions of materiality.” The Nomi majority failed to do exactly that in its analysis of the company’s claims, as Commissioner Wright noted in his dissent:

the Commission failed to discharge its commitment to duly consider relevant and competent evidence that squarely rebuts the presumption that Nomi’s failure to implement an additional, retail-level opt out was material to consumers. In other words, the Commission neglects to take into account evidence demonstrating consumers would not “have chosen differently” but for the allegedly deceptive representation.

As we discuss in detail in the white paper, we believe that the Commission committed several additional legal errors in its application of the Deception Policy Statement in Nomi, over and above its failure to adequately weigh exculpatory evidence. Exceeding the legal constraints of the DPS isn’t just a legal problem: in this case, it’s led the FTC to bring an enforcement action that will likely have the very opposite of its intended result, discouraging rather than encouraging further disclosure.

Moreover, as we write in the white paper:

Nomi is the latest in a long string of recent cases in which the FTC has pushed back against both legislative and self-imposed constraints on its discretion. By small increments (unadjudicated consent decrees), but consistently and with apparent purpose, the FTC seems to be reverting to the sweeping conception of its power to police deception and unfairness that led the FTC to a titanic clash with Congress back in 1980.

The Nomi case presents yet another example of the need for FTC process reforms. Those reforms could ensure the FTC focuses on cases that actually make consumers better off. But given the FTC majority’s unwavering dedication to maximizing its discretion, such reforms will likely have to come from Congress.

Find the full white paper here.

Last week the International Center for Law & Economics, joined by TechFreedom, filed comments with the Federal Aviation Administration (FAA) in its Operation and Certification of Small Unmanned Aircraft Systems (“UAS” — i.e., drones) proceeding to establish rules for the operation of small drones in the National Airspace System.

We believe that the FAA has failed to appropriately weigh the costs and benefits, as well as the First Amendment implications, of its proposed rules.

The FAA’s proposed drone rules fail to meet (or even undertake) adequate cost/benefit analysis

FAA regulations are subject to Executive Order 12866, which, among other things, requires that agencies:

  • “consider incentives for innovation”;
  • “propose or adopt a regulation only upon a reasoned determination that the benefits of the intended regulation justify its costs”;
  • “base [their] decisions on the best reasonably obtainable scientific, technical, economic, and other information”; and
  • “tailor [their] regulations to impose the least burden on society.”

The FAA’s proposed drone rules fail to meet these requirements.

An important, and fundamental, problem is that the proposed rules often seem to import “scientific, technical, economic, and other information” regarding traditional manned aircraft, rather than such knowledge specifically applicable to drones and their uses — what FTC Commissioner Maureen Ohlhausen has dubbed “The Procrustean Problem with Prescriptive Regulation.”

As such, not only do the rules often not make sense as a practical matter, they also seek to simply adapt existing standards, rules and understandings promulgated for manned aircraft to regulate drones — insufficiently tailoring the rules to “impose the least burden on society.”

In some cases the rules would effectively ban obviously valuable uses outright, disregarding the rules’ effect on innovation (to say nothing of their effect on current uses of drones) without adequately defending such prohibitions as necessary to protect public safety.

Importantly, the proposed rules would effectively prohibit the use of commercial drones for long-distance services (like package delivery and scouting large agricultural plots) and for uses in populated areas — undermining what may well be drones’ most economically valuable uses.

As our comments note:

By prohibiting UAS operation over people who are not directly involved in the drone’s operation, the rules dramatically limit the geographic scope in which UAS may operate, essentially limiting commercial drone operations to unpopulated or extremely sparsely populated areas. While that may be sufficient for important agricultural and forestry uses, for example, it effectively precludes all possible uses in more urban areas, including journalism, broadcasting, surveying, package delivery and the like. Even in nonurban areas, such a restriction imposes potentially insurmountable costs.

Mandating that operators not fly over other individuals not involved in the UAS operation is, in fact, the nail in the coffin of drone deliveries, an industry that is likely to offer a significant fraction of this technology’s potential economic benefit. Imposing such a blanket ban thus improperly ignores the important “incentives for innovation” suggested by Executive Order 12866 without apparent corresponding benefit.

The FAA’s proposed drone rules fail under First Amendment scrutiny

The FAA’s failure to tailor the rules according to an appropriate analysis of their costs and benefits also causes them to violate the First Amendment. Without proper tailoring based on the unique technological characteristics of drones and a careful assessment of their likely uses, the rules are considerably more broad than the Supreme Court’s “time, place and manner” standard would allow.

Several of the rules constitute a de facto ban on most — indeed, nearly all — of the potential uses of drones that most clearly involve the collection of information and/or the expression of speech protected by the First Amendment. As we note in our comments:

While the FAA’s proposed rules appear to be content-neutral, and will thus avoid the most-exacting Constitutional scrutiny, the FAA will nevertheless have a difficult time demonstrating that some of them are narrowly drawn and adequately tailored time, place, and manner restrictions.

Indeed, many of the rules likely amount to a prior restraint on protected commercial and non-commercial activity, both for obvious existing applications like news gathering and for currently unanticipated future uses.

Our friends Eli Dourado, Adam Thierer and Ryan Hagemann at Mercatus also filed comments in the proceeding, raising similar concerns:

As far as possible, we advocate an environment of “permissionless innovation” to reap the greatest benefit from our airspace. The FAA’s rules do not foster this environment. In addition, we believe the FAA has fallen short of its obligations under Executive Order 12866 to provide thorough benefit-cost analysis.

The full Mercatus comments, available here, are also recommended reading.

Read the full ICLE/TechFreedom comments here.

Earlier this week Senators Orrin Hatch and Ron Wyden and Representative Paul Ryan introduced bipartisan, bicameral legislation, the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 (otherwise known as Trade Promotion Authority or “fast track” negotiating authority). The bill would enable the Administration to negotiate free trade agreements subject to appropriate Congressional review.

Nothing bridges partisan divides like free trade.

Top presidential economic advisors from both parties support TPA. And the legislation was greeted with enthusiastic support from the business community. Indeed, a letter supporting the bill was signed by 269 of the country’s largest and most significant companies, including Apple, General Electric, Intel, and Microsoft.

Among other things, the legislation includes language calling on trading partners to respect and protect intellectual property. That language in particular was (not surprisingly) widely cheered in a letter to Congress signed by a coalition of sixteen technology, content, manufacturing and pharmaceutical trade associations, representing industries accounting for (according to the letter) “approximately 35 percent of U.S. GDP, more than one quarter of U.S. jobs, and 60 percent of U.S. exports.”

Strong IP protections also enjoy bipartisan support in much of the broader policy community. Indeed, ICLE recently joined sixty-seven think tanks, scholars, advocacy groups and stakeholders on a letter to Congress expressing support for strong IP protections, including in free trade agreements.

Despite this overwhelming support for the bill, the Internet Association (a trade association representing 34 Internet companies including giants like Google and Amazon, but mostly smaller companies like Coinbase and OkCupid) expressed concern with the intellectual property language in TPA legislation, asserting that “[i]t fails to adopt a balanced approach, including the recognition that limitations and exceptions in copyright law are necessary to promote the success of Internet platforms both at home and abroad.”

But the proposed TPA bill does recognize “limitations and exceptions in copyright law,” as the Internet Association is presumably well aware. Among other things, the bill supports “ensuring accelerated and full implementation of the Agreement on Trade-Related Aspects of Intellectual Property Rights,” which specifically mentions exceptions and limitations on copyright, and it advocates “ensuring that the provisions of any trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law,” which also recognizes copyright exceptions and limitations.

What the bill doesn’t do — and wisely so — is advocate for the inclusion of mandatory fair use language in U.S. free trade agreements.

Fair use is an exception under U.S. copyright law to the normal rule that one must obtain permission from the copyright owner before exercising any of the exclusive rights in Section 106 of the Copyright Act.

Including such language in TPA would require U.S. negotiators to demand that trading partners enact U.S.-style fair use language. But as ICLE discussed in a recent White Paper, if broad, U.S.-style fair use exceptions are infused into trade agreements they could actually increase piracy and discourage artistic creation and innovation — particularly in nations without a strong legal tradition implementing such provisions.

All trade agreements entered into by the U.S. since 1994 include a mechanism for trading partners to enact copyright exceptions and limitations, including fair use, should they so choose. These copyright exceptions and limitations must conform to a global standard — the so-called “three-step test” — established under the auspices of the 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement, and with roots going back to the 1967 amendments to the 1886 Berne Convention.

According to that standard,

Members shall confine limitations or exceptions to exclusive rights to

  1. certain special cases, which
  2. do not conflict with a normal exploitation of the work and
  3. do not unreasonably prejudice the legitimate interests of the right holder.

This three-step test provides a workable standard for balancing copyright protections with other public interests. Most important, it sets flexible (but by no means unlimited) boundaries, so, rather than squeezing every jurisdiction into the same box, it accommodates a wide range of exceptions and limitations to copyright protection, ranging from the U.S.’ fair use approach to the fair dealing exception in other common law countries to the various statutory exceptions adopted in civil law jurisdictions.

Fair use is an inherently common law concept, developed by case-by-case analysis and a system of binding precedent. In the U.S. it has been codified by statute, but only after two centuries of common law development. Even as codified, fair use takes the form of guidance to judicial decision-makers assessing whether any particular use of a copyrighted work merits the exception; it is not a prescriptive statement, and judicial interpretation continues to define and evolve the doctrine.

Most countries in the world, on the other hand, have civil law systems that spell out specific exceptions to copyright protection, that don’t rely on judicial precedent, and that are thus incompatible with the common law, fair use approach. The importance of this legal flexibility can’t be overstated: Only four countries out of the 166 signatories to the Berne Convention have adopted fair use since 1967.

Additionally, from an economic perspective the rationale for fair use would seem to be receding, not expanding, further eroding the justification for its mandatory adoption via free trade agreements.

As digital distribution, the Internet and a host of other technological advances have reduced transaction costs, it’s easier and cheaper for users to license copyrighted content. As a result, the need to rely on fair use to facilitate some socially valuable uses of content that otherwise wouldn’t occur because of prohibitive costs of contracting is diminished. Indeed, it’s even possible that the existence of fair use exceptions may inhibit the development of these sorts of mechanisms for simple, low-cost agreements between owners and users of content – with consequences beyond the material that is subject to the exceptions. While, indeed, some socially valuable uses, like parody, may merit exceptions because of rights holders’ unwillingness, rather than inability, to license, U.S.-style fair use is in no way necessary to facilitate such exceptions. In short, the boundaries of copyright exceptions should be contracting, not expanding.

It’s also worth noting that simple marketplace observations seem to undermine assertions by Internet companies that they can’t thrive without fair use. Google Search, for example, has grown big enough to attract the (misguided) attention of EU antitrust regulators, despite no European country having enacted a U.S.-style fair use law. Indeed, European regulators claim that the company has a 90% share of the market — without fair use.

Meanwhile, companies like Netflix contend that their ability to cache temporary copies of video content in order to improve streaming quality would be imperiled without fair use. But it’s impossible to see how Netflix is able to negotiate extensive, complex contracts with copyright holders to actually show their content, yet is somehow unable to negotiate an additional clause or two in those contracts to ensure the quality of those performances without fair use.

Properly bounded exceptions and limitations are an important aspect of any copyright regime. But given the mix of legal regimes among current prospective trading partners, as well as other countries with whom the U.S. might at some stage develop new FTAs, it’s highly likely that the introduction of U.S.-style fair use rules would be misinterpreted and misapplied in certain jurisdictions and could result in excessively lax copyright protection, undermining incentives to create and innovate. Of course for the self-described consumer advocates pushing for fair use, this is surely the goal. Further, mandating the inclusion of fair use in trade agreements through TPA legislation would, in essence, force the U.S. to ignore the legal regimes of its trading partners and weaken the protection of copyright in trade agreements, again undermining the incentive to create and innovate.

There is no principled reason, in short, for TPA to mandate adoption of U.S.-style fair use in free trade agreements. Congress should pass TPA legislation as introduced, and resist any rent-seeking attempts to include fair use language.

Ben Sperry and I have a long piece on net neutrality in the latest issue of Reason Magazine entitled “How to Break the Internet.” It’s part of a special collection of articles and videos dedicated to the proposition “Don’t Tread on My Internet!”

Reason has put together a great bunch of material, and packaged it in a special retro-designed page that will make you think it’s the 1990s all over again (complete with flaming graphics and dancing Internet babies).

Here’s a taste of our article:

“Net neutrality” sounds like a good idea. It isn’t.

As political slogans go, the phrase net neutrality has been enormously effective, riling up the chattering classes and forcing a sea change in the government’s decades-old hands-off approach to regulating the Internet. But as an organizing principle for the Internet, the concept is dangerously misguided. That is especially true of the particular form of net neutrality regulation proposed in February by Federal Communications Commission (FCC) Chairman Tom Wheeler.

Net neutrality backers traffic in fear. Pushing a suite of suggested interventions, they warn of rapacious cable operators who seek to control online media and other content by “picking winners and losers” on the Internet. They proclaim that regulation is the only way to stave off “fast lanes” that would render your favorite website “invisible” unless it’s one of the corporate-favored. They declare that it will shelter startups, guarantee free expression, and preserve the great, egalitarian “openness” of the Internet.

No decent person, in other words, could be against net neutrality.

In truth, this latest campaign to regulate the Internet is an apt illustration of F.A. Hayek’s famous observation that “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Egged on by a bootleggers-and-Baptists coalition of rent-seeking industry groups and corporation-hating progressives (and bolstered by a highly unusual proclamation from the White House), Chairman Wheeler and his staff are attempting to design something they know very little about — not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.

And the rest of the contents of the site are great, as well. Among other things, there’s:

  • “Why are Edward Snowden’s supporters so eager to give the government more control over the Internet?” Matt Welch’s take on the contradictions in the thinking of net neutrality’s biggest advocates.
  • “The Feds want a back door into your computer. Again.” Declan McCullagh on the eternal return of government attempts to pre-hack your technology.
  • “Uncle Sam wants your Fitbit.” Adam Thierer on the coming clampdown on data coursing through the Internet of Things.
  • Mike Godwin on how net neutrality can hurt developing countries most of all.
  • “How states are planning to grab tax dollars for online sales,” by Veronique de Rugy.
  • FCC Commissioner Ajit Pai on why net neutrality is “a solution that won’t work to a problem that simply doesn’t exist.”
  • “8 great libertarian apps that make your world a little freer and a whole lot easier to navigate.”

There’s all that, plus enough flaming images and dancing babies to make your eyes bleed. Highly recommended!

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s FTC, et al. v. St. Luke’s case.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that any theoretically possible alternative to a merger would discount those claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.
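To put the point in symbols (our gloss, not the brief’s): write p for the nominal price and q for an index of product quality, so the quality-adjusted price is p/q. A merger that holds p fixed while raising quality from q_0 to q_1 > q_0 lowers the price consumers effectively pay per unit of quality:

    \[ \frac{p}{q_1} \;<\; \frac{p}{q_0} \]

On that understanding, higher quality at an unchanged nominal price is a price decrease in the economically relevant sense.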

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.

The Wall Street Journal reported yesterday that the FTC Bureau of Competition staff report to the commissioners in the Google antitrust investigation recommended that the Commission bring an antitrust suit against the company.

While this is excellent fodder for a few hours of Twitter hysteria, it takes more than 140 characters to delve into the nuances of a 20-month federal investigation. And the bottom line is, frankly, pretty ho-hum.

As I said recently,

One of life’s unfortunate certainties, as predictable as death and taxes, is this: regulators regulate.

The Bureau of Competition staff is made up of professional lawyers — many of them litigators, whose existence is predicated on there being actual, you know, litigation. If you believe in human fallibility at all, you have to expect that, when they err, FTC staff errs on the side of too much, rather than too little, enforcement.

So is it shocking that the FTC staff might recommend that the Commission undertake what would undoubtedly have been one of the agency’s most significant antitrust cases? Hardly.

Nor is it surprising that the commissioners might not always agree with staff. In fact, staff recommendations are ignored all the time, for better or worse. Here are just a few examples: the R.J. Reynolds/Brown & Williamson merger, POM Wonderful, the Home Shopping Network/QVC merger, cigarette advertising. No doubt there are many, many more.

Regardless, it also bears pointing out that the staff did not recommend the FTC bring suit on the central issue of search bias “because of the strong procompetitive justifications Google has set forth”:

Complainants allege that Google’s conduct is anticompetitive because it forecloses alternative search platforms that might operate to constrain Google’s dominance in search and search advertising. Although it is a close call, we do not recommend that the Commission issue a complaint against Google for this conduct.

But this caveat is enormous. To report this as the FTC staff recommending a case is seriously misleading. Here they are forbearing from bringing 99% of the case against Google, and recommending suit on the marginal 1% issues. It would be more accurate to say, “FTC staff recommends no case against Google, except on a couple of minor issues which will be immediately settled.”

And in fact it was on just these minor issues that Google agreed to voluntary commitments to curtail some conduct when the FTC announced it was not bringing suit against the company.

The Wall Street Journal quotes other language from the staff report bolstering the conclusion that this is a complex market in which the conduct at issue was, at worst, ambiguous, and supporting the central recommendation not to sue:

We are faced with a set of facts that can most plausibly be accounted for by a narrative of mixed motives: one in which Google’s course of conduct was premised on its desire to innovate and to produce a high quality search product in the face of competition, blended with the desire to direct users to its own vertical offerings (instead of those of rivals) so as to increase its own revenues. Indeed, the evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.

On a global level, the record will permit Google to show substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.

This is exactly when you want antitrust enforcers to forbear. Predicting anticompetitive effects is difficult, and conduct that might look problematic may simultaneously be vigorous competition.

That the staff concluded that some of what Google was doing “harmed competitors” isn’t surprising — there were lots of competitors parading through the FTC on a daily basis claiming Google harmed them. But antitrust is about protecting consumers, not competitors. Far more important is the staff finding of “substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.”

Indeed, the combination of “substantial innovation,” “intense competition from Microsoft and others,” and “Google’s strong procompetitive justifications” suggests a well-functioning market. It similarly suggests an antitrust case that the FTC would likely have lost. The FTC’s litigators should probably be grateful that the commissioners had the good sense to vote to close the investigation.

Meanwhile, the Wall Street Journal also reports that the FTC’s Bureau of Economics simultaneously recommended that the Commission not bring suit at all against Google. It is not uncommon for the lawyers and the economists at the Commission to disagree. And as a general (though not inviolable) rule, we should be happy when the Commissioners side with the economists.

While the press, professional Google critics, and the company’s competitors may want to make this sound like a big deal, the actual facts of the case and a pretty simple error-cost analysis suggest that not bringing a case was the correct course.

Today, the International Center for Law & Economics released a white paper, co-authored by Executive Director Geoffrey Manne and Senior Fellow Julian Morris, entitled Dangerous Exception: The detrimental effects of including “fair use” copyright exceptions in free trade agreements.

Dangerous Exception explores the relationship between copyright, creativity and economic development in a networked global marketplace. In particular, it examines the evidence for and against mandating a U.S.-style fair use exception to copyright via free trade agreements like the Trans-Pacific Partnership (TPP), and through “fast-track” trade promotion authority (TPA).

In the context of these ongoing trade negotiations, some organizations have been advocating for the inclusion of dramatically expanded copyright exceptions in place of more limited language requiring that such exceptions conform to the “three-step test” implemented by the 1994 TRIPs Agreement.

The paper argues that if broad fair use exceptions are infused into trade agreements, they could increase piracy and discourage artistic creation and innovation — especially in nations without a strong legal tradition implementing such provisions.

The expansion of digital networks across borders, combined with historically weak copyright enforcement in many nations, poses a major challenge to a broadened fair use exception. The modern digital economy calls for appropriate, but limited, copyright exceptions — not their expansion.

The white paper is available here.

On February 13 an administrative law judge (ALJ) at the California Public Utilities Commission (CPUC) issued a proposed decision regarding the Comcast/Time Warner Cable (TWC) merger. The proposed decision recommends that the CPUC approve the merger with conditions.

It’s laudable that the ALJ acknowledges at least some of the competitive merits of the proposed deal. But the set of conditions that the proposed decision would impose on the combined company in order to complete the merger represents a remarkable set of unauthorized regulations that are both inappropriate for the deal and at odds with California’s legislated approach to regulation of the Internet.

According to the proposed decision, every condition it imposes is aimed at mitigating a presumed harm arising from the merger:

The Applicants must meet the conditions adopted herein in order to provide reasonable assurance that the proposed transaction will be in the public interest in accordance with Pub. Util. Code § 854(a) and (c).… We only adopt conditions which mitigate an effect of the merger in order to satisfy the public interest requirements of § 854.

By any reasonable interpretation, this would mean that the CPUC can adopt only those conditions that address specific public interest concerns arising from the deal itself. But most of the conditions in the proposed decision fail this basic test and seem designed to address broader social policy issues that have nothing to do with the alleged competitive effects of the deal.

Instead, without undertaking an analysis of the merger’s competitive effects, the proposed decision effectively accepts that the merger serves the public interest, while simultaneously accepting the merger opponents’ assertions that it doesn’t. In the name of squaring that circle, the proposed decision would permit the merger to proceed, but would then force the post-merger company to conform to the critics’ rather arbitrary vision of the preferred market structure for the provision of cable broadband services in California.

For something — say, a merger — to be in the public interest, it need not further every conceivable public interest goal. This is a perversion of the standard, and it turns “public interest” into an unconstrained license to impose a regulatory wish-list on particular actors, outside of the scope of usual regulatory processes.

While a few people may have no problem with the proposed decision’s expansive vision of Internet access regulation, California governor Jerry Brown and the overwhelming majority of the California state legislature cannot be counted among the supporters of this approach.

In 2012 the state legislature passed by an overwhelming margin — and Governor Brown signed — SB 1161 (codified as Section 710 of the California Public Utilities Code), which expressly prohibits the CPUC from regulating broadband:

The commission shall not exercise regulatory jurisdiction or control over Voice over Internet Protocol and Internet Protocol enabled services except as required or expressly delegated by federal law or expressly directed to do so by statute or as set forth in [certain enumerated exceptions].

The message is clear: The CPUC should not try to bypass clear state law and all institutional safeguards by misusing the merger clearance process.

While bipartisan majorities in the state house, supported by a Democratic governor, have stopped the CPUC from imposing new regulations on Internet and VoIP services through SB 1161, the proposed decision seeks to impose regulations through merger conditions that go far beyond anything permitted by this state law.

For instance, the proposed decision seeks to impose arbitrary retail price controls on broadband access:

Comcast shall offer to all customers of the merged companies, for a period of five years following the effective date of the parent company merger, the opportunity to purchase stand-alone broadband Internet service at a price not to exceed the price charged by Time Warner for providing that service to its customers, and at speeds, prices, and terms, at least comparable to that offered by Time Warner prior to the merger’s closing.

And the proposed decision seeks to dictate market structure in other insidious ways as well: mandating specific broadband speeds, requiring a breakneck geographic expansion of Comcast’s service area, and prescribing installation and service times, among other things — all without regard to the actual plausibility (or cost) of implementing such requirements.

But the problem is even more acute. Not only does the proposed decision seek to regulate Internet access issues irrelevant to the merger, it also proposes to impose conditions that would actually undermine competition.

The proposed decision would impose the following conditions on Comcast’s business VoIP and business Internet services:

Comcast shall offer Time Warner’s Business Calling Plan with Stand Alone Internet Access to interested CLECs throughout the combined service territories of the merging companies for a period of five years from the effective date of the parent company merger at existing prices, terms and conditions.

Comcast shall offer Time Warner’s Carrier Ethernet Last Mile Access product to interested CLECs throughout the combined service territories of the merging companies for a period of five years from the effective date of the parent company merger at the same prices, terms and conditions as offered by Time Warner prior to the merger.

But the proposed decision fails to recognize that Comcast is an also-ran in the business services market. Last year it served only a small fraction of the business customers served by AT&T and Verizon, which have long dominated that market:

According to a Sept. 2011 ComScore survey, AT&T and Verizon had the largest market shares of all business services ISPs. AT&T held 20% of market share and Verizon held 12%. Comcast ranked 6th, with 5% of market share.

The proposed conditions would hamstring the upstart challenger Comcast by removing both product and pricing flexibility for five years – an eternity in rapidly evolving technology markets. That’s a sure-fire way to minimize competition, not promote it.

The proposed decision reiterates several times its concern that the combined Comcast/Time Warner Cable will serve more than 80% of California households, and “reduce[] the possibilities for content providers to reach the California broadband market.” The alleged concern is that the combined company could exercise anticompetitive market power — imposing artificially high fees for carrying content or degrading service of unaffiliated content and services.

The problem is that Comcast and TWC don’t compete anywhere in California today, and they face competition from other providers everywhere they operate. As the decision matter-of-factly states:

Comcast and Time Warner do not compete with one another… [and] Comcast and Time Warner compete with other providers of Internet access services in their respective service territories.

As a result, the merger will actually have no effect on the number of competitive choices anywhere in the state; the increase in statewide market share from the deal is irrelevant. And so these purported competition concerns can’t be the basis for any conditions, let alone the sweeping ones set out in the proposed decision.
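To make the point concrete, here is a minimal sketch, using purely hypothetical market shares (none of these figures comes from the record), of why merging two firms whose footprints never overlap raises the merged firm’s statewide share without changing concentration in any local market:

```python
# Hypothetical illustration (all shares invented): merging two firms that
# never serve the same local market leaves every local market's
# concentration -- measured here by the HHI -- exactly unchanged.

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared percentage shares."""
    return sum(s ** 2 for s in shares)

# Firm X serves only market A; firm Y serves only market B.
pre_merger = {
    "A": {"Firm X": 50, "Rival 1": 30, "Rival 2": 20},
    "B": {"Firm Y": 50, "Rival 1": 30, "Rival 2": 20},
}

# Post-merger, firm Y's systems are simply relabeled as firm X's;
# no local share changes, so no local HHI changes.
post_merger = {
    "A": {"Firm X": 50, "Rival 1": 30, "Rival 2": 20},
    "B": {"Firm X": 50, "Rival 1": 30, "Rival 2": 20},
}

for market in pre_merger:
    before = hhi(pre_merger[market].values())
    after = hhi(post_merger[market].values())
    assert before == after
    print(f"Market {market}: HHI {before} before, {after} after")

# Firm X's statewide presence doubles, yet no household in either
# market gains or loses a single competitive option.
```

However many local markets one aggregates, the result is the same: statewide share is simply the wrong unit of analysis when the merging firms don’t compete in any common market.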

The stated concern about content providers finding it difficult to reach Californians is a red herring: the post-merger Comcast geographic footprint will be exactly the same as the combined, pre-merger Comcast/TWC/Charter footprint. Content providers will be able to access just as many Californians (and with greater speeds) as before the merger.

True, content providers that just want to reach some number of random Californians may have to reach more of them through Comcast than they would have before the merger. But what content provider just wants to reach some number of Californians in the first place? Moreover, this fundamentally misstates the way the Internet works: it is users who reach the content they prefer, not the other way around. And, once again, for literally every consumer in the state, the number of available options for doing so won’t change one iota following the merger.

Nothing shows more clearly how the proposed decision has strayed from responding to merger concerns to addressing broader social policy issues than the conditions aimed at expanding low-price broadband offerings for underserved households. Among other things, the proposed conditions dramatically increase the size and scope of Comcast’s Internet Essentials program, converting this laudable effort from a targeted program (that uses a host of tools to connect families where a child is eligible for the National School Lunch Program to the Internet) into one that must serve all low-income adults.

Putting aside the damage this would do to Internet Essentials’ core mission of connecting school-age children (by diverting resources from the program’s central purpose), the condition is manifestly outside the scope of the CPUC’s review. Nothing in the deal affects the number of adults (or children, for that matter) in California without broadband.

It’s possible, of course, that Comcast might implement something like an expanded Internet Essentials program without any prodding; after all, companies implement (and expand) such programs all the time. But why on earth should regulators be able to define such an obligation arbitrarily, and to impose it on whatever ISP happens to be asking for a license transfer? That arbitrariness creates precisely the sort of business uncertainty that SB 1161 was meant to prevent.

The same thing applies to the proposed decision’s requirement regarding school and library broadband connectivity:

Comcast shall connect and/or upgrade Internet infrastructure for K-12 schools and public libraries in unserved and underserved areas in Comcast’s combined California service territory so that it is providing high speed Internet to at least the same proportion of K-12 schools and public libraries in such unserved and underserved areas as it provides to the households in its service territory.

No doubt improving school and library infrastructure is a noble goal — and there’s even a large federal subsidy program (E-Rate) devoted to it. But insisting that Comcast do so — and do so to an extent unsupported by the underlying federal subsidy program already connecting such institutions, and in contravention of existing provider contracts with schools — as a condition of the merger is simple extortion.

The CPUC is treating the proposed merger like a free-for-all, imposing in the name of the “public interest” a set of conditions that it would never be permitted to impose absent the gun-to-the-head of merger approval. Moreover, it seeks to remake California’s broadband access landscape in a fashion that would likely never materialize in the natural course of competition: if the merger doesn’t go through, none of the conditions set out in the proposed decision, each alleged to be necessary to protect the public interest, will exist.

Far from trying to ensure that Comcast’s merger with TWC doesn’t erode competitive forces to the detriment of the public, the proposed decision tries to micromanage the market, simply asserting that the public interest demands imposition of its subjective and arbitrary laundry list of preferred items. This isn’t sensible regulation, it isn’t compliant with state law, and it doesn’t serve the people of California.