
Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulate rates for interconnection and extend the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC not only to apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but also to start a rulemaking requiring edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. In fact, it is plausible that overregulating online data collection could lead to greater use of paywalls to access content. That may be a greater threat to Internet openness than anything ISPs have done.
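
For context, a Do Not Track request is nothing more than an HTTP header (“DNT: 1”) sent by the user’s browser. The petition doesn’t specify what “honoring” it would require, but a minimal, hypothetical sketch of what compliance might look like server-side, in Python, is something like this:

```python
def should_track(request_headers: dict) -> bool:
    """Return False when the client sent a Do Not Track signal.

    By convention, browsers with DNT enabled send the header "DNT: 1".
    """
    return request_headers.get("DNT") != "1"


# A request from a browser with Do Not Track switched on:
headers = {"Host": "example.com", "DNT": "1"}

if should_track(headers):
    print("record analytics event")      # normal tracking path
else:
    print("skip tracking for this request")  # honor the DNT signal
```

The technical part is trivial; the contested question is whether Section 706 gives the FCC authority to mandate any of this for edge providers in the first place.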

And then yesterday, the first complaint under the new Open Internet rules was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation of interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but only on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for it, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it clearly has the ability to impose de facto rate regulation through case-by-case adjudication. Whether that is formally rate regulation under Title II (from which the FCC ostensibly forbore) is beside the point: it will have the same practical economic effects and will be functionally indistinguishable if and when it occurs.

In sum, while neither of these actions was contemplated by the FCC (or so it claims), such abstract rules are going to lead to complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service under Title II has had the (supposedly) unintended consequence of sweeping in far more (in both scope of application and rules) than was ostensibly bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.

Recently, Commissioner Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB), et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations, regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if the combined audience or advertising share of the merging stations is below any level that could conceivably give rise to an inference of market power.

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8-to-7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Horizontal Merger Guidelines, which look at competitive effects rather than simply counting competitors.
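
To make the contrast concrete, here is a small, hypothetical illustration in Python (the market shares are invented for the example). It compares the mechanical eight-voices count with the concentration screen the 2010 Guidelines actually use, which turns on the Herfindahl-Hirschman Index (the sum of squared market shares) and the change a merger produces in it:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

merging = [3.0, 2.0]                           # two small merging stations
rivals = [25.0, 20.0, 18.0, 14.0, 10.0, 8.0]   # six independent rivals

pre = hhi(merging + rivals)              # eight "voices" before the deal
post = hhi([sum(merging)] + rivals)      # seven after
delta = post - pre

print(f"HHI: {pre:.0f} -> {post:.0f} (delta: {delta:.0f})")  # 1722 -> 1734 (delta: 12)

# The Guidelines treat an HHI increase below 100 points as unlikely to have
# adverse competitive effects; the duopoly rule asks only how many owners remain.
voices_after_merger = 1 + len(rivals)
print("eight-voices test satisfied?", voices_after_merger >= 8)  # False
```

Under the Guidelines’ screen, an HHI increase of roughly a dozen points would ordinarily raise no concern at all; the eight-voices test nevertheless bars the combination outright.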

Not only did the FCC fail to analyze the marketplace to understand how much competition there is between local broadcasters, cable, and online video, but, on top of that, the FCC applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri to use $3.5 million in cost savings to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with JSAs have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if JSAs harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where that Spanish-language programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Order is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.

And:

[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.

Also:

The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently in Judge Silberman’s concurrence/dissent in the 2010 Verizon v. FCC Open Internet Order case:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be interesting to see what the DC Circuit does with these arguments, given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in the Open Internet case as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean).
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Last week, the FTC announced its complaint and consent decree with Nomi Technologies for failing to allow consumers to opt out of cell phone tracking while shopping in retail stores. Whatever one thinks about Nomi itself, the FTC’s action represents another step in the dubious application of its enforcement authority over deceptive statements.

In response, Geoffrey Manne, Ben Sperry, and Berin Szoka have written a new ICLE White Paper, titled In the Matter of Nomi Technologies, Inc.: The Dark Side of the FTC’s Latest Feel-Good Case.

Nomi Technologies offers retailers an innovative way to observe how customers move through their stores, how often they return, what products they browse and for how long (among other things) by tracking the Wi-Fi addresses broadcast by customers’ mobile phones. This allows stores to do what websites do all the time: tweak their configuration, pricing, purchasing and the like in response to real-time analytics, instead of just eyeballing what works. Nomi anonymized the data it collected so that retailers couldn’t track specific individuals. Recognizing that some customers might still object, even to “anonymized” tracking, Nomi allowed anyone to opt out of all Nomi tracking on its website.
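
Nomi’s actual pipeline isn’t public, but the general technique described (deriving a stable pseudonymous identifier from a device’s broadcast Wi-Fi MAC address) can be sketched in a few lines of Python. Everything here, including the salted SHA-256 approach, is an illustrative assumption rather than a description of Nomi’s system:

```python
import hashlib

SECRET_SALT = b"per-deployment-secret"  # hypothetical; rotating it breaks linkability

def anonymize_mac(mac: str) -> str:
    """One-way hash of a Wi-Fi MAC address.

    The retailer sees a stable pseudonymous ID -- enough to count repeat
    visits and dwell time -- without retaining the raw hardware address.
    """
    digest = hashlib.sha256(SECRET_SALT + mac.lower().encode("ascii"))
    return digest.hexdigest()[:16]

# The same device always maps to the same ID, so repeat visits are countable:
print(anonymize_mac("AA:BB:CC:DD:EE:FF"))
print(anonymize_mac("aa:bb:cc:dd:ee:ff"))  # identical output
```

Even hashed identifiers are only “anonymized” in a qualified sense, which is presumably why Nomi also offered the website opt-out described above.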

The FTC, though, seized upon a promise made within Nomi’s privacy policy to provide an additional, in-store opt-out, and argued that Nomi’s failure to make good on this promise — and/or to notify customers of which stores used the technology — made its privacy policy deceptive. Commissioner Wright dissented, noting that the majority failed to consider evidence showing the promise was not material: the inaccurate statement was not important enough to actually affect consumers’ behavior, because they could opt out on the website anyway. Both Commissioner Wright’s and Commissioner Ohlhausen’s dissents argued that the FTC majority’s enforcement decision in Nomi amounted to prosecutorial overreach, imposing an overly stringent standard of review without any actual indication of consumer harm.

The FTC’s deception authority is supposed to provide the agency with the authority to remedy consumer harms not effectively handled by common law torts and contracts — but it’s not a blank check. The 1983 Deception Policy Statement requires the FTC to demonstrate:

  1. There is a representation, omission or practice that is likely to mislead the consumer;
  2. A consumer’s interpretation of the representation, omission, or practice is considered reasonable under the circumstances; and
  3. The misleading representation, omission, or practice is material (meaning the inaccurate statement was important enough to actually affect consumers’ behavior).

Under the DPS, certain types of claims are treated as presumptively material, although the FTC is always supposed to “consider relevant and competent evidence offered to rebut presumptions of materiality.” The Nomi majority failed to do exactly that in its analysis of the company’s claims, as Commissioner Wright noted in his dissent:

the Commission failed to discharge its commitment to duly consider relevant and competent evidence that squarely rebuts the presumption that Nomi’s failure to implement an additional, retail-level opt out was material to consumers. In other words, the Commission neglects to take into account evidence demonstrating consumers would not “have chosen differently” but for the allegedly deceptive representation.

As we discuss in detail in the white paper, we believe that the Commission committed several additional legal errors in its application of the Deception Policy Statement in Nomi, over and above its failure to adequately weigh exculpatory evidence. Exceeding the legal constraints of the DPS isn’t just a legal problem: in this case, it’s led the FTC to bring an enforcement action that will likely have the very opposite of its intended result, discouraging rather than encouraging further disclosure.

Moreover, as we write in the white paper:

Nomi is the latest in a long string of recent cases in which the FTC has pushed back against both legislative and self-imposed constraints on its discretion. By small increments (unadjudicated consent decrees), but consistently and with apparent purpose, the FTC seems to be reverting to the sweeping conception of its power to police deception and unfairness that led the FTC to a titanic clash with Congress back in 1980.

The Nomi case presents yet another example of the need for FTC process reforms. Those reforms could ensure the FTC focuses on cases that actually make consumers better off. But given the FTC majority’s unwavering dedication to maximizing its discretion, such reforms will likely have to come from Congress.

Find the full white paper here.

Last week the International Center for Law & Economics, joined by TechFreedom, filed comments with the Federal Aviation Administration (FAA) in its Operation and Certification of Small Unmanned Aircraft Systems (“UAS” — i.e., drones) proceeding to establish rules for the operation of small drones in the National Airspace System.

We believe that the FAA has failed to appropriately weigh the costs and benefits, as well as the First Amendment implications, of its proposed rules.

The FAA’s proposed drone rules fail to meet (or even undertake) adequate cost/benefit analysis

FAA regulations are subject to Executive Order 12866, which, among other things, requires that agencies:

  • “consider incentives for innovation”;
  • “propose or adopt a regulation only upon a reasoned determination that the benefits of the intended regulation justify its costs”;
  • “base [their] decisions on the best reasonably obtainable scientific, technical, economic, and other information”; and
  • “tailor [their] regulations to impose the least burden on society.”

The FAA’s proposed drone rules fail to meet these requirements.

An important, and fundamental, problem is that the proposed rules often seem to import “scientific, technical, economic, and other information” regarding traditional manned aircraft, rather than such knowledge specifically applicable to drones and their uses — what FTC Commissioner Maureen Ohlhausen has dubbed “The Procrustean Problem with Prescriptive Regulation.”

As such, not only do the rules often not make sense as a practical matter, they also seek to simply adapt existing standards, rules and understandings promulgated for manned aircraft to regulate drones — insufficiently tailoring the rules to “impose the least burden on society.”

In some cases the rules would effectively ban obviously valuable uses outright, disregarding the rules’ effect on innovation (to say nothing of their effect on current uses of drones) without adequately defending such prohibitions as necessary to protect public safety.

Importantly, the proposed rules would effectively prohibit the use of commercial drones for long-distance services (like package delivery and scouting large agricultural plots) and for uses in populated areas — undermining what may well be drones’ most economically valuable uses.

As our comments note:

By prohibiting UAS operation over people who are not directly involved in the drone’s operation, the rules dramatically limit the geographic scope in which UAS may operate, essentially limiting commercial drone operations to unpopulated or extremely sparsely populated areas. While that may be sufficient for important agricultural and forestry uses, for example, it effectively precludes all possible uses in more urban areas, including journalism, broadcasting, surveying, package delivery and the like. Even in nonurban areas, such a restriction imposes potentially insurmountable costs.

Mandating that operators not fly over other individuals not involved in the UAS operation is, in fact, the nail in the coffin of drone deliveries, an industry that is likely to offer a significant fraction of this technology’s potential economic benefit. Imposing such a blanket ban thus improperly ignores the important “incentives for innovation” suggested by Executive Order 12866 without apparent corresponding benefit.

The FAA’s proposed drone rules fail under First Amendment scrutiny

The FAA’s failure to tailor the rules according to an appropriate analysis of their costs and benefits also causes them to violate the First Amendment. Without proper tailoring based on the unique technological characteristics of drones and a careful assessment of their likely uses, the rules are considerably more broad than the Supreme Court’s “time, place and manner” standard would allow.

Several of the rules constitute a de facto ban on most — indeed, nearly all — of the potential uses of drones that most clearly involve the collection of information and/or the expression of speech protected by the First Amendment. As we note in our comments:

While the FAA’s proposed rules appear to be content-neutral, and will thus avoid the most-exacting Constitutional scrutiny, the FAA will nevertheless have a difficult time demonstrating that some of them are narrowly drawn and adequately tailored time, place, and manner restrictions.

Indeed, many of the rules likely amount to a prior restraint on protected commercial and non-commercial activity, both for obvious existing applications like news gathering and for currently unanticipated future uses.

Our friends Eli Dourado, Adam Thierer and Ryan Hagemann at Mercatus also filed comments in the proceeding, raising similar and analogous concerns:

As far as possible, we advocate an environment of “permissionless innovation” to reap the greatest benefit from our airspace. The FAA’s rules do not foster this environment. In addition, we believe the FAA has fallen short of its obligations under Executive Order 12866 to provide thorough benefit-cost analysis.

The full Mercatus comments, available here, are also recommended reading.

Read the full ICLE/TechFreedom comments here.

Earlier this week Senators Orrin Hatch and Ron Wyden and Representative Paul Ryan introduced bipartisan, bicameral legislation, the Bipartisan Congressional Trade Priorities and Accountability Act of 2015 (otherwise known as Trade Promotion Authority or “fast track” negotiating authority). The bill would enable the Administration to negotiate free trade agreements subject to appropriate Congressional review.

Nothing bridges partisan divides like free trade.

Top presidential economic advisors from both parties support TPA. And the legislation was greeted with enthusiastic support from the business community. Indeed, a letter supporting the bill was signed by 269 of the country’s largest and most significant companies, including Apple, General Electric, Intel, and Microsoft.

Among other things, the legislation includes language calling on trading partners to respect and protect intellectual property. That language in particular was (not surprisingly) widely cheered in a letter to Congress signed by a coalition of sixteen technology, content, manufacturing and pharmaceutical trade associations, representing industries accounting for (according to the letter) “approximately 35 percent of U.S. GDP, more than one quarter of U.S. jobs, and 60 percent of U.S. exports.”

Strong IP protections also enjoy bipartisan support in much of the broader policy community. Indeed, ICLE recently joined sixty-seven think tanks, scholars, advocacy groups and stakeholders on a letter to Congress expressing support for strong IP protections, including in free trade agreements.

Despite this overwhelming support for the bill, the Internet Association (a trade association representing 34 Internet companies including giants like Google and Amazon, but mostly smaller companies like coinbase and okcupid) expressed concern with the intellectual property language in TPA legislation, asserting that “[i]t fails to adopt a balanced approach, including the recognition that limitations and exceptions in copyright law are necessary to promote the success of Internet platforms both at home and abroad.”

But the proposed TPA bill does recognize “limitations and exceptions in copyright law,” as the Internet Association is presumably well aware. Among other things, the bill supports “ensuring accelerated and full implementation of the Agreement on Trade-Related Aspects of Intellectual Property Rights,” which specifically mentions exceptions and limitations on copyright, and it advocates “ensuring that the provisions of any trade agreement governing intellectual property rights that is entered into by the United States reflect a standard of protection similar to that found in United States law,” which also recognizes copyright exceptions and limitations.

What the bill doesn’t do — and wisely so — is advocate for the inclusion of mandatory fair use language in U.S. free trade agreements.

Fair use is an exception under U.S. copyright law to the normal rule that one must obtain permission from the copyright owner before exercising any of the exclusive rights in Section 106 of the Copyright Act.

Including such language in TPA would require U.S. negotiators to demand that trading partners enact U.S.-style fair use language. But as ICLE discussed in a recent White Paper, if broad, U.S.-style fair use exceptions are infused into trade agreements they could actually increase piracy and discourage artistic creation and innovation — particularly in nations without a strong legal tradition implementing such provisions.

All trade agreements entered into by the U.S. since 1994 include a mechanism for trading partners to enact copyright exceptions and limitations, including fair use, should they so choose. These copyright exceptions and limitations must conform to a global standard — the so-called “three-step test” — established under the auspices of the 1994 Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement, and with roots going back to the 1967 amendments to the 1886 Berne Convention.

According to that standard,

Members shall confine limitations or exceptions to exclusive rights to

  1. certain special cases, which
  2. do not conflict with a normal exploitation of the work and
  3. do not unreasonably prejudice the legitimate interests of the right holder.

This three-step test provides a workable standard for balancing copyright protections with other public interests. Most important, it sets flexible (but by no means unlimited) boundaries, so, rather than squeezing every jurisdiction into the same box, it accommodates a wide range of exceptions and limitations to copyright protection, ranging from the U.S.’ fair use approach to the fair dealing exception in other common law countries to the various statutory exceptions adopted in civil law jurisdictions.

Fair use is an inherently common law concept, developed by case-by-case analysis and a system of binding precedent. In the U.S. it has been codified by statute, but only after two centuries of common law development. Even as codified, fair use takes the form of guidance to judicial decision-makers assessing whether any particular use of a copyrighted work merits the exception; it is not a prescriptive statement, and judicial interpretation continues to define and evolve the doctrine.

Most countries in the world, on the other hand, have civil law systems that spell out specific exceptions to copyright protection, that don’t rely on judicial precedent, and that are thus incompatible with the common-law fair use approach. The importance of this legal flexibility can’t be overstated: only four countries out of the 166 signatories to the Berne Convention have adopted fair use since 1967.

Additionally, from an economic perspective the rationale for fair use would seem to be receding, not expanding, further eroding the justification for its mandatory adoption via free trade agreements.

As digital distribution, the Internet and a host of other technological advances have reduced transaction costs, it’s easier and cheaper for users to license copyrighted content. As a result, the need to rely on fair use to facilitate some socially valuable uses of content that otherwise wouldn’t occur because of prohibitive costs of contracting is diminished. Indeed, it’s even possible that the existence of fair use exceptions may inhibit the development of these sorts of mechanisms for simple, low-cost agreements between owners and users of content – with consequences beyond the material that is subject to the exceptions. While, indeed, some socially valuable uses, like parody, may merit exceptions because of rights holders’ unwillingness, rather than inability, to license, U.S.-style fair use is in no way necessary to facilitate such exceptions. In short, the boundaries of copyright exceptions should be contracting, not expanding.

It’s also worth noting that simple marketplace observations seem to undermine assertions by Internet companies that they can’t thrive without fair use. Google Search, for example, has grown big enough to attract the (misguided) attention of EU antitrust regulators, despite no European country having enacted a U.S.-style fair use law. Indeed, European regulators claim that the company has a 90 percent share of the market — without fair use.

Meanwhile, companies like Netflix contend that their ability to cache temporary copies of video content in order to improve streaming quality would be imperiled without fair use. But it’s impossible to see how Netflix is able to negotiate extensive, complex contracts with copyright holders to actually show their content, yet is somehow unable to negotiate an additional clause or two in those contracts to ensure the quality of those performances without fair use.

Properly bounded exceptions and limitations are an important aspect of any copyright regime. But given the mix of legal regimes among current prospective trading partners, as well as other countries with whom the U.S. might at some stage develop new FTAs, it’s highly likely that the introduction of U.S.-style fair use rules would be misinterpreted and misapplied in certain jurisdictions and could result in excessively lax copyright protection, undermining incentives to create and innovate. Of course for the self-described consumer advocates pushing for fair use, this is surely the goal. Further, mandating the inclusion of fair use in trade agreements through TPA legislation would, in essence, force the U.S. to ignore the legal regimes of its trading partners and weaken the protection of copyright in trade agreements, again undermining the incentive to create and innovate.

There is no principled reason, in short, for TPA to mandate adoption of U.S.-style fair use in free trade agreements. Congress should pass TPA legislation as introduced, and resist any rent-seeking attempts to include fair use language.

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s FTC, et al. v. St. Luke’s case.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that any theoretically possible alternative to a merger would discount those claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.

Today, the International Center for Law & Economics released a white paper, co-authored by Executive Director Geoffrey Manne and Senior Fellow Julian Morris, entitled Dangerous Exception: The detrimental effects of including “fair use” copyright exceptions in free trade agreements.

Dangerous Exception explores the relationship between copyright, creativity and economic development in a networked global marketplace. In particular, it examines the evidence for and against mandating a U.S.-style fair use exception to copyright via free trade agreements like the Trans-Pacific Partnership (TPP), and through “fast-track” trade promotion authority (TPA).

In the context of these ongoing trade negotiations, some organizations have been advocating for the inclusion of dramatically expanded copyright exceptions in place of more limited language requiring that such exceptions conform to the “three-step test” implemented by the 1994 TRIPS Agreement.

The paper argues that if broad fair use exceptions are infused into trade agreements they could increase piracy and discourage artistic creation and innovation — especially in nations without a strong legal tradition implementing such provisions.

The expansion of digital networks across borders, combined with historically weak copyright enforcement in many nations, poses a major challenge to a broadened fair use exception. The modern digital economy calls for appropriate, but limited, copyright exceptions — not their expansion.

The white paper is available here. For some of our previous work on related issues, see:

There is a consensus in America that we need to control health care costs and improve the delivery of health care. After a long debate on health care reform and careful scrutiny of health care markets, there seems to be agreement that the unintegrated, “siloed approach” to health care is inefficient, costly, and contrary to the goal of improving care. But some antitrust enforcers — most notably the FTC — are standing in the way.

Enlightened health care providers are responding to this consensus by entering into transactions that will lead to greater clinical and financial integration, facilitating a movement from volume-based to value-based delivery of care. And many aspects of the Affordable Care Act encourage this path to integration. Yet when the market seeks to address these critical concerns about our health care system, the FTC and some state Attorneys General take positions diametrically opposed to sound national health care policy as adopted by Congress and implemented by the Department of Health and Human Services.

To be sure, not all state antitrust enforcers stand in the way of health care reform. For example, many states, including New York, Pennsylvania and Massachusetts, seem willing to permit hospital mergers even in concentrated markets, with an agreement for continued regulation. At the same time, however, the FTC has been aggressively challenging integration, taking the stance that hospital mergers will raise prices by giving those hospitals greater leverage in negotiations.

The distance between HHS and the FTC in DC is about six blocks, but in health care policy they seem to be miles apart.

The FTC’s skepticism about integration is an old story. As I have discussed previously, during the last decade the agency challenged more than 30 physician collaborations even though those cases lacked any evidence that the collaborations led to higher prices. And, when physicians asked for advice on collaborations, it took the Commission an average of more than 436 days to respond to those requests (about as long as it took Congress to debate and enact the Affordable Care Act).

The FTC is on a recent winning streak in challenging hospital mergers. But those were primarily simple cases involving direct competition between hospitals in the same, highly concentrated market. The courts did not struggle long in those cases, because the competitive harm appeared straightforward.

Far more controversial is when a hospital acquires a physician practice. This type of vertical integration seems to be precisely what the advocates of health care reform are crying out for. The lack of integration between physicians and hospitals is at the core of the problems in health care delivery. But the antitrust law is entirely solicitous of these types of vertical mergers. No vertical merger has been successfully challenged in the courts since 1980 – the days of reruns of the TV show Dr. Kildare. And even the supposedly pro-enforcement Obama Administration has not gone to court to challenge a vertical merger, and the Obama FTC has not even secured a merger consent under a vertical theory.

The case in which the FTC has decided to “bet the house” is its challenge to St. Luke’s Health System’s acquisition of Saltzer Medical Group in Nampa, Idaho.

St. Luke’s operates the largest hospital in Boise, and Saltzer is the largest physician practice in Nampa, roughly 20 miles away. But rather than recognizing that this was a vertical affiliation designed to integrate care and to promote a transition to a system in which the provider takes on the risk of overutilization, the FTC characterized the transaction as purely horizontal – no different from the merger of two hospitals. In that manner, the FTC could point to concentration levels seemingly designed to assure victory.

But back to the reasons why integration is essential. It is undisputed that provider integration is the key to improving American health care. Americans pay substantially more than any other industrialized nation for health care services, spending 17.2 percent of gross domestic product. Furthermore, these higher costs are not associated with better overall care or greater access for patients. As noted during the debate on the Affordable Care Act, the American health care system’s higher costs and lower quality and access are mostly associated with the use of a fee-for-service system that pays for each individual medical service, and with the “siloed approach” to medicine in which providers work autonomously and do not coordinate to improve patient outcomes.

In order to lower health care costs and improve care, many providers have sought to transform health care into a value-based, patient-centered approach. To institute such a health care initiative, medical staff, physicians, and hospitals must clinically integrate and align their financial incentives. Integrated providers share financial risk, share electronic records and data, and implement quality measures in order to provide the best patient care.

The most effective means of ensuring full-scale integration is through a tight affiliation, most often achieved through a merger. Unlike contractual arrangements that are costly, time-sensitive, and complicated by an outdated health care regulatory structure, integrated affiliations ensure that entities can effectively combine and promote structural change throughout the newly formed organization.

For nearly five weeks of trial in Boise, St. Luke’s and the FTC fought over these conflicting visions of integration and health care policy. Ultimately, the court decided that the supposed Nampa primary care physician market posited by the FTC would become far more concentrated, and that the merger would substantially lessen competition for “Adult Primary Care Services” by raising prices in Nampa. As such, the district court ordered an immediate divestiture.

Rarely, however, has an antitrust court expressed such anguish at its decision. The district court readily “applauded [St. Luke’s] for its efforts to improve the delivery of healthcare.” It acknowledged the positive impact the merger would have on health care within the region. The court further noted that Saltzer had attempted to coordinate with other providers via loose affiliations but had failed to reap any benefits. Due to Saltzer’s lack of integration, Saltzer physicians had limited “the number of Medicaid or uninsured patients they could accept.”

According to the district court, the combination of St. Luke’s and Saltzer would “improve the quality of medical care.” Along with utilizing the same electronic medical records system and giving the Saltzer physicians access to sophisticated quality metrics designed to improve their practices, the parties would improve care by abandoning fee-for-service payment for all employed physicians and instituting population health management, reimbursing the physicians via risk-based payment initiatives.

As noted by the district court, these stated efficiencies would improve patient outcomes “if left intact.” Along with improving coordination and quality of care, the merger, as noted by an amicus brief submitted by the International Center for Law & Economics and the Medicaid Defense Fund to the Ninth Circuit, has also already expanded access to Medicaid and uninsured patients by ensuring previously constrained Saltzer physicians can offer services to the most needy.

The court ultimately was not persuaded by the demonstrated procompetitive benefits. Instead, the district court relied on the FTC’s misguided arguments and determined that the stated efficiencies were not “merger-specific,” because such efficiencies could potentially be achieved via other organizational structures. The district court did not analyze the potential success of substitute structures in achieving the stated efficiencies; instead, it relied on the mere existence of alternative provider structures. As a result, as ICLE and the Medicaid Defense Fund point out:

By placing the ultimate burden of proving efficiencies on the Appellants and applying a narrow, impractical view of merger specificity, the court has wrongfully denied application of known procompetitive efficiencies. In fact, under the court’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to oppose untested, theoretical less restrictive structural alternatives.

Notably, the district court’s divestiture order has been stayed by the Ninth Circuit. The appeal on the merits is expected to be heard sometime this autumn. Along with reviewing the relevant geographic market and the use of divestiture as a remedy, the Ninth Circuit will also analyze the lower court’s treatment of the merger’s procompetitive efficiencies. For now, the stay order is a limited victory for underserved patients and the merging defendants. While such a ruling is not determinative of the Ninth Circuit’s decision on the merits, it does demonstrate that the merging parties have at least a reasonable possibility of success.

As one might imagine, the Ninth Circuit decision is of great importance to the antitrust and health care reform community. If the district court’s ruling is upheld, it could deter health care providers from further integrating via mergers, a precedent antithetical to the very goals of health care reform. However, if the Ninth Circuit finds the merger does not substantially lessen competition, then procompetitive vertical integration is less likely to be derailed by misapplication of the antitrust laws. The importance and impact of such a decision for American patients cannot be overstated.

The Federal Trade Commission’s recent enforcement actions against Amazon and Apple raise important questions about the FTC’s consumer protection practices, especially its use of economics. How does the Commission weigh the costs and benefits of its enforcement decisions? How does the agency employ economic analysis in digital consumer protection cases generally?

Join the International Center for Law and Economics and TechFreedom on Thursday, July 31 at the Woolly Mammoth Theatre Company for a lunch and panel discussion on these important issues, featuring FTC Commissioner Joshua Wright, Director of the FTC’s Bureau of Economics Martin Gaynor, and several former FTC officials. RSVP here.

Commissioner Wright will present a keynote address discussing his dissent in Apple and his approach to applying economics in consumer protection cases generally.

Geoffrey Manne, Executive Director of ICLE, will briefly discuss his recent paper on the role of economics in the FTC’s consumer protection enforcement. Berin Szoka, TechFreedom President, will moderate a panel discussion featuring:

  • Martin Gaynor, Director, FTC Bureau of Economics
  • David Balto, Fmr. Deputy Assistant Director for Policy & Coordination, FTC Bureau of Competition
  • Howard Beales, Fmr. Director, FTC Bureau of Consumer Protection
  • James Cooper, Fmr. Acting Director & Fmr. Deputy Director, FTC Office of Policy Planning
  • Pauline Ippolito, Fmr. Acting Director & Fmr. Deputy Director, FTC Bureau of Economics

Background

The FTC recently issued a complaint and consent order against Apple, alleging its in-app purchasing design doesn’t meet the Commission’s standards of fairness. The action and resulting settlement drew a forceful dissent from Commissioner Wright, and sparked a discussion among the Commissioners about balancing economic harms and benefits in Section 5 unfairness jurisprudence. More recently, the FTC brought a similar action against Amazon, which is now pending in federal district court because Amazon refused to settle.

Event Info

The “FTC: Technology and Reform” project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology. The Project’s initial report, released in December 2013, identified critical questions facing the agency, Congress, and the courts about the FTC’s future, and proposed a framework for addressing them.

The event will be live streamed here beginning at 12:15 pm. Join the conversation on Twitter with the #FTCReform hashtag.

When:

Thursday, July 31
11:45 am – 12:15 pm — Lunch and registration
12:15 pm – 2:00 pm — Keynote address, paper presentation & panel discussion

Where:

Woolly Mammoth Theatre Company – Rehearsal Hall
641 D St NW
Washington, DC 20004

Questions? Email mail@techfreedom.org. RSVP here.

See ICLE’s and TechFreedom’s other work on FTC reform, including:

  • Geoffrey Manne’s Congressional testimony on the FTC@100
  • Op-ed by Berin Szoka and Geoffrey Manne, “The Second Century of the Federal Trade Commission”
  • Two posts by Geoffrey Manne on the FTC’s Amazon Complaint, here and here.

About The International Center for Law and Economics:

The International Center for Law and Economics is a non-profit, non-partisan research center aimed at fostering rigorous policy analysis and evidence-based regulation.

About TechFreedom:

TechFreedom is a non-profit, non-partisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.

The International Center for Law & Economics (ICLE) and TechFreedom filed two joint comments with the FCC today, explaining why the FCC has no sound legal basis for micromanaging the Internet and why “net neutrality” regulation would actually prove counter-productive for consumers.

The Policy Comments are available here, and the Legal Comments are here. See our previous post, Net Neutrality Regulation Is Bad for Consumers and Probably Illegal, for a distillation of many of the key points made in the comments.

New regulation is unnecessary. “An open Internet and the idea that companies can make special deals for faster access are not mutually exclusive,” said Geoffrey Manne, Executive Director of ICLE. “If the Internet really is ‘open,’ shouldn’t all companies be free to experiment with new technologies, business models and partnerships?”

“The media frenzy around this issue assumes that no one, apart from broadband companies, could possibly question the need for more regulation,” said Berin Szoka, President of TechFreedom. “In fact, increased regulation of the Internet will incite endless litigation, which will slow both investment and innovation, thus harming consumers and edge providers.”

Title II would be a disaster. The FCC has proposed re-interpreting the Communications Act to classify broadband ISPs under Title II as common carriers. But reinterpretation might unintentionally ensnare edge providers, weighing them down with onerous regulations. “So-called reclassification risks catching other Internet services in the crossfire,” explained Szoka. “The FCC can’t easily forbear from Title II’s most onerous rules because the agency has set a high bar for justifying forbearance. Rationalizing a changed approach would be legally and politically difficult. The FCC would have to simultaneously find the broadband market competitive enough to forbear, yet fragile enough to require net neutrality rules. It would take years to sort out this mess — essentially hitting the pause button on better broadband.”

Section 706 is not a viable option. In 2010, the FCC claimed Section 706 as an independent grant of authority to regulate any form of “communications” not directly barred by the Act, provided only that the Commission assert that regulation would somehow promote broadband. “This is an absurd interpretation,” said Szoka. “This could allow the FCC to essentially invent a new Communications Act as it goes, regulating not just broadband, but edge companies like Google and Facebook, too, and not just neutrality but copyright, cybersecurity and more. The courts will eventually strike down this theory.”

A better approach. “The best policy would be to maintain the ‘Hands off the Net’ approach that has otherwise prevailed for 20 years,” said Manne. “That means a general presumption that innovative business models and other forms of ‘prioritization’ are legal. Innovation could thrive, and regulators could still keep a watchful eye, intervening only where there is clear evidence of actual harm, not just abstract fears.” “If the FCC thinks it can justify regulating the Internet, it should ask Congress to grant such authority through legislation,” added Szoka. “A new communications act is long overdue anyway. The FCC could also convene a multistakeholder process to produce a code enforceable by the Federal Trade Commission,” he continued, noting that the White House has endorsed such processes for setting Internet policy in general.

Manne concluded: “The FCC should focus on doing what Section 706 actually commands: clearing barriers to broadband deployment. Unleashing more investment and competition, not writing more regulation, is the best way to keep the Internet open, innovative and free.”

For some of our other work on net neutrality, see:

  • “Understanding Net(flix) Neutrality,” an op-ed by Geoffrey Manne in the Detroit News on Netflix’s strategy to confuse interconnection costs with neutrality issues.
  • “The Feds Lost on Net Neutrality, But Won Control of the Internet,” an op-ed by Berin Szoka and Geoffrey Manne in Wired.com.
  • “That startup investors’ letter on net neutrality is a revealing look at what the debate is really about,” a post by Geoffrey Manne in Truth on the Market.
  • “Bipartisan Consensus: Rewrite of ‘96 Telecom Act is Long Overdue,” a post on TF’s blog highlighting the key points from TechFreedom and ICLE’s joint comments on updating the Communications Act.

The Net Neutrality Comments are available here:

ICLE/TF Net Neutrality Policy Comments

TF/ICLE Net Neutrality Legal Comments