
The FCC’s proposed “Open Internet Order,” which would impose heavy-handed “common carrier” regulation on Internet service providers in order to promote “net neutrality,” is fundamentally misconceived.  (The Order is being appealed in federal court, and there are good arguments for striking it down.)  If upheld, it will slow innovation, impose substantial costs, and harm consumers (see Heritage Foundation commentaries on FCC Internet regulation here, here, here, and here).  What’s more, it is not needed to protect consumers and competition from potential future abuse by Internet firms.  As I explain in a Heritage Foundation Legal Memorandum published yesterday, should the Open Internet Order be struck down, the U.S. Federal Trade Commission (FTC) has ample authority under Section 5 of the Federal Trade Commission Act (FTC Act) to challenge any harmful conduct by entities involved in Internet broadband services markets when such conduct undermines competition or harms consumers.

Section 5 of the FTC Act authorizes the FTC to prevent persons, partnerships, or corporations from engaging in “unfair methods of competition” or “unfair or deceptive acts or practices” in or affecting commerce.  This gives it ample authority to challenge Internet abuses raising antitrust (unfair methods) and consumer protection (unfair acts or practices) issues.

On the antitrust side, in evaluating individual business restraints under a “rule of reason,” the FTC relies on objective fact-specific analyses of the actual economic and consumer protection implications of a particular restraint.  Thus, FTC evaluations of broadband industry restrictions are likely to be more objective and predictable than highly subjective “public interest” assessments by the FCC, leading to reduced error and lower planning costs for purveyors of broadband and related services.  Appropriate antitrust evaluation should accord broad leeway to most broadband contracts.  As FTC Commissioner Josh Wright put it in testifying before Congress, “fundamental observation and market experience [demonstrate] that the business practices at the heart of the net neutrality debate are generally procompetitive.”  This suggests application of a rule of reason that will fully weigh efficiencies but not shy away from challenging broadband-related contractual arrangements that undermine the competitive process.

On the consumer protection side, the FTC can attack statements made by businesses that mislead and thereby impose harm on consumers (including business purchasers) who are acting reasonably.  It can also challenge practices that, though not literally false or deceptive, impose substantial harm on consumers (including business purchasers) that they cannot reasonably avoid, assuming the harm is greater than any countervailing benefits.  These are carefully designed and cabined sources of authority that require the FTC to determine the presence of actual consumer harm before acting.  Application of the FTC’s unfairness and deception powers therefore lacks the uncertainty associated with the FCC’s uncabined and vague “public interest” standard of evaluation.  As in the case of antitrust, the existence of greater clarity and a well-defined analytic methodology suggests that reliance on FTC rather than FCC enforcement in this area is preferable from a policy standpoint.

Finally, arguments for relying on FTC Internet policing are based on experience as well – the FTC is no Internet policy novice.  It closely monitors Internet activity and, over the years, it has developed substantial expertise in Internet topics through research, hearings, and enforcement actions.

Most recently, for example, the FTC sued AT&T in federal court for allegedly slowing wireless customers’ Internet speeds, although the customers had subscribed to “unlimited” data usage plans.  The FTC asserted that in offering renewals to unlimited-plan customers, AT&T did not adequately inform them of a new policy to “throttle” (drastically reduce the speed of) customer data service once a certain monthly data usage cap was met.  The direct harm of throttling came on top of the high fees dissatisfied customers would face for terminating their service early.  The FTC characterized this behavior as both “unfair” and “deceptive.”  Moreover, the Commission claimed that throttling-related speed reductions and data restrictions were not determined by real-time network congestion and thus did not even qualify as reasonable network management activity.  This case illustrates that the FTC is perfectly capable of challenging potential “network neutrality” violations that harm consumer welfare (since “throttled” customers receive service inferior to that afforded customers on “tiered” service plans), and thus FCC involvement is unwarranted.

In sum, if a court strikes down the latest FCC effort to regulate the Internet, the FTC has ample authority to address competition and consumer protection problems in the area of broadband, including questions related to net neutrality.  The FTC’s highly structured, analytic, fact-based approach to these issues is superior to FCC net neutrality regulation based on vague and unfocused notions of the public interest.  If a court does not act, Congress might wish to consider legislation to prohibit FCC Internet regulation and leave oversight of potential competitive and consumer abuses to the FTC.

Understanding the nature and extent of the growth of the federal regulatory state is vital to sound policymaking.  Taking that to heart, over the last decade the Heritage Foundation has issued a series of reports measuring trends in federal regulatory activity.  On May 11 of this year, Heritage released its most recent regulatory study, “Red Tape Rising: Six Years of Escalating Regulation Under Obama” (RTP).  RTP, as the title suggests, paints a grim picture of rapidly escalating federal regulation unmoored from sound cost-benefit analysis – regulatory policy that is detrimental to American economic health.  Fortunately, RTP also suggests potential prescriptions to tame the federal regulatory virus.  You should read the entire study, but a few key excerpts from RTP, highlighted below, merit particular attention:

“The number and cost of government regulations continued to climb in 2014, intensifying Washington’s control over the economy and Americans’ lives. The addition of 27 new major rules last year pushed the tally for the Obama Administration’s first six years to 184, with scores of other rules in the pipeline. The cost of just these 184 rules is estimated by regulators to be nearly $80 billion annually, although the actual cost of this massive expansion of the administrative state is obscured by the large number of rules for which costs have not been fully quantified. Absent substantial reform, economic growth and individual freedom will continue to suffer.”

“President Barack Obama has repeatedly demonstrated his willingness to act by regulatory fiat instead of executing laws as passed by Congress. But regulatory overreach by the executive branch is only part of the problem. A great deal of the excessive regulation in the past six years is the result of Congress granting broad powers to agencies through passage of vast and vaguely worded legislation. The misnamed Affordable Care Act and the Dodd–Frank financial-regulation law top the list.”

“Many more regulations are on the way, with another 126 economically significant rules on the Administration’s agenda, such as directives to farmers for growing and harvesting fruits and vegetables; strict limits on credit access for service members; and, yet another redesign of light bulbs.”

“In many respects, the need for reform of the regulatory system has never been greater. The White House, Congress, and federal agencies routinely ignore regulatory costs, exaggerate benefits, and breach legislative and constitutional boundaries. They also increasingly dictate lifestyle choices rather than focusing on public health and safety.”

“Immediate reforms should include requiring legislation to undergo an analysis of regulatory impacts before a floor vote in Congress, and requiring every major regulation to obtain congressional approval before taking effect. Sunset deadlines should be set in law for all major rules, and independent agencies should be subject—as are executive branch agencies—to the White House regulatory review process.”

In light of its findings, RTP makes eight specific recommendations:

  1. Require congressional approval of new major regulations issued by agencies.
  2. Require regulatory impact assessments of proposed legislation.
  3. Establish a sunset date for regulations.
  4. Subject “independent” agencies to executive branch regulatory review.
  5. Codify stricter information-quality standards for rulemaking.
  6. Reform “sue and settle” practices (under which regulators work in concert with advocacy groups to produce settlements to lawsuits that result in greater regulation).
  7. Increase professional staff levels within the Office of Information and Regulatory Affairs (OIRA) (the small White House agency charged with reviewing proposed regulations).
  8. Codify the requirement now imposed by Executive Order 12866 mandating agencies to assess the costs and benefits of proposed rules and to consider alternatives.

These excellent recommendations merit serious consideration by federal policymakers.

Recently, Commissioner Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB) et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if their combined share of the audience or of advertising is below the level that could conceivably give rise to any inference of market power.

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8 to 7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Merger Guidelines, which look at competitive effects rather than just counting competitors.

Not only did the FCC fail to analyze the marketplace to understand how much competition there is between local broadcasters, cable, and online video, but, on top of that, the FCC applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri, to use their $3.5 million of cost savings from a JSA to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with JSAs have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if JSAs harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas, enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where that Spanish-language programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Order is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.

And:

[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.

Also:

The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently in Judge Silberman’s concurrence/dissent in the 2010 Verizon v. FCC Open Internet Order case:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be interesting to see what the DC Circuit does with these arguments given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in that case, as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean).
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Last week the International Center for Law & Economics, joined by TechFreedom, filed comments with the Federal Aviation Administration (FAA) in its Operation and Certification of Small Unmanned Aircraft Systems (“UAS” — i.e., drones) proceeding to establish rules for the operation of small drones in the National Airspace System.

We believe that the FAA has failed to appropriately weigh the costs and benefits, as well as the First Amendment implications, of its proposed rules.

The FAA’s proposed drones rules fail to meet (or even undertake) adequate cost/benefit analysis

FAA regulations are subject to Executive Order 12866, which, among other things, requires that agencies:

  • “consider incentives for innovation”;
  • “propose or adopt a regulation only upon a reasoned determination that the benefits of the intended regulation justify its costs”;
  • “base [their] decisions on the best reasonably obtainable scientific, technical, economic, and other information”; and
  • “tailor [their] regulations to impose the least burden on society.”

The FAA’s proposed drone rules fail to meet these requirements.

An important, and fundamental, problem is that the proposed rules often seem to import “scientific, technical, economic, and other information” regarding traditional manned aircraft, rather than such knowledge specifically applicable to drones and their uses — what FTC Commissioner Maureen Ohlhausen has dubbed “The Procrustean Problem with Prescriptive Regulation.”

As such, not only do the rules often not make sense as a practical matter, they also seek to simply adapt existing standards, rules and understandings promulgated for manned aircraft to regulate drones — insufficiently tailoring the rules to “impose the least burden on society.”

In some cases the rules would effectively ban obviously valuable uses outright, disregarding the rules’ effect on innovation (to say nothing of their effect on current uses of drones) without adequately defending such prohibitions as necessary to protect public safety.

Importantly, the proposed rules would effectively prohibit the use of commercial drones for long-distance services (like package delivery and scouting large agricultural plots) and for uses in populated areas — undermining what may well be drones’ most economically valuable uses.

As our comments note:

By prohibiting UAS operation over people who are not directly involved in the drone’s operation, the rules dramatically limit the geographic scope in which UAS may operate, essentially limiting commercial drone operations to unpopulated or extremely sparsely populated areas. While that may be sufficient for important agricultural and forestry uses, for example, it effectively precludes all possible uses in more urban areas, including journalism, broadcasting, surveying, package delivery and the like. Even in nonurban areas, such a restriction imposes potentially insurmountable costs.

Mandating that operators not fly over other individuals not involved in the UAS operation is, in fact, the nail in the coffin of drone deliveries, an industry that is likely to offer a significant fraction of this technology’s potential economic benefit. Imposing such a blanket ban thus improperly ignores the important “incentives for innovation” suggested by Executive Order 12866 without apparent corresponding benefit.

The FAA’s proposed drone rules fail under First Amendment scrutiny

The FAA’s failure to tailor the rules according to an appropriate analysis of their costs and benefits also causes them to violate the First Amendment. Without proper tailoring based on the unique technological characteristics of drones and a careful assessment of their likely uses, the rules are considerably more broad than the Supreme Court’s “time, place and manner” standard would allow.

Several of the rules constitute a de facto ban on most — indeed, nearly all — of the potential uses of drones that most clearly involve the collection of information and/or the expression of speech protected by the First Amendment. As we note in our comments:

While the FAA’s proposed rules appear to be content-neutral, and will thus avoid the most-exacting Constitutional scrutiny, the FAA will nevertheless have a difficult time demonstrating that some of them are narrowly drawn and adequately tailored time, place, and manner restrictions.

Indeed, many of the rules likely amount to a prior restraint on protected commercial and non-commercial activity, both for obvious existing applications like news gathering and for currently unanticipated future uses.

Our friends Eli Dourado, Adam Thierer and Ryan Hagemann at Mercatus also filed comments in the proceeding, raising similar and analogous concerns:

As far as possible, we advocate an environment of “permissionless innovation” to reap the greatest benefit from our airspace. The FAA’s rules do not foster this environment. In addition, we believe the FAA has fallen short of its obligations under Executive Order 12866 to provide thorough benefit-cost analysis.

The full Mercatus comments, available here, are also recommended reading.

Read the full ICLE/TechFreedom comments here.

Ben Sperry and I have a long piece on net neutrality in the latest issue of Reason Magazine entitled, “How to Break the Internet.” It’s part of a special collection of articles and videos dedicated to the proposition “Don’t Tread on My Internet!”

Reason has put together a great bunch of material, and packaged it in a special retro-designed page that will make you think it’s the 1990s all over again (complete with flaming graphics and dancing Internet babies).

Here’s a taste of our article:

“Net neutrality” sounds like a good idea. It isn’t.

As political slogans go, the phrase net neutrality has been enormously effective, riling up the chattering classes and forcing a sea change in the government’s decades-old hands-off approach to regulating the Internet. But as an organizing principle for the Internet, the concept is dangerously misguided. That is especially true of the particular form of net neutrality regulation proposed in February by Federal Communications Commission (FCC) Chairman Tom Wheeler.

Net neutrality backers traffic in fear. Pushing a suite of suggested interventions, they warn of rapacious cable operators who seek to control online media and other content by “picking winners and losers” on the Internet. They proclaim that regulation is the only way to stave off “fast lanes” that would render your favorite website “invisible” unless it’s one of the corporate-favored. They declare that it will shelter startups, guarantee free expression, and preserve the great, egalitarian “openness” of the Internet.

No decent person, in other words, could be against net neutrality.

In truth, this latest campaign to regulate the Internet is an apt illustration of F.A. Hayek’s famous observation that “the curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” Egged on by a bootleggers-and-Baptists coalition of rent-seeking industry groups and corporation-hating progressives (and bolstered by a highly unusual proclamation from the White House), Chairman Wheeler and his staff are attempting to design something they know very little about: not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.

And the rest of the contents of the site are great, as well. Among other things, there’s:

  • “Why are Edward Snowden’s supporters so eager to give the government more control over the Internet?” Matt Welch’s take on the contradictions in the thinking of net neutrality’s biggest advocates.
  • “The Feds want a back door into your computer. Again.” Declan McCullagh on the eternal return of government attempts to pre-hack your technology.
  • “Uncle Sam wants your Fitbit.” Adam Thierer on the coming clampdown on data coursing through the Internet of Things.
  • Mike Godwin on how net neutrality can hurt developing countries most of all.
  • “How states are planning to grab tax dollars for online sales,” by Veronique de Rugy
  • FCC Commissioner Ajit Pai on why net neutrality is “a solution that won’t work to a problem that simply doesn’t exist.”
  • “8 great libertarian apps that make your world a little freer and a whole lot easier to navigate.”

There’s all that, plus enough flaming images and dancing babies to make your eyes bleed. Highly recommended!

Much ink has been spilled (and with good reason) about the excessive and totally unnecessary regulatory burdens associated with the Federal Communications Commission’s (FCC) February 26 “Open Internet Order” (OIO), which imposes public utility regulation on Internet traffic.  For example, as Heritage Foundation Senior Research Fellow James Gattuso recently explained, “[d]evised for the static monopolies, public-utility regulation will be corrosive to today’s dynamic Internet. There’s a reason the phrase ‘innovative public utility’ doesn’t flow easily from the tongue. The hundreds of rules that come with public utility status are geared to keeping monopolies in line, not encouraging new or innovative ways of doing things. . . .  Even worse, by imposing burdens on big and small carriers alike, the new rules may actually stifle chances of increasing competition among broadband providers.”

Apart from its excessive and unjustifiable economic costs, the OIO has another unfortunate feature which has not yet been widely commented upon – it is an invitation to cronyism, which is an affront to the neutral application of the laws.  As Heritage Foundation President Jim DeMint and Heritage Action President Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts and complex regulations. Those special preferences undermine competition on the merits by firms that lack insider status, harming the public.

But what scope is there for cronyism in the FCC’s application of its OIO?  A lot.  As I explain in a March 30 Heritage Foundation Daily Signal blog posting, the FCC will provide OIO guidance through “enforcement advisories” and “advisory opinions,” and the Commission’s Enforcement Bureau can request written opinions from outside organizations.  Translating this bureaucratese into English, the FCC is saying that the inherently open-ended language that determines whether an Internet business practice is given a thumbs up or thumbs down will turn on “opinions” that will require the input of high-priced lawyers and advisers.  Smaller and emerging firms that cannot afford to pay for influence may be out of luck.  Moreover, large established companies that are experts at the “Washington game” and engage in administration-approved activities or expenditures (such as politically correct green projects or the right campaign contributions) may be given special consideration when the FCC’s sages decide whether an Internet business practice is “unreasonable” or not.  This means, for example, that firms that are willing to pay more for better Internet access to challenge such powerful firms as Netflix in video services or Google in search activities or Facebook in social networking may be out of luck, if they are less effective at playing the Washington influence game than at competing on the merits.  Those who downplay this risk should recall that the FCC has a long and sad record of using regulations to advantage powerful incumbents (for decades the FCC shielded AT&T from cellular telephony competition and the over-the-air television broadcasters from cable competition).

In short, the benefits to American consumers and the overall American economy generated by a regulation-free Internet—not to mention the ability of entrepreneurs to thrive, free from cronyism—may soon become a thing of the past, unless action is taken by Congress or the courts.  American citizens deserve better than that from their government.

In its February 25 North Carolina Dental v. Federal Trade Commission decision, the U.S. Supreme Court held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state. Will this decision discourage harmful protectionist regulation, such as the prohibition on tooth whitening by non-dentists at issue in this case? Will it also interfere with the ability of states to shape their regulatory programs as they see fit? U.S. Federal Trade Commissioner Maureen Ohlhausen will address this important set of questions in a March 31 luncheon presentation at the Heritage Foundation, with Clark Neily of the Institute for Justice and Misha Tseytlin of the West Virginia State Attorney General’s Office providing expert commentary. (You may view this event online or register to attend it in person here).

Just in time for this event, the Heritage Foundation has released a legal memorandum on “North Carolina Dental Board and the Reform of State-Sponsored Protectionism.”  The  memorandum explains that North Carolina Dental “has far-reaching ramifications for the reform of ill-conceived protectionist state regulations that limit entry into myriad professions and thereby harm consumers. In holding that a state regulatory board controlled by market participants in the industry being regulated cannot cloak its anticompetitive rules in ‘state action’ antitrust immunity unless it is ‘actively supervised’ by the state, the Court struck a significant blow against protectionist rent-seeking legislation and for economic liberty. The states may re-examine their licensing statutes in light of the Court’s decision, but if they decline to revise their regulatory schemes to eliminate their unjustifiable exclusionary effect, there may well be yet another round of challenges to those programs—this time based on the federal Constitution.”

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.


In a recent post, I explained how the U.S. Supreme Court’s February 25 opinion in North Carolina Dental Board v. FTC (holding that a state regulatory board controlled by market participants must be “actively supervised” by the state to receive antitrust immunity) struck a significant blow against protectionist rent-seeking and for economic liberty.  Maureen Ohlhausen, who has spoken out against special interest government regulation as an FTC Commissioner (and formerly as Director of the FTC’s Office of Policy Planning), will discuss the ramifications of the Court’s North Carolina Dental decision in a March 31 luncheon speech at the Heritage Foundation.  Senior Attorney Clark Neily of the Institute for Justice and Misha Tseytlin, General Counsel in the West Virginia Attorney General’s Office, will provide expert commentary on the Commissioner’s speech.  You can register for this event here.

Anybody who has spent much time with children knows how squishy a concept “unfairness” can be.  One can hear the exchange, “He’s not being fair!” “No, she’s not!,” only so many times before coming to understand that unfairness is largely in the eye of the beholder.

Perhaps it’s unfortunate, then, that Congress chose a century ago to cast the Federal Trade Commission’s authority in terms of preventing “unfair methods of competition.”  But that’s what it did, and the question now is whether there is some way to mitigate this “eye of the beholder” problem.

There is.

We know that any business practice that violates the substantive antitrust laws (the Sherman and Clayton Acts) is an unfair method of competition, so we can look to Sherman and Clayton Act precedents to assess the “unfairness” of business practices that those laws reach.  But what about the Commission’s so-called “standalone” UMC authority—its power to prevent business practices that seem to impact competition unfairly but are not technically violations of the substantive antitrust laws?

Almost two years ago, Commissioner Josh Wright recognized that if the FTC’s standalone UMC authority is to play a meaningful role in assuring market competition, the Commission should issue guidelines on what constitutes an unfair method of competition. He was right.  The Commission, you see, really has only four options with respect to standalone Section 5 claims:

  1. It could bring standalone actions based on current commissioners’ considered judgments about what constitutes unfairness. Such an approach, though, is really inconsistent with the rule of law. Past commissioners, for example, have gone so far as to suggest that practices causing “resource depletion, energy waste, environmental contamination, worker alienation, [and] the psychological and social consequences of producer-stimulated demands” could be unfair methods of competition. Maybe our current commissioners wouldn’t cast so wide a net, but they’re not always going to be in power. A government of laws and not of men simply can’t mete out state power on the basis of whim.
  2. It could bring standalone actions based on unfairness principles appearing in Section 5’s “common law.” The problem here is that there is no such common law. As Commissioner Wright has observed and I have previously explained, a common law doesn’t just happen. Development of a common law requires vigorously litigated disputes and reasoned, published opinions that resolve those disputes and serve as precedent. Section 5 “litigation,” such as it is, doesn’t involve any of that.
    • First, standalone Section 5 disputes tend not to be vigorously litigated. Because the FTC acts as both prosecutor and judge in such actions, their outcome is nearly a foregone conclusion. When FTC staff win before the administrative law judge, the ALJ’s decision is always affirmed by the full Commission; when staff lose before the ALJ, the full Commission always reverses. Couple this stacked deck with the fact that unfairness exists in the eye of the beholder and will therefore change with the composition of the Commission, and we end up with a situation in which accused parties routinely settle. As Commissioner Wright observes, “parties will typically prefer to settle a Section 5 claim rather than go through lengthy and costly litigation in which they are both shooting at a moving target and have the chips stacked against them.”
    • The consent decrees that memorialize settlements, then, offer little prospective guidance. They usually don’t include any detailed explanation of why the practice at issue was an unfair method of competition. Even if they did, it wouldn’t matter much; the Commission doesn’t treat its own enforcement decisions as precedent. In light of the realities of Section 5 litigation, there really is no Section 5 common law.
  3. It could refrain from bringing standalone Section 5 actions and pursue only business practices that violate the substantive antitrust laws. Substantive antitrust violations constitute unfair methods of competition, and the federal courts have established fairly workable principles for determining when business practices violate the Sherman and Clayton Acts. The FTC could therefore avoid the “eye of the beholder” problem by limiting its UMC authority to business conduct that violates the antitrust laws. Such an approach, though, would prevent the FTC from policing conduct that, while not technically an antitrust violation, is anticompetitive and injurious to consumers.
  4. It could bring standalone Section 5 actions based on articulated guidelines establishing what constitutes an unfair method of competition. This is really the only way to use Section 5 to pursue business practices that are not otherwise antitrust violations, without offending the rule of law.

Now, if the FTC is to take this fourth approach—the only one that both allows for standalone Section 5 actions and honors rule of law commitments—it obviously has to settle on a set of guidelines.  Fortunately, it has almost done so!

Since Commissioner Wright called for Section 5 guidelines almost two years ago, much ink has been spilled outlining and critiquing proposed guidelines.  Commissioner Wright got the ball rolling by issuing his own proposal along with his call for the adoption of guidelines.  Commissioner Ohlhausen soon followed suit, proposing a slightly broader set of principles.  Numerous commentators then joined the conversation (a number doing so in a TOTM symposium), and each of the other commissioners has now stated her own views.

A good deal of consensus has emerged.  Each commissioner agrees that Section 5 should be used to prosecute only conduct that is actually anticompetitive (as defined by the federal courts).  There is also apparent consensus on the view that standalone Section 5 authority should not be used to challenge conduct governed by well-forged liability principles under the Sherman and Clayton Acts.  (For example, a practice routinely evaluated under Section 2 of the Sherman Act should not be pursued using standalone Section 5 authority.)  The commissioners, and the vast majority of commentators, also agree that there should be some efficiencies screen in prosecution decisions.  The remaining disagreement centers on the scope of the efficiencies screen—i.e., how much of an efficiency benefit must a business practice confer in order to be insulated from standalone Section 5 liability?

On that narrow issue—the only legitimate point of dispute remaining among the commissioners—three views have emerged:  Commissioner Wright would refrain from prosecuting if the conduct at issue creates any cognizable efficiencies; Commissioner Ohlhausen would do so as long as the efficiencies are not disproportionately outweighed by anticompetitive harms; Chairwoman Ramirez would engage in straightforward balancing (not a “disproportionality” inquiry) and would refrain from prosecution only where efficiencies outweigh anticompetitive harms.

That leaves three potential sets of guidelines.  In each, it would be necessary that a behavior subject to any standalone Section 5 action (1) create actual or likely anticompetitive harm, and (2) not be subject to well-forged case law under the traditional antitrust laws (since pursuing a standalone action against such conduct would blur the distinction between lawful and unlawful commercial behavior).  Each of the three sets of guidelines would also include an efficiencies screen—either (3a) the conduct lacks cognizable efficiencies, (3b) the harms created by the conduct are disproportionate to the conduct’s cognizable efficiencies, or (3c) the harms created by the conduct are not outweighed by cognizable efficiencies.

As Commissioner Wright has observed, any one of these sets of guidelines would be superior to the status quo.  Accordingly, if the commissioners could agree on the acceptability of any of them, they could improve the state of U.S. competition law.

Recognizing as much, Commissioner Wright is wisely calling on the commissioners to vote on the acceptability of each set of guidelines.  If any set is deemed acceptable by a majority of commissioners, it should be promulgated as official FTC Guidance.  (Presumably, if more than one set commands majority support, the set that most restrains FTC enforcement authority would be the one promulgated as FTC Guidance.)

Of course, individual commissioners might just choose not to vote.  That would represent a sad abdication of authority.  Given that there isn’t (and under current practice, there can’t be) a common law of Section 5, failure to vote on a set of guidelines would effectively cast a vote for either option 1 stated above (ignore rule of law values) or option 3 (limit Section 5’s potential to enhance consumer welfare).  Let’s hope our commissioners don’t relegate us to those options.

The debate has occurred.  It’s time to vote.