
Last week, the George Washington University Center for Regulatory Studies convened a conference (the "GW Conference") on the status of Transatlantic Trade and Investment Partnership (TTIP) negotiations between the European Union (EU) and the United States (U.S.), which were launched in 2013 and will continue for an indefinite period. In launching TTIP, the Obama Administration claimed that this pact would raise economic welfare in the U.S. and the EU by stimulating investment and lowering non-tariff barriers between the two jurisdictions – by, among other measures, "significantly cut[ting] the cost of differences in [European Union and United States] regulation and standards by promoting greater compatibility, transparency, and cooperation."

Whether TTIP, if enacted, would actually raise economic welfare in the United States is an open question, however. As a recent Heritage Foundation analysis of TTIP explained, a TTIP focus on “harmonizing” regulations could actually lower economic freedom (and welfare) by “regulating upward” through acceptance of the more intrusive approach, and by precluding future competition among alternative regulatory models that could lead to welfare-enhancing regulatory improvements. Thus, the Heritage study recommended that “[a]ny [TTIP] agreement should be based on mutual recognition, not harmonization, of regulations.”

Unfortunately, discussion at the GW Conference indicated that the welfare-superior mutual recognition approach has been rejected by negotiators – at least as of now. In response to a question I posed on the benefits of mutual recognition, an EU official responded that such an “academic” approach is not “realistic,” while a senior U.S. TTIP negotiator indicated that mutual recognition could prove difficult where regulatory approaches differ. I read those diplomatically couched responses as signaling that both sides opposed the mutual recognition approach. This is a real problem. As part of TTIP, U.S. and EU sector-specific regulators are actively engaged in discussing regulatory particulars. There is the distinct possibility that the regulators may agree on measures that raise regulatory burdens for the sectors covered – particularly given the oft-repeated motto at the GW Conference that TTIP must not reduce existing levels of “protection” for health, safety, and the environment. (Those blandishments eschew any cost-benefit calculus to justify existing protection levels.) This conclusion is further supported by public choice theory, which suggests that regulators may be expected to focus on expanding the size and scope of their regulatory domains, not on contracting them. To make things worse, TTIP raises the possibility that the highly successful U.S. tradition of reliance on private sector-led voluntary consensus standards, as opposed to the EU’s preference for heavy government involvement in standard-setting policies, may be undermined. Any move toward greater direct government influence on U.S. standard setting as part of a TTIP bargain would further undermine the vibrancy, competition, and innovation that have led to the great international success of U.S.-developed technical standards.

As a practical matter, however, is there time for a change in direction in TTIP negotiations regarding regulation and standards? Yes, there is. The TTIP negotiators face no true deadline. Moreover, as a matter of political reality, the eventual U.S. statutory adoption of TTIP measures may require the passage by Congress of “fast-track” trade promotion authority (TPA), which provides for congressional up-or-down votes (without possibility of amendment) on legislation embodying trade deals that have been negotiated by the Executive Branch. Given the political sensitivity of trade deals, they cannot easily be renegotiated if they are altered by congressional amendments. (Indeed, in recent decades all major trade agreements requiring implementing legislation have proceeded under TPA.)

If the Obama Administration decides that it wants to advance TTIP, it must rely on a Republican-controlled Congress to obtain TPA. Before it grants such authority, Congress should conduct hearings and demand that Administration officials testify about key aspects of the Administration’s TTIP negotiating philosophy, and, in particular, on how U.S. TTIP negotiators are approaching regulatory differences between the U.S. and the EU. Congress should make it a prerequisite to the grant of TPA that the final TTIP agreement embody welfare-enhancing mutual recognition of regulations and standards, rather than welfare-reducing harmonization. It should vote down any TTIP negotiated deal that fails to satisfy this requirement.

In March 2014, the U.S. Government’s National Telecommunications and Information Administration (NTIA, the Executive Branch’s telecommunications policy agency) abruptly announced that it did not plan to renew its contract with the Internet Corporation for Assigned Names and Numbers (ICANN) to maintain core functions of the Internet. ICANN oversees the Internet domain name system through its subordinate agency, the Internet Assigned Numbers Authority (IANA). In its March statement, NTIA proposed that ICANN consult with “global stakeholders” to agree on an alternative to the “current role played by NTIA in the coordination of the Internet’s [domain name system].”

In recent months Heritage Foundation scholars have discussed concerns stemming from this vaguely-defined NTIA initiative (see, for example, here, here, here, here, here, and here). These concerns include fears that eliminating the U.S. Government’s role in Internet governance could embolden other nations and international organizations (especially the International Telecommunications Union, an arm of the United Nations) to seek to regulate the Internet and limit speech, and create leeway for ICANN to expand beyond its core activities and trench upon Internet freedoms.

Although NTIA has testified that its transition plan would preclude such undesirable outcomes, the reaction to these assurances should be “trust but verify” (especially given the recent Administration endorsement of burdensome Internet common carrier regulation, which appears to be at odds with the spirit if not the letter of NTIA’s assurances).

Reflecting the "trust but verify" spirit, the just-introduced "Defending Internet Freedom Act of 2014" requires that NTIA maintain its existing Internet oversight functions, unless the NTIA Administrator certifies in writing that certain specified assurances have been met regarding Internet governance. Those assurances include findings that the management of the Internet domain name system will not be exercised by foreign governmental or intergovernmental bodies; that ICANN's bylaws will be amended to uphold First Amendment-type freedoms of speech, assembly, and association; that a four-fifths supermajority will be required for changes in ICANN's bylaws or fees for services; that an independent process for resolving disputes between ICANN and third parties will be established; and that a host of other requirements aimed at protecting Internet freedoms and ensuring ICANN and IANA accountability will be instituted.

Legislative initiatives of this sort, while no panacea, play a valuable role in signaling Congress’s intent to hold the Administration accountable for seeing to it that key Internet freedoms (including the avoidance of onerous regulation and deleterious restrictions on speech and content) are maintained. They merit thoughtful consideration.

I highly recommend that free market aficionados attend or listen to the Heritage Foundation’s October 7 program on economic liberties and the Constitution.  This event, hosted by my colleague Paul Larkin, will feature presentations by constitutional litigator Clark Neily of the Institute for Justice and two brilliant market-oriented Constitutional scholars – Professors Randy Barnett and David Bernstein.  The program will highlight recent legal thinking that may reinvigorate efforts to rein in rent-seeking and restore a proper respect for constitutionally-guaranteed economic liberties.  If you cannot watch the program live, it will be available shortly thereafter for viewing on the Heritage Foundation’s website.

The cause of limiting governmental incursions on constitutionally-protected economic freedoms is far from hopeless.  As a Heritage Foundation Special Report released yesterday explains, recent federal court decisions are beginning to put real teeth into the “rational basis” test for reviewing protectionist state laws under the Due Process and Equal Protection Clauses of the Constitution.  For example, several courts have held that mere protectionism, standing alone, does not provide a sufficient rationale to justify a law under rational basis review.  Also, there may be some potential for striking down highly restrictive laws based on “changed circumstances,” for applying the First Amendment to excessive limitations on speech imposed by occupational licensing regulations, or for invoking antitrust to strike down inadequately supervised or articulated anticompetitive statutes.

Stay tuned for future work by Heritage scholars on economic liberties.

Shanker Singham of the Babson Global Institute (formerly a leading international trade lawyer and author of the most comprehensive one-volume work on the interplay between competition and international trade policy) has published a short article introducing the concept of “enterprise cities.”  This article, which outlines an incentives-based, market-oriented approach to spurring economic development, is well worth reading.  A short summary follows.

Singham points out that the transition away from socialist command-and-control economies, accompanied by international trade liberalization, too often failed to create competitive markets within developing countries.  Anticompetitive market distortions imposed by government and generated by politically-connected domestic rent-seekers continue to thrive – measures such as entry barriers that favor entrenched incumbent firms, and other regulatory provisions that artificially favor specific powerful domestic business interests (“crony capitalists”).  Such widespread distortions reduce competition and discourage inward investment, thereby retarding innovation and economic growth and reducing consumer welfare.  Political influence exercised by the elite beneficiaries of the distortions may prevent legal reforms that would remove these regulatory obstacles to economic development.  What, then, can be done to disturb this welfare-inimical state of affairs, when sweeping, nationwide legal reforms are politically impossible?

One incremental approach, advanced by Professor Paul Romer and others, is the establishment of “charter cities” – geographic zones within a country that operate under government-approved free market-oriented charters, rather than under restrictive national laws.  Building on this concept, Babson Global Institute has established a “Competitiveness and Enterprise Development Project” (CEDP) designed to promote the notion of “Enterprise Cities” (ECs) – geographically demarcated zones of regulatory autonomy within countries, governed by a Board.  ECs would be created through negotiations between a national government and a third party group, such as CEDP.  The negotiations would establish “Regulatory Framework Agreements” embodying legal rules (implemented through statutory or constitutional amendments by the host country) that would apply solely within the EC.  Although EC legal regimes would differ with respect to minor details (reflecting local differences that would affect negotiations), they would be consistent in stressing freedom of contract, flexible labor markets, and robust property rights, and in prohibiting special regulatory/legal favoritism (so as to avoid anticompetitive market distortions).  Protecting foreign investment through third party arbitration and related guarantees would be key to garnering foreign investor interest in ECs.   The goal would be to foster a business climate favorable to investment, job creation, innovation, and economic growth.  The EC Board would ensure that agreed-to rules would be honored and enforced by EC-specific legal institutions, such as courts.

Because market-oriented EC rules will not affect market-distortive laws elsewhere within the host country, well-organized rent-seeking elites may not have as strong an incentive to oppose creating ECs.  Indeed, to the extent that a share of EC revenues is transferred to the host country government (depending upon the nature of the EC’s charter), elites might directly benefit, using their political connections to share in the profits.  In short, although setting up viable ECs is no easy matter, their establishment need not be politically infeasible.  Indeed, the continued success of Hong Kong as a free market island within China (Hong Kong places first in the Heritage Foundation’s Index of Economic Freedom), operating under the Basic Law of Hong Kong, suggests the potential for ECs to thrive, despite having rules very different from the parent state’s legal regime.  (Moreover, the success of Hong Kong may have proven contagious, as China is now promoting a new Shanghai Free Trade Zone that would compete with Hong Kong and Singapore.)

The CEDP is currently negotiating the establishment of ECs with a number of governments.  As Singham explains, successful launch of an EC requires:  (1) a committed developer; (2) land that can be used for a project; (3) a good external infrastructure connecting the EC with the rest of the country; and (4) “a government that recognizes the benefits to its reform agenda and to its own economic plan of such a designation of regulatory autonomy and is willing to confront its own challenges by thinking outside the box.”  While the fourth prerequisite may be the most difficult to achieve, internal pressures for faster economic growth and increased investment may lead jurisdictions with burdensome regulatory regimes to consider ECs.

Furthermore, as Singham stresses, by promoting competition on the merits, free from favoritism, a successful EC could stimulate successful entrepreneurship.  Scholarly work points to the importance of entrepreneurship to economic development.

Finally, the beneficial economic effects of ECs could give additional ammunition to national competition authorities as they advocate for less restrictive regulatory frameworks within their jurisdictions.  It could thereby render more effective the efforts of the many new national competition authorities, whose success in enhancing competitive conditions within their jurisdictions has been limited at best.

ECs are no panacea – they will not directly affect restrictive national regulatory laws that benefit privileged special interests but harm the overall economy.  However, to the extent they prove financial successes, over time they could play a crucial indirect role in enhancing competition, reducing inefficiency, and spurring economic growth within their host countries.

Section 5 of the Federal Trade Commission Act proclaims that “[u]nfair methods of competition . . . are hereby declared unlawful.” The FTC has exclusive authority to enforce that provision and uses it to prosecute Sherman Act violations. The Commission also uses the provision to prosecute conduct that doesn’t violate the Sherman Act but is, in the Commission’s view, an “unfair method of competition.”

That’s somewhat troubling, for “unfairness” is largely in the eye of the beholder. One FTC Commissioner recently defined an unfair method of competition as an action that is “‘collusive, coercive, predatory, restrictive, or deceitful,’ or otherwise oppressive, [where the actor lacks] a justification grounded in its legitimate, independent self-interest.” Some years ago, a commissioner observed that a “standalone” Section 5 action (i.e., one not premised on conduct that would violate the Sherman Act) could be used to police “social and environmental harms produced as unwelcome by-products of the marketplace: resource depletion, energy waste, environmental contamination, worker alienation, the psychological and social consequences of producer-stimulated demands.” While it’s unlikely that any FTC Commissioner would go that far today, the fact remains that those subject to Section 5 really don’t know what it forbids.  And that situation flies in the face of the Rule of Law, which at a minimum requires that those in danger of state punishment know in advance what they’re not allowed to do.

In light of this fundamental Rule of Law problem (not to mention the detrimental chilling effect vague competition rules create), many within the antitrust community have called for the FTC to provide guidance on the scope of its “unfair methods of competition” authority. Most notably, two members of the five-member FTC—Commissioners Maureen Ohlhausen and Josh Wright—have publicly called for the Commission to promulgate guidelines. So have former FTC Chairman Bill Kovacic, a number of leading practitioners, and a great many antitrust scholars.

Unfortunately, FTC Chairwoman Edith Ramirez has opposed the promulgation of Section 5 guidelines. She says she instead “favor[s] the common law approach, which has been a mainstay of American antitrust policy since the turn of the twentieth century.” Chairwoman Ramirez observes that the common law method has managed to distill workable liability rules from broad prohibitions in the primary antitrust statutes. Section 1 of the Sherman Act, for example, provides that “[e]very contract, combination … or conspiracy, in restraint of trade … is declared to be illegal.” Section 2 prohibits actions to “monopolize, or attempt to monopolize … any part of … trade.” Clayton Act Section 7 forbids any merger whose effect “may be substantially to lessen competition, or tend to create a monopoly.” Just as the common law transformed these vague provisions into fairly clear liability rules, the Chairwoman says, it can be used to provide adequate guidance on Section 5.

The problem is, there is no Section 5 common law. As Commissioner Wright and his attorney-advisor Jan Rybnicek explain in a new paper, development of a common law—which concededly may be preferable to a prescriptive statutory approach, given its flexibility, ability to evolve with new learning, and sensitivity to time- and place-specific factors—requires certain conditions that do not exist in the Section 5 context.

The common law develops and evolves in a salutary direction because (1) large numbers of litigants do their best to persuade adjudicators of the superiority of their position; (2) the closest cases—those requiring the adjudicator to make fine distinctions—get appealed and reported; (3) the adjudicators publish opinions that set forth all relevant facts, the arguments of the parties, and why one side prevailed over the other; (4) commentators criticize published opinions that are unsound or rely on welfare-reducing rules; (5) adjudicators typically follow past precedents, tweaking (or occasionally overruling) them when they have been undermined; and (6) future parties rely on past decisions when planning their affairs.

Section 5 “adjudication,” such as it is, doesn’t look anything like this. Because the Commission has exclusive authority to bring standalone Section 5 actions, it alone picks the disputes that could form the basis of any common law. It then acts as both prosecutor and judge in the administrative action that follows. Not surprisingly, defendants, who cannot know the contours of a prohibition that will change with the composition of the Commission and who face an inherently biased tribunal, usually settle quickly. After all, they are, in Commissioner Wright’s words, both “shooting at a moving target and have the chips stacked against them.” As a result, we end up with very few disputes, and even those are not vigorously litigated.

Moreover, because nearly all standalone Section 5 actions result in settlements, we almost never end up with a reasoned opinion from an adjudicator explaining why she did or did not find liability on the facts at hand and why she rejected the losing side’s arguments. These sorts of opinions are absolutely crucial for the development of the common law. Chairwoman Ramirez says litigants can glean principles from other administrative documents like complaints and consent agreements, but those documents can’t substitute for a reasoned opinion that parses arguments and says which work, which don’t, and why. On top of all this, the FTC doesn’t even treat its own enforcement decisions as precedent! How on earth could the Commission’s body of enforcement decisions guide decision-making when each could well be a one-off?

I’m a huge fan of the common law. It generally accommodates the Hayekian “knowledge problem” far better than inflexible, top-down statutes. But it requires both inputs—lots of vigorously litigated disputes—and outputs—reasoned opinions that are recognized as presumptively binding. In the Section 5 context, we’re short on both. It’s time for guidelines.

The free market position on telecom reform has become rather confused of late. Erstwhile conservative Senator Thune is now cosponsoring a version of Senator Rockefeller’s previously proposed video reform bill, bundled into satellite legislation (the Satellite Television Access and Viewer Rights Act or “STAVRA”) that would also include a provision dubbed “Local Choice.” Some free marketeers have defended the bill as a step in the right direction.

Although it looks as if the proposal may be losing steam this Congress, the legislation has been described as a “big and bold idea,” and it’s by no means off the menu. But it should be.

It has been said that politics makes for strange bedfellows. Indeed, people who disagree on just about everything can sometimes unite around a common perceived enemy. Take carriage disputes, for instance. Perhaps because, for some people, a day without The Bachelor is simply a day lost, an unlikely alliance of pro-regulation activists like Public Knowledge and industry stalwarts like Dish has emerged to oppose the ability of copyright holders to withhold content as part of carriage negotiations.

Senator Rockefeller’s Online Video Bill was the catalyst for the Local Choice amendments to STAVRA. Rockefeller’s bill did, well, a lot of terrible things, from imposing certain net neutrality requirements, to overturning the Supreme Court’s Aereo decision, to adding even more complications to the already Byzantine morass of video programming regulations.

But putting Senator Thune’s lipstick on Rockefeller’s pig can’t save the bill, and some of the worst problems from Senator Rockefeller’s original proposal remain.

Among other things, the new bill is designed to weaken the ability of copyright owners to negotiate with distributors, most notably by taking away their ability to withhold content during carriage disputes and by forcing TV stations to sell content on an a la carte basis.

Video distribution issues are complicated — at least under current law. But at root these are just commercial contracts and, like any contracts, they rely on a couple of fundamental principles.

First is the basic property right. The Supreme Court (at least somewhat) settled this for now (in Aereo), by protecting the right of copyright holders to be compensated for carriage of their content. With this baseline, distributors must engage in negotiations to obtain content, rather than employing technological workarounds and exploiting legal loopholes.

Second is the related ability of contracts to govern the terms of trade. A property right isn’t worth much if its owner can’t control how it is used, governed or exchanged.

Finally, and derived from these, is the issue of bargaining power. Good-faith negotiations require both sides not to act strategically by intentionally causing negotiations to break down. But if negotiations do break down, parties need to be able to protect their rights. When content owners are not able to withhold content in carriage disputes, they are put in an untenable bargaining position. This invites bad faith negotiations by distributors.

The STAVRA/Local Choice proposal would undermine the property rights and freedom of contract that bring The Bachelor to your TV, and the proposed bill does real damage by curtailing the scope of the property right in TV programming and restricting the range of contracts available for networks to license their content.

The bill would require that essentially all broadcast stations that elect retransmission consent make their content available a la carte — thus unbundling some of the proverbial sticks that make up the traditional property right. It would also establish MVPD pass-through of each local affiliate: subscribers would pay a fee determined by the affiliate, and the station would have to be offered on an unbundled basis, without any minimum tier required — meaning an MVPD must offer local stations to its customers with no markup, on an a la carte basis, if the station doesn’t elect must-carry. It would also direct the FCC to open a rulemaking to determine whether broadcasters should be prohibited from withholding their content online during a dispute with an MVPD.

“Free market” supporters of the bill assert something like “if we don’t do this to stop blackouts, we won’t be able to stem the tide of regulation of broadcasters.” Presumably this would end blackouts of broadcast programming: If you’re an MVPD subscriber, and you pay the $1.40 (or whatever) for CBS, you get it, period. The broadcaster sets an annual per-subscriber rate; MVPDs pass it on and retransmit only to subscribers who opt in.

But none of this is good for consumers.

When transaction costs are positive, negotiations sometimes break down. If the original right is placed in the wrong hands, then contracting may not assure the most efficient outcome. I think it was Coase who said that.

But taking away the ability of content owners to restrict access to their content during a bargaining dispute effectively places the right to content in the hands of distributors. Obviously, this change in bargaining position will depress the value of content. Placing the rights in the hands of distributors reduces the incentive to create content in the first place; this is why the law protects copyright to begin with. But it also reduces the ability of content owners and distributors to reach innovative agreements and contractual arrangements (like certain promotional deals) that benefit consumers, distributors and content owners alike.
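The bargaining-power point can be made concrete with a stylized symmetric Nash bargaining split, in which each party receives its disagreement (outside-option) payoff plus half the surplus that agreement creates. All of the numbers below are hypothetical, chosen only to illustrate how removing the right to withhold content shifts the split against the content owner:

```python
def nash_split(joint, d_owner, d_dist):
    """Symmetric Nash bargaining: each party gets its disagreement
    payoff plus half of the surplus created by reaching agreement."""
    surplus = joint - d_owner - d_dist
    return d_owner + surplus / 2, d_dist + surplus / 2

JOINT = 100  # hypothetical joint value of a carriage deal

# With a withholding right, a blackout hurts the distributor more
# (it loses subscribers), so the owner's outside option is stronger.
with_right = nash_split(JOINT, d_owner=40, d_dist=10)

# If the owner must keep supplying content during a dispute, its
# outside option collapses while the distributor's improves.
without_right = nash_split(JOINT, d_owner=5, d_dist=45)

print(with_right)     # (65.0, 35.0) -> owner captures most of the value
print(without_right)  # (30.0, 70.0) -> same deal, owner's share falls
```

The deal's joint value never changes; only the disagreement points do. That is the sense in which a rule against withholding "depresses the value of content" even when no blackout ever occurs.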

The mandating of a la carte licensing doesn’t benefit consumers, either. Bundling is generally pro-competitive and actually gives consumers more content than they would otherwise have. The bill’s proposal to force programmers to sell content to consumers a la carte may actually lead to higher overall prices for less content. Not much of a bargain.
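The bundling claim follows from a standard Adams–Yellen-style pricing exercise. The sketch below uses two hypothetical viewers with mirror-image channel valuations (all numbers invented for illustration) and compares the revenue-maximizing a la carte prices with a bundle price:

```python
from itertools import product

# Hypothetical per-channel valuations for two viewers
vals = {"A": {"ch1": 9, "ch2": 1},
        "B": {"ch1": 1, "ch2": 9}}

def alacarte_revenue(p1, p2):
    # Each viewer buys a channel only if her valuation covers its price.
    rev = 0
    for v in vals.values():
        if v["ch1"] >= p1: rev += p1
        if v["ch2"] >= p2: rev += p2
    return rev

# Seller searches over candidate prices (the valuations themselves)
best_alc = max((alacarte_revenue(p1, p2), p1, p2)
               for p1, p2 in product([1, 9], repeat=2))

def bundle_revenue(pb):
    # A viewer buys the bundle if her combined valuation covers its price.
    return sum(pb for v in vals.values() if v["ch1"] + v["ch2"] >= pb)

best_bundle = (bundle_revenue(10), 10)

print(best_alc)     # (18, 9, 9): each viewer buys ONE channel at 9
print(best_bundle)  # (20, 10): both viewers take BOTH channels at 10
```

Under a la carte, each viewer pays 9 for a single channel; under the bundle, each pays 10 for both (an effective per-channel price of 5). Forcing unbundling in this toy market raises the per-channel price and shrinks the amount of content consumed — the "higher prices for less content" result.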

There are plenty of other ways this is bad for consumers, even if it narrowly “protects” them from blackouts. For example, the bill would prohibit a network from making a deal with an MVPD that provides a discount on a bundle including carriage of both its owned broadcast stations as well as the network’s affiliated cable programming. This is not a worthwhile — or free market — trade-off; it is an ill-advised and economically indefensible attack on vertical distribution arrangements — exactly the same thing that animates many net neutrality defenders.

Just as net neutrality’s meddling in commercial arrangements between ISPs and edge providers will ensure a host of unintended consequences, so will the Rockefeller/Thune bill foreclose a host of welfare-increasing deals. In the end, in exchange for never having to go three days without CBS content, the bill will make that content more expensive, limit the range of programming offered, and lock video distribution into a prescribed business model.

Former FCC Commissioner Rob McDowell sees the same hypocritical connection between net neutrality and broadcast regulation of the sort embodied in the Local Choice bill:

According to comments filed with the FCC by Time Warner Cable and the National Cable and Telecommunications Association, broadcasters should not be allowed to take down or withhold the content they produce and own from online distribution even if subscribers have not paid for it—as a matter of federal law. In other words, edge providers should be forced to stream their online content no matter what. Such an overreach, of course, would lay waste to the economics of the Internet. It would also violate the First Amendment’s prohibition against state-mandated, or forced, speech—the flip side of censorship.

It is possible that the cable companies figure that subjecting powerful broadcasters to anti-free speech rules will shift the political momentum in the FCC and among the public away from net neutrality. But cable’s anti-free speech arguments play right into the hands of the net-neutrality crowd. They want to place the entire Internet ecosystem, physical networks, content and apps, in the hands of federal bureaucrats.

While cable providers have generally opposed net neutrality regulation, there is, apparently, some support among them for regulations that would apply to the edge. The Rockefeller/Thune proposal is just a replay of this constraint — this time by forcing programmers to allow retransmission of broadcast content under terms set by Congress. While “what’s good for the goose is good for the gander” sounds appealing in theory, here it is simply doubling down on a terrible idea.

What it reveals most of all is that true neutrality advocates don’t want government control to be limited to ISPs — rather, progressives like Rockefeller (and apparently some conservatives, like Thune) want to subject the whole apparatus — distribution and content alike — to intrusive government oversight in order to “protect” consumers (a point Fred Campbell deftly expands upon here and here).

You can be sure that, if the GOP supports broadcast a la carte, it will pave the way for Democrats (and moderates like McCain who back a la carte) to expand anti-consumer unbundling requirements to cable next. Nearly every economic analysis has concluded that mandated a la carte pricing of cable programming would be harmful to consumers. There is no reason to think that applying it to broadcast channels would be any different.

What’s more, the logical extension of the bill is to apply unbundling to all MVPD channels and to saddle them with contract restraints, as well — and while we’re at it, why not unbundle House of Cards from Orange is the New Black? The Rockefeller bill may have started in part as an effort to “protect” OVDs, but there’ll be no limiting this camel once its nose is under the tent. Like it or not, channel unbundling is arbitrary — why not unbundle by program, episode, studio, production company, etc.?

There is simply no principled basis for the restraints in this bill, and thus there will be no limit to its reach. Indeed, “free market” defenders of the Rockefeller/Thune approach may well be supporting a bill that ultimately leads to something like compulsory, a la carte licensing of all video programming. As I noted in my testimony last year before the House Commerce Committee on the satellite video bill:

Unless we are prepared to bear the consumer harm from reduced variety, weakened competition and possibly even higher prices (and absolutely higher prices for some content), there is no economic justification for interfering in these business decisions.

So much for property rights — and so much for vibrant video programming.

That there is something wrong with the current system is evident to anyone who looks at it. As Gus Hurwitz noted in recent testimony on Rockefeller’s original bill,

The problems with the existing regulatory regime cannot be understated. It involves multiple statutes implemented by multiple agencies to govern technologies developed in the 60s, 70s, and 80s, according to policy goals from the 50s, 60s, and 70s. We are no longer living in a world where the Rube Goldberg of compulsory licenses, must carry and retransmission consent, financial interest and syndication exclusivity rules, and the panoply of Federal, state, and local regulations makes sense – yet these are the rules that govern the video industry.

While video regulation is in need of reform, this bill is not an improvement. In the short run it may ameliorate some carriage disputes, but it will do so at the expense of continued programming vibrancy and distribution innovations. The better way to effect change would be to abolish the Byzantine regulations that simultaneously attempt to place thumbs of both sides of the scale, and to rely on free market negotiations with a copyright baseline and antitrust review for actual abuses.

But STAVRA/Local Choice is about as far from that as you can get.

The Wall Street Journal dropped an FCC bombshell last week, although I’m not sure anyone noticed. In an article ostensibly about the possible role that MFNs might play in the Comcast/Time-Warner Cable merger, the Journal noted that

The FCC is encouraging big media companies to offer feedback confidentially on Comcast’s $45-billion offer for Time Warner Cable.

Not only is the FCC holding secret meetings, but it is encouraging Comcast’s and TWC’s commercial rivals to hold confidential meetings and to submit information under seal. This is not a normal part of ex parte proceedings at the FCC.

In the typical proceeding of this sort – known as a “permit-but-disclose proceeding” – ex parte communications are subject to a host of disclosure requirements delineated in 47 CFR 1.1206. But section 1.1200(a) of the Commission’s rules permits the FCC, in its discretion, to modify the applicable procedures if the public interest so requires.

If you dig deeply into the Public Notice seeking comments on the merger, you find a single sentence stating that

Requests for exemptions from the disclosure requirements pursuant to section 1.1204(a)(9) may be made to Jonathan Sallet [the FCC’s General Counsel] or Hillary Burchuk [who heads the transaction review team].

Similar language appears in the AT&T/DirecTV transaction Public Notice.

This leads to the cited rule exempting certain ex parte presentations from the usual disclosure requirements in such proceedings, including the referenced one that exempts ex partes from disclosure when

The presentation is made pursuant to an express or implied promise of confidentiality to protect an individual from the possibility of reprisal, or there is a reasonable expectation that disclosure would endanger the life or physical safety of an individual

So the FCC is inviting “media companies” to offer confidential feedback and to hold secret meetings that the FCC will hold confidential because of “the possibility of reprisal” based on language intended to protect individuals.

Such deviations from the standard permit-but-disclose procedures are extremely rare. As in non-existent. I guess there might be other examples, but I was unable to find a single one in a quick search. And I’m willing to bet that the language inviting confidential communications in the PN hasn’t appeared before – and certainly not in a transaction review.

It is worth pointing out that the language in 1.1204(a)(9) is remarkably similar to language that appears in the Freedom of Information Act. As the DOJ notes regarding that exemption:

Exemption 7(D) provides protection for “records or information compiled for law enforcement purposes [which] could reasonably be expected to disclose the identity of a confidential source… to ensure that “confidential sources are not lost through retaliation against the sources for past disclosure or because of the sources’ fear of future disclosure.”

Surely the fear-of-reprisal rationale for confidentiality makes sense in that context – but here? And invoked to elicit secret meetings and to keep information submitted by corporations, rather than individuals, confidential, it makes even less sense (and doesn’t even obviously comply with the rule itself). It is not as though – as far as I know – someone approached the Commission with stated fears and requested it implement a procedure for confidentiality in these particular reviews.

Rather, this is the Commission inviting non-transparent process in the midst of a heated, politicized and heavily-scrutinized transaction review.

The optics are astoundingly bad.

Unfortunately, this kind of behavior seems to be par for the course for the current FCC. As Commissioner Pai has noted on more than one occasion, the minority commissioners have been routinely kept in the dark with respect to important matters at the Commission – not coincidentally, in other highly-politicized proceedings.

What’s particularly troubling is that, for all its faults, the FCC’s process is typically extremely open and transparent. Public comments, endless ex parte meetings, regular Open Commission Meetings are all the norm. And this is as it should be. Particularly when it comes to transactions and other regulated conduct for which the regulated entity bears the burden of proving that its behavior does not offend the public interest, it is obviously necessary to have all of the information – to know what might concern the Commission and to make a case respecting those matters.

The kind of arrogance on display of late, and the seeming abuse of process that goes along with it, hearkens back to the heady days of Kevin Martin’s tenure as FCC Chairman – a tenure described as “dysfunctional” and noted for its abuse of process.

All of which should stand as a warning to the vocal, pro-regulatory minority pushing for the FCC to proclaim enormous power to regulate net neutrality – and broadband generally – under Title II. Just as Chairman Martin tried to manipulate diversity rules to accomplish his pet project of cable channel unbundling, some future Chairman will undoubtedly claim authority under Title II to accomplish some other unintended, but politically expedient, objective — and it may not be one the self-proclaimed consumer advocates like, when it happens.

Bad as that risk may be, it is only made more likely by regulatory reviews undertaken in secret. Whatever impelled the Chairman to invite unprecedented secrecy into these transaction reviews, it seems to be of a piece with a deepening politicization and abuse of process at the Commission. It’s both shameful – and deeply worrying.

The U.S. Federal Trade Commission (FTC) continues to expand its presence in online data regulation.  On August 13 the FTC announced a forthcoming workshop to explore appropriate policies toward “big data,” a term used to refer to advancing technologies that are dramatically expanding the commercial collection, analysis, use, and storage of data.  This initiative follows on the heels of the FTC’s May 2014 data broker report, which recommended that Congress impose a variety of requirements on companies that legally collect and sell consumers’ personal information.  (Among other requirements, companies would be required to create consumer data “portals” and implement business procedures that allow consumers to edit and suppress use of their data.)  The FTC also is calling for legislation that would enhance its authority over data security standards and empower it to issue rules requiring companies to inform consumers of security breaches.

These recent regulatory initiatives are in addition to the Commission’s active consumer data enforcement efforts.  Some of these efforts are pursuant to three targeted statutory authorizations – the FTC’s Safeguards Rule (promulgated pursuant to the Gramm-Leach-Bliley Act and directed at non-bank financial institutions), the Fair Credit Reporting Act (directed at consumer reporting agencies), and the Children’s Online Privacy Protection Act (directed at children’s information collected online).

The bulk of the FTC’s enforcement efforts, however, stem from its general authority to proscribe unfair or deceptive practices under Section 5(a)(1) of the FTC Act.  Since 2002, pursuant to its Section 5 powers, the FTC has filed and settled over 50 cases alleging that private companies used deceptive or ineffective (and thus unfair) practices in storing their data.  (Twitter, LexisNexis, ChoicePoint, GMR Transcription Services, GeneLink, Inc., and mobile device provider HTC are just a few of the firms that have agreed to settle.)  Settlements have involved consent decrees under which the company in question agreed to take a wide variety of “corrective measures” to avoid future harm.

As a matter of first principles, one may question the desirability of FTC data security investigations under Section 5.  Firms have every incentive to avoid data protection breaches that harm their customers, in order to avoid the harm to reputation and business values that stem from such lapses.  At the same time, firms must weigh the costs of alternative data protection systems in determining what the appropriate degree of protection should be.  Economic logic indicates that the optimal business policy is not one that focuses solely on implementing the strongest possible data protection system without regard to cost.  Rather, the optimal policy is to invest in enhancing corporate data security up to the point where the marginal benefits of additional security equal the marginal costs, and no further.  Although individual businesses can only roughly approximate this outcome, one may expect that market forces will tend toward the optimal result, as firms that underinvest in data security lose customers and firms that overinvest in security find themselves priced out of the market.  There is no obvious “market failure” suggesting that the market should not work adequately in the data security area.  Indeed, there is a large (and growing) amount of information on security systems available to businesses, and a thriving labor market for IT security specialists to whom companies can turn in designing their security programs.  Nevertheless, it would be naive in the extreme to believe that the FTC will choose to abandon its efforts to apply Section 5 to this area.  With that in mind, let us examine more closely the problems with existing FTC Section 5 data security settlements, with an eye to determining what improvements the Commission might beneficially make if it is so inclined.
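The marginal condition described above can be sketched formally.  (This is a stylized illustration, not a claim about any firm’s actual cost structure: let $B(s)$ denote the expected losses from breaches avoided at security investment level $s$, and $C(s)$ the cost of providing that level, with the usual assumptions of diminishing marginal benefits and rising marginal costs.)  The firm’s problem and its first-order condition are:

```latex
% Stylized model of a firm's data security investment choice.
% B(s): expected breach losses avoided at security level s (B' > 0, B'' < 0)
% C(s): cost of providing security level s               (C' > 0, C'' > 0)
\max_{s \ge 0} \; B(s) - C(s)
\qquad \Longrightarrow \qquad
B'(s^{*}) = C'(s^{*})
```

Investment beyond $s^{*}$ costs more at the margin than the breaches it prevents are worth; investment short of $s^{*}$ leaves cost-justified protection on the table.  A “security by design” mandate that ignores $C(s)$ effectively pushes firms past $s^{*}$.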

The HTC settlement illustrates the breadth of decree-specific obligations the FTC has imposed.  HTC was required to “establish a comprehensive security program, undergo independent security assessments for 20 years, and develop and release software patches to fix security vulnerabilities.”  HTC also agreed to detailed security protocols that would be monitored by a third party.  The FTC did not cite specific harmful security breaches to justify these sanctions; HTC was merely charged with a failure to “take reasonable steps” to secure smartphone software.  Nor did the FTC explain what specific steps short of the decree requirements would have been deemed “reasonable.”

The HTC settlement exemplifies the FTC’s “security by design” approach to data security, under which the agency informs firms after the fact what they should have done, without exploring what they might have done to pass muster.  Although some academics view the FTC settlements as contributing usefully to a developing “common law” of data privacy, supporters of this approach ignore its inherent ex ante vagueness and the costs decree-specific mandates impose on companies.

Another serious problem stems from the enormous investigative and litigation costs associated with challenging an FTC complaint in this area – costs that incentivize most firms to quickly accede to consent decree terms even if they are onerous.  The sad case of LabMD, a small cancer detection lab, serves as a warning to businesses that choose to engage in long-term administrative litigation against the FTC.  Due to the cost burden of the FTC’s multi-year litigation against it (which is still ongoing as of this writing), LabMD was forced to wind down its operations, and it stopped accepting new patients in January 2014.

The LabMD case suggests that FTC data security initiatives, carried out without regard to the scale or resources of the affected companies, have the potential to harm competition.  Relatively large companies are much better able to absorb FTC litigation and investigation costs.  Thus, it may be in the large firms’ interests to encourage the FTC to adopt intrusive and burdensome new data security initiatives, as part of a “raising rivals’ costs” strategy to cripple or eliminate smaller rivals.  As a competition and consumer welfare watchdog, the FTC should keep this risk in mind when weighing the merits of expanding data security regulations or launching new data security investigations.

A common thread runs through the FTC’s myriad activities in data privacy “space” – the FTC’s failure to address whether its actions are cost-beneficial.  There is little doubt that the FTC’s enforcement actions impose substantial costs, both on businesses subject to decree and investigation, and on other firms possessing data that must contemplate business system redesigns to forestall potential future liability.  As a result, business innovation suffers.  Furthermore, those costs are passed on at least in part to consumers, in the form of higher prices and a reduction in the quality and quantity of new products and services.  The FTC should, consistent with its consumer welfare mandate, carefully weigh these costs against the presumed benefits flowing from a reduction in future data breaches.  A failure to carry out a cost-benefit appraisal, even a rudimentary one, makes it impossible to determine whether the FTC’s much touted data privacy projects are enhancing or reducing consumer welfare.

FTC Commissioner Josh Wright recently gave voice to the importance of cost-benefit analysis in commenting on the FTC’s data brokerage report – a comment that applies equally well to all of the FTC’s data protection and privacy initiatives:

“I would . . . like to see evidence of the incidence and scope of consumer harms rather than just speculative hypotheticals about how consumers might be harmed before regulation aimed at reducing those harms is implemented.  Accordingly, the FTC would need to quantify more definitively the incidence or value of data broker practices to consumers before taking or endorsing regulatory or legislative action. . . .  We have no idea what the costs for businesses would be to implement consumer control over any and all data shared by data brokers and to what extent these costs would ultimately be passed on to consumers.  Once again, a critical safeguard to insure against the risk that our recommendations and actions do more harm than good for consumers is to require appropriate and thorough cost-benefit analysis before acting.  This failure could be especially important where the costs to businesses from complying with any recommendations are high, but where the ultimate benefit generated for consumers is minimal. . . .  If consumers have minimal concerns about the sharing of certain types of information – perhaps information that is already publicly available – I think we should know that before requiring data brokers to alter their practices and expend resources and incur costs that will be passed on to consumers.”

The FTC could take several actions to improve its data enforcement policies.  First and foremost, it could issue Data Security Guidelines clarifying that its enforcement actions regarding data security will (1) be rooted in cost-benefit analysis, (2) take into account investigative costs, and (3) credit reasonable industry self-regulatory efforts.  (Such Guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses.  They should studiously avoid dictating to industry the data security principles that firms should adopt.)  Second, it could establish an FTC website portal that features continuously updated information on the Guidelines and other sources of guidance on data security.  Third, it could employ cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection.  Fourth, it could urge its foreign counterpart agencies to adopt similar cost-benefit approaches to data security regulation.

Congress could also improve the situation by enacting a narrowly tailored statute that preempts all state regulation related to data protection.  Forty-seven states now have legislation in this area, which adds additional burdens to those already imposed by federal law.  Furthermore, differences among state laws render the data protection efforts of merchants who may have to safeguard data from across the country enormously complex and onerous.  Given the inherently interstate nature of electronic commerce and associated data breaches, preemption of state regulation in this area would comport with federalism principles.  (Consistent with public choice realities, there is always the risk, of course, that Congress might be tempted to go beyond narrow preemption and create new and unnecessary federal powers in this area.  I believe, however, that such a risk is worth running, given the potential magnitude of excessive regulatory burdens, and the ability to articulate a persuasive public policy case for narrow preemptive legislation.)

Stay tuned for a fuller discussion of these issues by me.

The Federal Trade Commission’s recent enforcement actions against Amazon and Apple raise important questions about the FTC’s consumer protection practices, especially its use of economics. How does the Commission weigh the costs and benefits of its enforcement decisions? How does the agency employ economic analysis in digital consumer protection cases generally?

Join the International Center for Law and Economics and TechFreedom on Thursday, July 31 at the Woolly Mammoth Theatre Company for a lunch and panel discussion on these important issues, featuring FTC Commissioner Joshua Wright, Director of the FTC’s Bureau of Economics Martin Gaynor, and several former FTC officials. RSVP here.

Commissioner Wright will present a keynote address discussing his dissent in Apple and his approach to applying economics in consumer protection cases generally.

Geoffrey Manne, Executive Director of ICLE, will briefly discuss his recent paper on the role of economics in the FTC’s consumer protection enforcement. Berin Szoka, TechFreedom President, will moderate a panel discussion featuring:

  • Martin Gaynor, Director, FTC Bureau of Economics
  • David Balto, Fmr. Deputy Assistant Director for Policy & Coordination, FTC Bureau of Competition
  • Howard Beales, Fmr. Director, FTC Bureau of Consumer Protection
  • James Cooper, Fmr. Acting Director & Fmr. Deputy Director, FTC Office of Policy Planning
  • Pauline Ippolito, Fmr. Acting Director & Fmr. Deputy Director, FTC Bureau of Economics

Background

The FTC recently issued a complaint and consent order against Apple, alleging its in-app purchasing design doesn’t meet the Commission’s standards of fairness. The action and resulting settlement drew a forceful dissent from Commissioner Wright, and sparked a discussion among the Commissioners about balancing economic harms and benefits in Section 5 unfairness jurisprudence. More recently, the FTC brought a similar action against Amazon, which is now pending in federal district court because Amazon refused to settle.

Event Info

The “FTC: Technology and Reform” project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology. The Project’s initial report, released in December 2013, identified critical questions facing the agency, Congress, and the courts about the FTC’s future, and proposed a framework for addressing them.

The event will be live streamed here beginning at 12:15pm. Join the conversation on Twitter with the #FTCReform hashtag.

When:

Thursday, July 31
11:45 am – 12:15 pm — Lunch and registration
12:15 pm – 2:00 pm — Keynote address, paper presentation & panel discussion

Where:

Woolly Mammoth Theatre Company – Rehearsal Hall
641 D St NW
Washington, DC 20004

Questions? – Email mail@techfreedom.org. RSVP here.

See ICLE’s and TechFreedom’s other work on FTC reform, including:

  • Geoffrey Manne’s Congressional testimony on the FTC@100
  • Op-ed by Berin Szoka and Geoffrey Manne, “The Second Century of the Federal Trade Commission”
  • Two posts by Geoffrey Manne on the FTC’s Amazon Complaint, here and here.

About The International Center for Law and Economics:

The International Center for Law and Economics is a non-profit, non-partisan research center aimed at fostering rigorous policy analysis and evidence-based regulation.

About TechFreedom:

TechFreedom is a non-profit, non-partisan technology policy think tank. We work to chart a path forward for policymakers towards a bright future where technology enhances freedom, and freedom enhances technology.

U.S. antitrust law focuses primarily on private anticompetitive restraints, leaving the most serious impediments to a vibrant competitive process – government-initiated restraints – relatively free to flourish.  Thus, the Federal Trade Commission (FTC) should be commended for its July 16 congressional testimony that spotlights a fast-growing and particularly pernicious species of (largely state) government restriction on competition – occupational licensing requirements.  Today such practitioners (to name just a few) as cat groomers, flower arrangers, music therapists, tree trimmers, frozen dessert retailers, eyebrow threaders, massage therapists (human and equine), and “shampoo specialists,” in addition to the traditional categories of doctors, lawyers, and accountants, are subject to professional licensure.  Indeed, since the 1950s, the coverage of such rules has expanded dramatically, as the percentage of Americans requiring government authorization to do their jobs has risen from less than five percent to roughly 30 percent.

Even though some degree of licensing responds to legitimate health and safety concerns (e.g., no fly-by-night heart surgeons), much occupational regulation creates unnecessary barriers to entry into a host of jobs.  Excessive licensing confers unwarranted benefits on fortunate incumbents, while effectively barring large numbers of capable individuals from the workforce.  (For example, many individuals skilled in natural hair braiding simply cannot afford the 2,100 hours required to obtain a license in Iowa, Nebraska, and South Dakota.)  It also imposes additional economic harms, as the FTC’s testimony explains:  “[Occupational licensure] regulations may lead to higher prices, lower quality services and products, and less convenience for consumers.  In the long term, they can cause lasting damage to competition and the competitive process by rendering markets less responsive to consumer demand and by dampening incentives for innovation in products, services, and business models.”  Licensing requirements are often enacted in tandem with other occupational regulations that unjustifiably limit the scope of beneficial services particular professionals can supply – for instance, a ban on tooth cleaning by dental hygienists not acting under a dentist’s supervision that boosts dentists’ income but denies treatment to poor children who have no access to dentists.

What legal and policy tools are available to chip away at these pernicious and costly laws and regulations, which largely are the fruit of successful special interest lobbying?  The FTC’s competition advocacy program, which responds to requests from legislators and regulators to assess the economic merits of proposed laws and regulations, has focused on unwarranted regulatory restrictions in such licensed professions as real estate brokers, electricians, accountants, lawyers, dentists, dental hygienists, nurses, eye doctors, opticians, and veterinarians.  Retrospective reviews of FTC advocacy efforts suggest it may have helped achieve some notable reforms (for example, 74% of requestors, regulators, and bill sponsors surveyed responded that FTC advocacy initiatives influenced outcomes).  Nevertheless, advocacy’s reach and effectiveness inherently are limited by FTC resource constraints, by the need to obtain “invitations” to submit comments, and by the incentive and ability of licensing scheme beneficiaries to oppose regulatory and legislative reforms.

Former FTC Chairman Kovacic and James Cooper (currently at George Mason University’s Law and Economics Center) have suggested that federal and state antitrust experts could be authorized to have ex ante input into regulatory policy making.  As the authors recognize, however, several factors sharply limit the effectiveness of such an initiative.  In particular, “the political feasibility of this approach at the legislative level is slight”, federal mandates requiring ex ante reviews would raise serious federalism concerns, and resource constraints would loom large.

Antitrust law challenges to anticompetitive licensing schemes likewise offer little solace.  They are limited by the antitrust “state action” doctrine, which shields conduct undertaken pursuant to “clearly articulated” state legislative language that displaces competition – a category that generally will cover anticompetitive licensing requirements.  Even a Supreme Court decision next term (in North Carolina Dental v. FTC) that state regulatory boards dominated by self-interested market participants must be actively supervised to enjoy state action immunity would have relatively little bite.  It would not limit states from issuing simple statutory commands that create unwarranted occupational barriers, nor would it prevent states from implementing “adequate” supervisory schemes that are designed to approve anticompetitive state board rules.

What then is to be done?

Constitutional challenges to unjustifiable licensing strictures may offer the best long-term solution to curbing this regulatory epidemic.  As Clark Neily points out in Terms of Engagement, there is a venerable constitutional tradition of protecting the liberty interest to earn a living, reflected in well-reasoned late 19th and early 20th century “Lochner-era” Supreme Court opinions.  Even if Lochner is not rehabilitated, however, there are a few recent jurisprudential “straws in the wind” that support efforts to rein in “irrational” occupational licensure barriers.  Perhaps acting under divine inspiration, the Fifth Circuit in St. Joseph Abbey (2013) ruled that Louisiana statutes that required all casket manufacturers to be licensed funeral directors – laws that prevented monks from earning a living by making simple wooden caskets – served no other purpose than to protect the funeral industry, and, as such, violated the 14th Amendment’s Equal Protection and Due Process Clauses.  In particular, the Fifth Circuit held that protectionism, standing alone, is not a legitimate state interest sufficient to establish a “rational basis” for a state statute, and that absent other legitimate state interests, the law must fall.  Since the Sixth and Ninth Circuits also have held that intrastate protectionism standing alone is not a legitimate purpose for rational basis review, but the Tenth Circuit has held to the contrary, the time may soon be ripe for the Supreme Court to review this issue and, hopefully, delegitimize pure economic protectionism.  Such a development would place added pressure on defenders of protectionist occupational licensing schemes.  Other possible avenues for constitutional challenges to protectionist licensing regimes (perhaps, for example, under the Dormant Commerce Clause) also merit being explored, of course.  
The Institute for Justice already is performing yeoman’s work in litigating numerous cases involving unjustified licensing and other encroachments on economic liberty; perhaps its example can prove an inspiration for pro bono efforts by others.

Eliminating anticompetitive occupational licensing rules – and, more generally, vindicating economic liberties that too long have been neglected – is obviously a long-term project, and far-reaching reform will not happen in the near term.  Nevertheless, while we the currently living may in the long run be dead (pace Keynes), our posterity will be alive, and we owe it to them to pursue the vindication of economic liberties under the Constitution.