
This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and sometimes even violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Perhaps nowhere has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets for national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, the number was further reduced to four markets, all wholesale markets, that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, such as Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, such as fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

Like in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties between four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify its decision. The EU placed net neutrality under the universal service and user’s rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, competition law, grounded in economic principles, brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined in the context of existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. The wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

The FTC will hold an “Informational Injury Workshop” in December “to examine consumer injury in the context of privacy and data security.” Defining the scope of cognizable harm that may result from the unauthorized use or third-party hacking of consumer information is, to be sure, a crucial inquiry, particularly as ever more information is stored digitally. But the Commission — rightly — is aiming at more than mere definition. As it notes, the ultimate objective of the workshop is to address questions like:

How do businesses evaluate the benefits, costs, and risks of collecting and using information in light of potential injuries? How do they make tradeoffs? How do they assess the risks of different kinds of data breach? What market and legal incentives do they face, and how do these incentives affect their decisions?

How do consumers perceive and evaluate the benefits, costs, and risks of sharing information in light of potential injuries? What obstacles do they face in conducting such an evaluation? How do they evaluate tradeoffs?

Understanding how businesses and consumers assess the risk and cost “when information about [consumers] is misused,” and how they conform their conduct to that risk, entails understanding not only the scope of the potential harm, but also the extent to which conduct affects the risk of harm. This, in turn, requires an understanding of the FTC’s approach to evaluating liability under Section 5 of the FTC Act.

The problem, as we discuss in comments submitted by the International Center for Law & Economics to the FTC for the workshop, is that the Commission’s current approach troublingly mixes the required separate analyses of risk and harm, with little elucidation of either.

The core of the problem arises from the Commission’s reliance on what it calls a “reasonableness” standard for its evaluation of data security. By its nature, a standard that assigns liability for only unreasonable conduct should incorporate concepts resembling those of a common law negligence analysis — e.g., establishing a standard of due care, determining causation, evaluating the costs and benefits of conduct that would mitigate the risk of harm, etc. Unfortunately, the Commission’s approach to reasonableness diverges from the rigor of a negligence analysis. In fact, as it has developed, it operates more like a strict liability regime in which largely inscrutable prosecutorial discretion determines which conduct, which firms, and which outcomes will give rise to liability.

Most troublingly, coupled with the Commission’s untenably lax (read: virtually nonexistent) evidentiary standards, the extremely liberal notion of causation embodied in its “reasonableness” approach means that the mere storage of personal information, even absent any data breach, could amount to an unfair practice under the Act — clearly not a “reasonable” result.

The notion that a breach itself can constitute injury will, we hope, be taken up during the workshop. But even if injury is limited to a particular type of breach — say, one in which sensitive, personal information is exposed to a wide swath of people — unless the Commission’s definition of what it means for conduct to be “likely to cause” harm is fixed, it will virtually always be the case that storage of personal information could conceivably lead to the kind of breach that constitutes injury. In other words, better defining the scope of injury does little to cabin the scope of the agency’s discretion when conduct creating any risk of that injury is actionable.

Our comments elaborate on these issues and also offer our thoughts on how the subjective nature of informational injuries can fit into Section 5, with a particular focus on the problem of assessing informational injury given evolving social context, and on the need to appropriately assess benefits in any cost-benefit analysis of conduct leading to informational injury.

ICLE’s full comments are available here.

The comments draw upon our article, When ‘Reasonable’ Isn’t: The FTC’s Standard-Less Data Security Standard, forthcoming in the Journal of Law, Economics and Policy.

On July 24, as part of their newly announced “Better Deal” campaign, congressional Democrats released an antitrust proposal (“Better Deal Antitrust Proposal” or BDAP) entitled “Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power.”  Unfortunately, this antitrust tract is really an “Old Deal” screed that rehashes long-discredited ideas about “bigness is badness” and “corporate abuses,” untethered from serious economic analysis.  (In spirit it echoes the proposal for a renewed emphasis on “fairness” in antitrust made by then Acting Assistant Attorney General Renata Hesse in 2016 – a recommendation that ran counter to sound economics, as I explained in a September 2016 Truth on the Market commentary.)  Implementation of the BDAP’s recommendations would be a “worse deal” for American consumers and for American economic vitality and growth.

The BDAP’s Portrayal of the State of Antitrust Enforcement is Factually Inaccurate, and it Ignores the Real Problems of Crony Capitalism and Regulatory Overreach

The Better Deal Antitrust Proposal begins with the assertion that antitrust has failed in recent decades:

Over the past thirty years, growing corporate influence and consolidation has led to reductions in competition, choice for consumers, and bargaining power for workers.  The extensive concentration of power in the hands of a few corporations hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors.  It means higher prices and less choice for the things the American people buy every day. . .  [This is because] [o]ver the last thirty years, courts and permissive regulators have allowed large companies to get larger, resulting in higher prices and limited consumer choice in daily expenses such as travel, cable, and food and beverages.  And because concentrated market power leads to concentrated political power, these companies deploy armies of lobbyists to increase their stranglehold on Washington.  A Better Deal on competition means that we will revisit our antitrust laws to ensure that the economic freedom of all Americans—consumers, workers, and small businesses—come before big corporations that are getting even bigger.

This statement’s assertions are curious (not to mention problematic) in multiple respects.

First, since Democratic administrations have held the White House for sixteen of the past thirty years, the BDAP appears to acknowledge that Democratic presidents have overseen a failed antitrust policy.

Second, the broad claim that consumers have faced higher prices and limited consumer choice with regard to their daily expenses is baseless.  Indeed, internet commerce and new business models have sharply reduced travel and entertainment costs for the bulk of American consumers, and new “high technology” products such as smartphones and electronic games have been characterized by dramatic improvements in innovation, enhanced variety, and relatively lower costs.  Cable suppliers face vibrant competition from competitive satellite providers, fiberoptic cable suppliers (the major telcos such as Verizon), and new online methods for distributing content.  Consumer price inflation has been extremely low in recent decades, compared to the high inflationary, less innovative environment of the 1960s and 1970s – decades when federal antitrust law was applied much more vigorously.  Thus, the claim that weaker antitrust has denied consumers “economic freedom” is at war with the truth.

Third, the claim that recent decades have seen the creation of “concentrated market power,” safe from antitrust challenge, ignores the fact that, over the last three decades, apolitical government antitrust officials under both Democratic and Republican administrations have applied well-accepted economic tools (wielded by the scores of Ph.D. economists in the Justice Department and Federal Trade Commission) in enforcing the antitrust laws.  Antitrust analysis has used economics to focus on inefficient business conduct that would maintain or increase market power, and large numbers of cartels have been prosecuted and questionable mergers (including a variety of major health care and communications industry mergers) have been successfully challenged.  The alleged growth of “concentrated market power,” untouched by incompetent antitrust enforcers, is a myth.  Furthermore, claims that mere corporate size and “aggregate concentration” are grounds for antitrust concern (“big is bad”) were decisively rejected by empirical economic research published in the 1970s, and are no more convincing today.  (As I pointed out in a January 2017 blog posting at this site, recent research by highly respected economists debunks a few claims that federal antitrust enforcers have been “excessively tolerant” of late in analyzing proposed mergers.)

More interesting is the BDAP’s claim that “armies of [corporate] lobbyists” manage to “increase their stranglehold on Washington.”  This is not an antitrust concern, however, but, rather, a complaint against crony capitalism and overregulation, which became an ever more serious problem under the Obama Administration.  As I explained in my October 2016 critique of the American Antitrust Institute’s September 2008 National Competition Policy Report (a Report which is very similar in tone to the BDAP), the rapid growth of excessive regulation during the Obama years has diminished competition by creating new regulatory schemes that benefit entrenched and powerful firms (such as Dodd-Frank Act banking rules that impose excessive burdens on smaller banks).  My critique emphasized that, “as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large and wealthy well-connected rent-seekers at the expense of smaller and more dynamic competitors.”  And, more generally, excessive regulatory burdens undermine the competitive process, by distorting business decisions in a manner that detracts from competition on the merits.

It follows that, if the BDAP really wanted to challenge “unfair” corporate advantages, it would seek to roll back excessive regulation (see my November 2016 article on Trump Administration competition policy).  Indeed, the Trump Administration’s regulatory reform program (which features agency-specific regulatory reform task forces) seeks to do just that.  Perhaps then the BDAP could be rewritten to focus on endorsing President Trump’s regulatory reform initiative, rather than emphasizing a meritless “big is bad” populist antitrust policy that was consigned to the enforcement dustbin decades ago.

The BDAP’s Specific Proposals Would Harm the Economy and Reduce Consumer Welfare

Unfortunately, the BDAP does more than wax nostalgic about old-time “big is bad” antitrust policy.  It affirmatively recommends policy changes that would harm the economy.

First, the BDAP would require “a broader, longer-term view and strong presumptions that market concentration can result in anticompetitive conduct.”  Specifically, it would create “new standards to limit large mergers that unfairly consolidate corporate power,” including “mergers [that] reduce wages, cut jobs, lower product quality, limit access to services, stifle innovation, or hinder the ability of small businesses and entrepreneurs to compete.”  New standards would also “explicitly consider the ways in which control of consumer data can be used to stifle competition or jeopardize consumer privacy.”

Unlike current merger policy, which evaluates likely competitive effects on price and quality in economically relevant markets, these new standards are open-ended.  They could justify challenges based on such a wide variety of factors that they would incentivize direct competitors not to merge, even in cases where the proposed merged entity would prove more efficient and able to enhance quality or innovation.  Certain less efficient competitors – say small businesses – could argue that they would be driven out of business, or that some jobs in the industry would disappear, in order to prompt government challenges.  But such challenges would tend to undermine innovation and business improvements, and the inevitable redistribution of assets to higher-valued uses that is a key benefit of corporate reorganizations and acquisitions.  (Mergers might focus instead, for example, on inefficient conglomerate acquisitions among companies in unrelated industries, which were incentivized by the overly strict 1960s rules that prohibited mergers among direct competitors.)  Such a change would represent a retreat from economic common sense, and be at odds with the consensus, economically sound merger enforcement guidance that U.S. enforcers have long recommended other countries adopt.  Furthermore, questions of consumer data and privacy are more appropriately dealt with as consumer protection questions, which the Federal Trade Commission has handled successfully for years.

Second, the BDAP would require “frequent, independent [after-the-fact] reviews of mergers” and require regulators “to take corrective measures if they find abusive monopolistic conditions where previously approved [consent decree] measures fail to make good on their intended outcomes.”

While high profile mergers subject to significant divestiture or other remedial requirements have in appropriate circumstances included monitoring requirements, the tone of this recommendation is to require that far more mergers be subjected to detailed and ongoing post-acquisition reviews.  The cost of such monitoring is substantial, however, and routine reliance on it (backed by the threat of additional enforcement actions based merely on changing economic conditions) could create excessive caution in the post-merger management of newly-consolidated enterprises.  Indeed, potential merged parties might decide in close cases that this sort of oversight is not worth accepting, and therefore call off potentially efficient transactions that would have enhanced economic welfare.  (The reality of enforcement error cost, and the possibility of misdiagnosis of post-merger competitive conditions, is not acknowledged by the BDAP.)

Third, a newly created “competition advocate” independent of the existing federal antitrust enforcers would be empowered to publicly recommend investigations, with the enforcers required to justify publicly why they chose not to pursue a particular recommended investigation.  The advocate would ensure that antitrust enforcers are held “accountable,” assure that complaints about “market exploitation and anticompetitive conduct” are heard, and publish data on “concentration and abuses of economic power” with demographic breakdowns.

This third proposal is particularly egregious.  It is at odds with the long tradition of prosecutorial discretion that has been enjoyed by the federal antitrust enforcers (and law enforcers in general).  It would also empower a special interest intervenor to promote the complaints of interest groups that object to efficiency-seeking business conduct, thereby undermining the careful economic and legal analysis that is consistently employed by the expert antitrust agencies.  The references to “concentration” and “economic power” make clear that the “advocate” would have an untrammeled ability to highlight non-economic objections to transactions raised by inefficient competitors, jealous rivals, or self-styled populists who object to excessive “bigness.”  This would strike at the heart of our competitive process, which presumes that private parties will be allowed to fulfill their own goals, free from government micromanagement, absent indications of a clear and well-defined violation of law.  In sum, the “competition advocate” is better viewed as a “special interest” advocate empowered to ignore normal legal constraints and unjustifiably interfere in business transactions.  If empowered to operate freely, such an advocate would undoubtedly chill a wide variety of business arrangements, to the detriment of consumers and economic innovation.

Finally, the BDAP refers to a variety of ills that are said to affect specific named industries, in particular airlines, cable/telecom, beer, food prices, and eyeglasses.  Airlines are subject to a variety of capacity limitations (limitations on landing slots and the size/number of airports) and regulatory constraints (prohibitions on foreign entry or investment) that may affect competitive conditions, but airline mergers are closely reviewed by the Justice Department.  Cable and telecom companies face a variety of federal, state, and local regulations, and their mergers also are closely scrutinized.  The BDAP’s reference to the proposed AT&T/Time Warner merger ignores the potential efficiencies of this “vertical” arrangement involving complementary assets (see my coauthored commentary here), and resorts to unsupported claims about wrongful “discrimination” by “behemoths” – issues that in any event are examined in antitrust merger reviews.  Unsupported claims of harm to competition and consumer choice are thrown out in the references to beer and agrochemical mergers, which also receive close economically-focused merger scrutiny under existing law.  Concerns raised about the price of eyeglasses ignore the role of potentially anticompetitive regulation – that is, bad government – in harming consumer welfare in this sector.  In short, the alleged competitive “problems” the BDAP raises with respect to particular industries are no more compelling than the rest of its analysis.  The Justice Department and Federal Trade Commission are hard at work applying sound economics to these sectors.  They should be left to do their jobs, and the BDAP’s industry-specific commentary (sadly, like the rest of its commentary) should be accorded no weight.

Conclusion

Congressional Democrats would be well-advised to ditch their efforts to resurrect the counterproductive antitrust policy from days of yore, and instead focus on real economic problems, such as excessive and inappropriate government regulation, as well as weak protection for U.S. intellectual property rights, here and abroad (see here, for example).  Such a change in emphasis would redound to the benefit of American consumers and producers.

On July 10, the Consumer Financial Protection Bureau (CFPB) announced a new rule to ban financial service providers, such as banks or credit card companies, from using mandatory arbitration clauses to deny consumers the opportunity to participate in a class action (“Arbitration Rule”).  The Arbitration Rule’s summary explains:

First, the final rule prohibits covered providers of certain consumer financial products and services from using an agreement with a consumer that provides for arbitration of any future dispute between the parties to bar the consumer from filing or participating in a class action concerning the covered consumer financial product or service. Second, the final rule requires covered providers that are involved in an arbitration pursuant to a pre-dispute arbitration agreement to submit specified arbitral records to the Bureau and also to submit specified court records. The Bureau is also adopting official interpretations to the regulation.

The Arbitration Rule’s effective date is 60 days following its publication in the Federal Register (which is imminent), and it applies to contracts entered into more than 180 days after that.

Cutting through the hyperbole that the Arbitration Rule protects consumers from “unfairness” that would deny them “their day in court,” this Rule is in fact highly anti-consumer and harmful to innovation.  As Competitive Enterprise Institute Senior Fellow John Berlau put it, in promulgating this Rule, “[t]he CFPB has disregarded vast data showing that arbitration more often compensates consumers for damages faster and grants them larger awards than do class action lawsuits. This regulation could have particularly harmful effects on FinTech innovations, such as peer-to-peer lending.”  Moreover, in a coauthored paper, Professors Jason Johnston of the University of Virginia Law School and Todd Zywicki of the Scalia Law School debunked a CFPB study that sought to justify the agency’s plans to issue the Arbitration Rule.  They concluded:

The CFPB’s [own] findings show that arbitration is relatively fair and successful at resolving a range of disputes between consumers and providers of consumer financial products, and that regulatory efforts to limit the use of arbitration will likely leave consumers worse off . . . .  Moreover, owing to flaws in the report’s design and a lack of information, the report should not be used as the basis for any legislative or regulatory proposal to limit the use of consumer arbitration.    

Unfortunately, the Arbitration Rule is just the latest of many costly regulatory outrages perpetrated by the CFPB, an unaccountable bureaucracy that offends the Constitution’s separation of powers and should be eliminated by Congress, as I explained in a 2016 Heritage Foundation report.

Legislative elimination of an agency, however, takes time.  Fortunately, in the near term, Congress can apply the Congressional Review Act (CRA) to prevent the Arbitration Rule from taking effect, and to block the CFPB from passing rules similar to it in the future.

As Heritage Senior Legal Fellow Paul Larkin has explained:

[The CRA is] Congress’s most recent effort to trim the excesses of the modern administrative state.  The act requires the executive branch to report every “rule” — a term that includes not only the regulations an agency promulgates, but also its interpretations of the agency’s governing laws — to the Senate and House of Representatives so that each chamber can schedule an up-or-down vote on the rule under the statute’s fast-track procedure.  The act was designed to enable Congress expeditiously to overturn agency regulations by avoiding the delays occasioned by the Senate’s filibuster rules and practices while also satisfying the [U.S. Constitution’s] Article I Bicameralism and Presentment requirements, which force the Congress and President to collaborate to enact, revise, or repeal a law.  Under the CRA, a joint resolution of disapproval signed into law by the President invalidates the rule and bars an agency from thereafter adopting any substantially similar rule absent a new act of Congress.

Although the CRA was almost never invoked before 2017, in recent months it has been used extensively as a tool by Congress and the Trump Administration to roll back specific manifestations of Obama Administration regulatory overreach (for example, see here and here).

Application of the CRA to expunge the Arbitration Rule (and any future variations on it) would benefit consumers, financial services innovation, and the overall economy.  Senator Tom Cotton has already gotten the ball rolling to repeal that Rule.  Let us hope that Congress follows his lead and acts promptly.

Today, the International Center for Law & Economics (ICLE) released a study updating our 2014 analysis of the economic effects of the Durbin Amendment to the Dodd-Frank Act.

The new paper, Unreasonable and Disproportionate: How the Durbin Amendment Harms Poorer Americans and Small Businesses, by ICLE scholars Todd J. Zywicki, Geoffrey A. Manne, and Julian Morris, can be found here; a Fact Sheet highlighting the paper’s key findings is available here.

Introduced as part of the Dodd-Frank Act in 2010, the Durbin Amendment sought to reduce the interchange fees assessed by large banks on debit card transactions. In the words of its primary sponsor, Sen. Richard Durbin, the Amendment aspired to help “every single Main Street business that accepts debit cards keep more of their money, which is a savings they can pass on to their consumers.”

Unfortunately, although the Durbin Amendment did generate benefits for big-box retailers, ICLE’s 2014 analysis found that it had actually harmed many other merchants and imposed substantial net costs on the majority of consumers, especially those from lower-income households.

In the current study, we analyze a welter of new evidence and arguments to assess whether time has ameliorated or exacerbated the Amendment’s effects. Our findings in this report expand upon and reinforce our findings from 2014:

Relative to the period before the Durbin Amendment, almost every segment of the interrelated retail, banking, and consumer finance markets has been made worse off as a result of the Amendment.

Predictably, the removal of billions of dollars in interchange fee revenue has led to the imposition of higher bank fees and reduced services for banking consumers.

In fact, millions of households, regardless of income level, have been adversely affected by the Durbin Amendment through higher overdraft fees, increased minimum balances, reduced access to free checking, higher ATM fees, and lost debit card rewards, among other things.

Nor is there any evidence that merchants have lowered prices for retail consumers; for many small-ticket items, in fact, prices have been driven up.

Contrary to Sen. Durbin’s promises, in other words, increased banking costs have not been offset by lower retail prices.

At the same time, although large merchants continue to reap a Durbin Amendment windfall, there remains no evidence that small merchants have realized any interchange cost savings — indeed, many have suffered cost increases.

And all of these effects fall hardest on the poor. Hundreds of thousands of low-income households have chosen (or been forced) to exit the banking system, with the result that they face higher costs, difficulty obtaining credit, and complications receiving and making payments — all without offset in the form of lower retail prices.

Finally, the 2017 study also details a new trend that was not apparent when we examined the data three years ago: Contrary to our findings then, the two-tier system of interchange fee regulation (which exempts issuing banks with under $10 billion in assets) no longer appears to be protecting smaller banks from the Durbin Amendment’s adverse effects.

This week the House begins consideration of the Amendment’s repeal as part of Rep. Hensarling’s CHOICE Act. Our study makes clear that the Durbin price-control experiment has proven a failure, and that repeal is, indeed, the only responsible option.

Click on the following links to read:

Full Paper

Fact Sheet

Summary

On February 22, 2017, an all-star panel at the Heritage Foundation discussed “Reawakening the Congressional Review Act” – a statute which gives Congress sixty legislative days to disapprove a proposed federal rule (subject to presidential veto), under an expedited review process not subject to Senate filibuster.  Until quite recently, the Congressional Review Act (CRA) was believed to apply only to newly promulgated regulations.  Thus, according to conventional wisdom, while the CRA might prove useful in blocking some non-cost-beneficial Obama Administration midnight regulations, it could not be invoked to attack serious regulatory agency overreach dating back many years.

Last week’s panel, however, demonstrated that conventional wisdom is no match for the careful textual analysis of laws – the sort of analysis that too often is given short shrift by commentators.  Applying straightforward statutory construction techniques, my Heritage colleague Paul Larkin argued persuasively that the CRA actually reaches back over 20 years to authorize congressional assessment of regulations that were not properly submitted to Congress.  Paul’s short February 15 article on the CRA (reprinted from The Daily Signal), intended for general public consumption, lays it all out, and merits being reproduced in its entirety:

In Washington, there is a saying that regulators never met a rule they didn’t like.  Federal agencies, commonly referred to these days as the “fourth branch of government,” have been binding the hands of the American people for decades with overreaching regulations. 

All the while, Congress sat idly by and let these agencies assume their new legislative role.  What if Congress could not only reverse this trend, but undo years of burdensome regulations dating as far back as the mid-1990s?  It turns out it can, with the Congressional Review Act. 

The Congressional Review Act is Congress’ most recent effort to trim the excesses of the modern administrative state.  Passed into law in 1996, the Congressional Review Act allows Congress to invalidate an agency rule by passing a joint resolution of disapproval, not subject to a Senate filibuster, that the president signs into law. 

Under the Congressional Review Act, Congress is given 60 legislative days to disapprove a rule and receive the president’s signature, after which the rule goes into effect.  But the review act also sets forth a specific procedure for submitting new rules to Congress that executive agencies must carefully follow. 

If they fail to follow these specific steps, Congress can vote to disapprove the rule even if it has long been accepted as part of the Federal Register.  In other words, if the agency failed to follow its obligations under the Congressional Review Act, the 60-day legislative window never officially started, and the rule remains subject to congressional disapproval. 

The legal basis for this becomes clear when we read the text of the Congressional Review Act. 

According to the statute, the period that Congress has to review a rule does not commence until the later of two events: either (1) the date when an agency publishes the rule in the Federal Register, or (2) the date when the agency submits the rule to Congress.

This means that if a currently published rule was never submitted to Congress, then the nonexistent “submission” qualifies as “the later” event, and the rule remains subject to congressional review.

This places dozens of rules going back to 1996 in the congressional crosshairs.

The definition of “rule” under the Congressional Review Act is quite broad—it includes not only the “junior varsity” statutes that an agency can adopt as regulations, but also the agency’s interpretations of those laws. This is vital because federal agencies often use a wide range of documents to strong-arm regulated parties.

The review act reaches regulations, guidance documents, “Dear Colleague” letters, and anything similar.

The Congressional Review Act is especially powerful because once Congress passes a joint resolution of disapproval and the president signs it into law, the rule is nullified and the agency cannot adopt a “substantially similar” rule absent an intervening act of Congress.

This binds the hands of federal agencies to find backdoor ways of re-imposing the same regulations.

The Congressional Review Act gives Congress ample room to void rules that it finds are mistaken.  Congress may find it to be an indispensable tool in its efforts to rein in government overreach.

Now that Congress has a president who is favorable to deregulation, lawmakers should seize this opportunity to find some of the most egregious regulations going back to 1996 that, under the Congressional Review Act, still remain subject to congressional disapproval.

In the coming days, my colleagues will provide some specific regulations that Congress should target.

For a fuller exposition of the CRA’s coverage, see Paul’s February 8 Heritage Foundation Legal Memorandum, “The Reach of the Congressional Review Act.”  Hopefully, Congress and the Trump Administration will take advantage of this newly-discovered legal weapon as they explore the most efficacious means to reduce the daunting economic burden of federal overregulation (for a subject matter-specific exploration of the nature and size of that burden, see the most recent Heritage Foundation “Red Tape Rising” report, here).

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters. Continue Reading…

Yesterday the Chairman and Ranking Member of the House Judiciary Committee issued the first set of policy proposals following their long-running copyright review process. These proposals were principally aimed at ensuring that the IT demands of the Copyright Office were properly met so that it could perform its assigned functions, and to provide adequate authority for it to adapt its policies and practices to the evolving needs of the digital age.

In response to these modest proposals, Public Knowledge issued a telling statement, calling for enhanced scrutiny of these proposals related to an agency “with a documented history of regulatory capture.”

The entirety of this “documented history,” however, is a paper published by Public Knowledge itself alleging regulatory capture—as evidenced by the fact that 13 people had either gone from the Copyright Office to copyright industries or vice versa over the past 20+ years. The original document was brilliantly skewered by David Newhoff in a post on the indispensable blog, Illusion of More:

To support its premise, Public Knowledge, with McCarthy-like righteousness, presents a list—a table of thirteen former or current employees of the Copyright Office who either have worked for private-sector, rights-holding organizations prior to working at the Office or who are now working for these private entities after their terms at the Office. That thirteen copyright attorneys over a 22-year period might be employed in some capacity for copyright owners is a rather unremarkable observation, but PK seems to think it’s a smoking gun…. Or, as one of the named thirteen, Steven Tepp, observes in his response, PK also didn’t bother to list the many other Copyright Office employees who, “went to Internet and tech companies, the Smithsonian, the FCC, and other places that no one would mistake for copyright industries.” One might almost get the idea that experienced copyright attorneys pursue various career paths or something.

Not content to rest on the laurels of its groundbreaking report of Original Sin, Public Knowledge has now doubled down on its audacity, using its own previous advocacy as the sole basis to essentially impugn an entire agency, without more. But, as advocacy goes, that’s pretty specious. Some will argue that there is an element of disingenuousness in all advocacy, even if it is as benign as failing to identify the weaknesses of one’s arguments—and perhaps that’s true. (We all cite our own work at one time or another, don’t we?) But that’s not the situation we have before us. Instead, Public Knowledge creates its own echo chamber, effectively citing only its own idiosyncratic policy preferences as the “documented” basis for new constraints on the Copyright Office. Even in a world of moral relativism, bubbles of information, and competing narratives about the truth, this should be recognizable as thin gruel.

So why would Public Knowledge expose itself in this manner? What is to be gained by seeking to impugn the integrity of the Copyright Office? There the answer is relatively transparent: PK hopes to capitalize on the opportunity to itself capture Copyright Office policy-making by limiting the discretion of the Copyright Office, and by turning it into an “objective referee” rather than the nation’s steward for ensuring the proper functioning of the copyright system.

PK claims that the Copyright Office should not be involved in making copyright policy, other than perhaps technically transcribing the agreements reached by other parties. Thus, in its “indictment” of the Copyright Office (which it now risibly refers to as the Copyright Office’s “documented history of capture”), PK wrote that:

These statements reflect the many specific examples, detailed in Section II, in which the Copyright Office has acted more as an advocate for rightsholder interests than an objective referee of copyright debates.

Essentially, PK seems to believe that copyright policy should be the province of self-proclaimed “consumer advocates” like PK itself—and under no circumstances the employees of the Copyright Office who might actually deign to promote the interests of the creative community. After all, it is staffed by a veritable cornucopia of copyright industry shills: According to PK’s report, fully 1 of its 400 employees has either left the office to work in the copyright industry or joined the office from industry roughly every 1.5 years! For reference (not that PK thinks to mention it), some 325 Google employees have worked in government offices in just the past 15 years. And Google is hardly alone in this. Good people get good jobs, whether in government, industry, or both. It’s hardly revelatory.

And never mind that the stated mission of the Copyright Office “is to promote creativity by administering and sustaining an effective national copyright system,” and that “the purpose of the copyright system has always been to promote creativity in society.” And never mind that Congress imbued the Office with the authority to make regulations (subject to approval by the Librarian of Congress) and directed the Copyright Office to engage in a number of policy-related functions, including:

  1. Advising Congress on national and international issues relating to copyright;
  2. Providing information and assistance to Federal departments and agencies and the Judiciary on national and international issues relating to copyright;
  3. Participating in meetings of international intergovernmental organizations and meetings with foreign government officials relating to copyright; and
  4. Conducting studies and programs regarding copyright.

No, according to Public Knowledge the Copyright Office is to do none of these things, unless it does so as an “objective referee of copyright debates.” But nowhere in the legislation creating the Office or amending its functions—nor anywhere else—is that limitation to be found; it’s just created out of whole cloth by PK.

The Copyright Office’s mission is not that of a content neutral referee. Rather, the Copyright Office is charged with promoting effective copyright protection. PK is welcome to solicit Congress to change the Copyright Act and the Office’s mandate. But impugning the agency for doing what it’s supposed to do is a deceptive way of going about it. PK effectively indicts and then convicts the Copyright Office for following its mission appropriately, suggesting that doing so could only have been the result of undue influence from copyright owners. But that’s manifestly false, given its purpose.

And make no mistake why: For its narrative to work, PK needs to define the Copyright Office as a neutral party, and show that its neutrality has been unduly compromised. Only then can Public Knowledge justify overhauling the office in its own image, under the guise of magnanimously returning it to its “proper,” neutral role.

Public Knowledge’s implication that it is a better defender of the “public” interest than those who actually serve in the public sector is a subterfuge, masking its real objective of transforming the nature of copyright law in its own, benighted image. A questionable means to a noble end, PK might argue. Not in our book. This story always turns out badly.

Last week, the Internet Association (“IA”) — a trade group representing some of America’s most dynamic and fastest growing tech companies, including the likes of Google, Facebook, Amazon, and eBay — presented the incoming Trump Administration with a ten page policy paper entitled “Policy Roadmap for New Administration, Congress.”

The document’s content is not surprising, given its source: It is, in essence, a summary of the trade association’s members’ preferred policy positions, none of which is new or newly relevant. Which is fine, in principle; lobbying on behalf of members is what trade associations do — although we should be somewhat skeptical of a policy document that purports to represent the broader social welfare while it advocates for members’ preferred policies.

Indeed, despite being labeled a “roadmap,” the paper is backward-looking in certain key respects — a fact that leads to some strange syntax: “[the document is a] roadmap of key policy areas that have allowed the internet to grow, thrive, and ensure its continued success and ability to create jobs throughout our economy” (emphasis added). Since when is a “roadmap” needed to identify past policies? Indeed, as Bloomberg News reporter Joshua Brustein wrote:

The document released Monday is notable in that the same list of priorities could have been sent to a President-elect Hillary Clinton, or written two years ago.

As a wishlist of industry preferences, this would also be fine, in principle. But as an ostensibly forward-looking document, aimed at guiding policy transition, the IA paper is disappointingly un-self-aware. Rather than delineating an agenda aimed at improving policies to promote productivity, economic development and social cohesion throughout the economy, the document is overly focused on preserving certain regulations adopted at the dawn of the Internet age (back when “internet” was still capitalized). Even more disappointing given the IA member companies’ central role in our contemporary lives, the document evinces no consideration of how Internet platforms themselves should strive to balance rights and responsibilities in new ways that promote meaningful internet freedom.

In short, the IA’s Roadmap constitutes a policy framework dutifully constructed to enable its members to maintain the status quo. While that might also serve to further some broader social aims, it’s difficult to see in the approach anything other than a defense of what got us here — not where we go from here.

To take one important example, the document reiterates the IA’s longstanding advocacy for the preservation of the online-intermediary safe harbors of the 20-year-old Digital Millennium Copyright Act (“DMCA”) — which were adopted during the era of dial-up, and before any of the principal members of the Internet Association even existed. At the same time, however, it proposes to reform one piece of legislation — the Electronic Communications Privacy Act (“ECPA”) — precisely because, at 30 years old, it has long since become hopelessly out of date. But surely if outdatedness is a justification for asserting the inappropriateness of existing privacy/surveillance legislation — as seems proper, given the massive technological and social changes surrounding privacy — the same concern should apply to copyright legislation with equal force, given the arguably even more substantial upheavals in the economic and social role of creative content in society today.

Of course there “is more certainty in reselling the past, than inventing the future,” but a truly valuable roadmap for the future from some of the most powerful and visionary companies in America should begin to tackle some of the most complicated and nuanced questions facing our country. It would be nice to see a Roadmap premised upon a well-articulated theory of accountability across all of the Internet ecosystem in ways that protect property, integrity, choice and other essential aspects of modern civil society.

Each of IA’s companies was principally founded on a vision of improving some aspect of the human condition; in many respects they have succeeded. But as society changes, even past successes may later become inconsistent with evolving social mores and economic conditions, necessitating thoughtful introspection and, often, policy revision. The IA can do better than pick and choose from among existing policies based on unilateral advantage and a convenient repudiation of responsibility.

Truth on the Market is delighted to welcome our newest blogger, Neil Turkewitz. Neil is the newly minted Senior Policy Counsel at the International Center for Law & Economics (so we welcome him to ICLE, as well!).

Prior to joining ICLE, Neil spent 30 years at the Recording Industry Association of America (RIAA), most recently as Executive Vice President, International.

Neil has spent most of his career working to expand economic opportunities for the music industry through modernization of copyright legislation and effective enforcement in global markets. He has worked closely with creative communities around the globe, with the US and foreign governments, and with international organizations (including WIPO and the WTO), to promote legal and enforcement reforms to respond to evolving technology, and to promote a balanced approach to digital trade and Internet governance premised upon the importance of regulatory coherence, elimination of inefficient barriers to global communications, and respect for Internet freedom and the rule of law.

Among other things, Neil was instrumental in the negotiation of the WTO TRIPS Agreement, worked closely with the US and foreign governments in the negotiation of free trade agreements, helped to develop the OECD’s Communique on Principles for Internet Policy Making, coordinated a global effort culminating in the production of the WIPO Internet Treaties, served as a formal advisor to the Secretary of Commerce and the USTR as Vice-Chairman of the Industry Trade Advisory Committee on Intellectual Property Rights, and served as a member of the Board of the Chamber of Commerce’s Global Intellectual Property Center.

You can read some of his thoughts on Internet governance, IP, and international trade here and here.

Welcome Neil!