It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.
But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of a market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)
Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.
An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.
In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:
Deals effectively with serious competitive problems; while at the same time
Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.
Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.
Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges, and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty in addition to public and private resources wasted on litigation.
Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication.
The Risks of Antitrust by Anecdote
The failure of a well-resourced federal regulator, and more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers posed by a conclusory approach toward antitrust enforcement that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates to support consideration of such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm.
The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.
While the point may seem uncontroversial, it is important to re-appreciate why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that provides the market with a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.
The Specter of No-Fault Antitrust Liability
The absence of any specific demonstration of market power suggests either deficient lawyering or an inability to gather supporting evidence. If the FTC litigation team is given the benefit of the doubt, the latter is the stronger possibility. If that is the case, it implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions by firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.
Even at the high-water point of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.
Remembering Why Market Power Matters
To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices. The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.
Perhaps market power will be shown sufficiently in Facebook’s case if the FTC elects to respond to the court’s invitation to resubmit its brief with a plausible definition of the relevant market and indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effect of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since regulatory decisions not to intervene in those transactions.
It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle.
This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.
There is little doubt that Federal Trade Commission (FTC) unfair methods of competition rulemaking proceedings are in the offing. Newly named FTC Chair Lina Khan and Commissioner Rohit Chopra both have extolled the benefits of competition rulemaking in a major law review article. What’s more, in May, Commissioner Rebecca Slaughter (during her stint as acting chair) established a rulemaking unit in the commission’s Office of General Counsel empowered to “explore new rulemakings to prohibit unfair or deceptive practices and unfair methods of competition” (emphasis added).
In short, a majority of sitting FTC commissioners apparently endorse competition rulemaking proceedings. As such, it is timely to ask whether FTC competition rules would promote consumer welfare, the paramount goal of competition policy.
In a recently published Mercatus Center research paper, I assess the case for competition rulemaking from a competition perspective and find it wanting. I conclude that, before proceeding, the FTC should carefully consider whether such rulemakings would be cost-beneficial. I explain that any cost-benefit appraisal should weigh both the legal risks and the potential economic policy concerns (error costs and “rule of law” harms). Based on these considerations, competition rulemaking is inappropriate. The FTC should stick with antitrust enforcement as its primary tool for strengthening the competitive process and thereby promoting consumer welfare.
A summary of my paper follows.
Legal Risks of Competition Rulemaking
Section 6(g) of the original Federal Trade Commission Act authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Section 6(g) rules are enacted pursuant to the “informal rulemaking” requirements of Section 553 of the Administrative Procedure Act (APA), which apply to the vast majority of federal agency rulemaking proceedings.
Before launching Section 6(g) competition rulemakings, however, the FTC would be well-advised first to weigh the legal risks and policy concerns associated with such an endeavor. Rulemakings are resource-intensive proceedings and should not lightly be undertaken without an eye to their feasibility and implications for FTC enforcement policy.
Only one appeals court decision addresses the scope of Section 6(g) rulemaking. In 1971, the FTC enacted a Section 6(g) rule stating that it was both an “unfair method of competition” and an “unfair act or practice” for refiners or others who sell to gasoline retailers “to fail to disclose clearly and conspicuously in a permanent manner on the pumps the minimum octane number or numbers of the motor gasoline being dispensed.” In 1973, in the National Petroleum Refiners case, the U.S. Court of Appeals for the D.C. Circuit upheld the FTC’s authority to promulgate this and other binding substantive rules. The court rejected the argument that Section 6(g) authorized only non-substantive regulations regarding the FTC’s non-adjudicatory, investigative, and informative functions, spelled out elsewhere in Section 6.
In 1975, two years after National Petroleum Refiners was decided, Congress granted the FTC specific consumer-protection rulemaking authority (authorizing enactment of trade regulation rules dealing with unfair or deceptive acts or practices) through Section 202 of the Magnuson-Moss Warranty Act, which added Section 18 to the FTC Act. Magnuson-Moss rulemakings impose adjudicatory-type hearings and other specific requirements on the FTC, unlike the more flexible Section 6(g) APA informal rulemakings. However, the FTC can obtain civil penalties for violation of Magnuson-Moss rules, something it cannot do if Section 6(g) rules are violated.
In a recent set of public comments filed with the FTC, the Antitrust Section of the American Bar Association stated:
[T]he Commission’s [6(g)] rulemaking authority is buried within an enumerated list of investigative powers, such as the power to require reports from corporations and partnerships, for example. Furthermore, the [FTC] Act fails to provide any sanctions for violating any rule adopted pursuant to Section 6(g). These two features strongly suggest that Congress did not intend to give the agency substantive rulemaking powers when it passed the Federal Trade Commission Act.
Rephrased, this argument suggests that the structure of the FTC Act indicates that the rulemaking referenced in Section 6(g) is best understood as an aid to FTC processes and investigations, not a source of substantive policymaking. Although the National Petroleum Refiners decision rejected such a reading, that ruling came at a time of significant judicial deference to federal agency activism, and may be dated.
The U.S. Supreme Court’s April 2021 decision in AMG Capital Management v. FTC further bolsters the “statutory structure” argument that Section 6(g) does not authorize substantive rulemaking. In AMG, the U.S. Supreme Court unanimously held that Section 13(b) of the FTC Act, which empowers the FTC to seek a “permanent injunction” to restrain an FTC Act violation, does not authorize the FTC to seek monetary relief from wrongdoers. The court’s opinion rejected the FTC’s argument that the term “permanent injunction” had historically been understood to include monetary relief. The court explained that the injunctive language was “buried” in a lengthy provision that focuses on injunctive, not monetary relief (note that the term “rules” is similarly “buried” within 6(g) language dealing with unrelated issues). The court also pointed to the structure of the FTC Act, with detailed and specific monetary-relief provisions found in Sections 5(l) and 19, as “confirm[ing] the conclusion” that Section 13(b) does not grant monetary relief.
By analogy, a court could point to Congress’ detailed enumeration of substantive rulemaking provisions in Section 18 (a mere two years after National Petroleum Refiners) as cutting against the claim that Section 6(g) can also be invoked to support substantive rulemaking. Finally, the Supreme Court in AMG flatly rejected several relatively recent appeals court decisions that upheld Section 13(b) monetary-relief authority. It follows that the FTC cannot confidently rely on judicial precedent (stemming from one arguably dated court decision, National Petroleum Refiners) to uphold its competition rulemaking authority.
In sum, the FTC will have to overcome serious legal challenges to its Section 6(g) competition rulemaking authority if it seeks to promulgate competition rules.
Even if the FTC’s 6(g) authority is upheld, it faces three other types of litigation-related risks.
First, applying the nondelegation doctrine, courts might hold that the broad term “unfair methods of competition” does not provide “an intelligible principle” to guide the FTC’s exercise of discretion in rulemaking. Such a judicial holding would mean the FTC could not issue competition rules.
Second, a reviewing court might strike down individual proposed rules as “arbitrary and capricious” if, say, the court found that the FTC rulemaking record did not sufficiently take into account potentially procompetitive manifestations of a condemned practice.
Third, even if a final competition rule passes initial legal muster, applying its terms to individual businesses charged with rule violations may prove difficult. Individual businesses may seek to structure their conduct to evade the particular strictures of a rule, and changes in commercial practices may render less common the specific acts targeted by a rule’s language.
Economic Policy Concerns Raised by Competition Rulemaking
In addition to legal risks, any cost-benefit appraisal of FTC competition rulemaking should consider the economic policy concerns raised by competition rulemaking. These fall into two broad categories.
First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.
Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules.
A combination of legal risks and economic policy harms strongly counsels against the FTC’s promulgation of substantive competition rules.
First, litigation issues would consume FTC resources and add to the costly delays inherent in developing competition rules in the first place. The compounding of separate serious litigation risks suggests a significant probability that costs would be incurred in support of rules that ultimately would fail to be applied.
Second, even assuming competition rules were to be upheld, their application would raise serious economic policy questions. The inherent inflexibility of rule-based norms is ill-suited to deal with dynamic evolving market conditions, compared with matter-specific antitrust litigation that flexibly applies the latest economic thinking to particular circumstances. New competition rules would also exacerbate costly policy inconsistencies stemming from the existence of dual federal antitrust enforcement agencies, the FTC and the Justice Department.
In conclusion, an evaluation of rule-related legal risks and economic policy concerns demonstrates that a reallocation of some FTC enforcement resources to the development of competition rules would not be cost-effective. Continued sole reliance on case-by-case antitrust litigation would generate greater economic welfare than a mixture of litigation and competition rules.
Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company.
But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.
Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.
The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention).
Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:
But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:
— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.
— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.
— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.
— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.
The report thus asserts that:
The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.
That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]
What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard.
Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark.
Decisions Under Uncertainty
In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.
Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong.
Consider the following passage from FTC economist Ken Heyer’s memo:
The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]
In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.
Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?
In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today.
Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here).
Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.
To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets.
In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.
Putting Erroneous Predictions in Context
So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.
But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.
This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.
In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.
Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:
The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.
FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.
This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.
But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:
When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.
The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:
Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”
It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
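To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch using only the figures quoted above. The exact split is illustrative (Yelp’s app share fluctuated over time), but it shows why a browser-only referral rate overstates Google’s share of total traffic:

```python
# Illustrative arithmetic only, using the figures quoted above:
# the FTC staff's 92% figure counted only browser referrals to
# Yelp's website, while ~40% of Yelp searches came from its
# mobile app, where Google played no role.

app_share = 0.40                 # share of Yelp searches via its mobile app
web_share = 1 - app_share        # share of searches via the website
google_web_referral_rate = 0.92  # FTC staff figure, web traffic only

# Google's share of Yelp's *total* search traffic (app + web):
google_share_of_total = web_share * google_web_referral_rate
print(f"{google_share_of_total:.1%}")  # 55.2%, well below the quoted 92%
```

And even this likely overstates Google’s importance to Yelp, since app users were more engaged than web users, as the pageview figures above suggest.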
Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation).
In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.
The FTC Lawyers’ Weak Case for Prosecuting Google
At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.
Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:
A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.
If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction. Algorithmic technology may be hard to do well, but it has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.
The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.
Moreover, as Ben Thompson argues in his Stratechery newsletter:
The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.
This difficulty was deftly highlighted by Heyer’s memo:
If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]
Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.
And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.
Google’s ‘revenue-sharing’ agreements
It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other manufacturers and carriers to pre-install its search bar on mobile devices:
The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance.
To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).
Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:
This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.
This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:
[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.
Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.
Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):
Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.
Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.
Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system.
In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.
Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:
When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.
The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:
Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites….
…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]
More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:
A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control….
…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….
…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk?
Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time.
Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.
Competitor Harm Is Not an Indicator of the Need for Intervention
Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:
Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.
But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents.
This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:
Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives….
…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest….
…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.
Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:
They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.
Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.
When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.
But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.
A pending case in the U.S. Court of Appeals for the 3rd Circuit has raised several interesting questions about the FTC enforcement approach and patent litigation in the pharmaceutical industry. The case, FTC v. AbbVie, involves allegations that AbbVie (and Besins) filed sham patent infringement cases against generic manufacturer Teva (and Perrigo) for the purpose of preventing or delaying entry into the testosterone gel market in which AbbVie’s AndroGel had a monopoly. The FTC further alleges that AbbVie and Teva settled the testosterone gel litigation in AbbVie’s favor while making a large payment to Teva in an unrelated case, behavior that, considered together, amounted to an illegal reverse payment settlement. The district court dismissed the reverse payment claims, but concluded that the patent infringement cases were sham litigation. It ordered disgorgement damages of $448 million against AbbVie and Besins, representing the profit they gained from maintaining the AndroGel monopoly.
The 3rd Circuit has been asked to review several elements of the district court’s decision, including whether the original patent infringement cases amounted to sham litigation, whether the payment to Teva in a separate case amounted to an illegal reverse payment, and whether the FTC has the authority to seek disgorgement damages. The decision will help to clarify outstanding issues relating to patent litigation and the FTC’s enforcement abilities, but it also has the potential to chill pro-competitive behavior in the pharmaceutical market encouraged under Hatch-Waxman.
The 3rd Circuit will review whether AbbVie’s patent infringement case was sham litigation by asking whether the district court applied the right standard and how plaintiffs must prove that lawsuits are baseless. The district court determined that the case was a sham because it was objectively baseless (AbbVie couldn’t reasonably expect to win) and subjectively baseless (AbbVie brought the cases solely to delay generic entry into the market). AbbVie argues that the district court erred by not requiring affirmative evidence of bad faith and by not requiring the FTC to present clear and convincing evidence that AbbVie and its attorneys believed the lawsuits were baseless.
While sham litigation should be penalized and deterred, especially when it produces anticompetitive effects, the 3rd Circuit’s decision, depending on how it comes out, also has the potential to deter brand drug makers from filing patent infringement cases in the first place. This threatens to disrupt the delicate balance that Hatch-Waxman sought to establish between protecting generic entry and encouraging brand competition.
The 3rd Circuit will also determine whether AbbVie’s payment to Teva in a separate case involving cholesterol medicine was an illegal reverse payment, otherwise known as a “pay-for-delay” settlement. The FTC asserts that the actions in the two cases—one involving testosterone gel and the other involving cholesterol medicine—should be considered together, but the district court disagreed and determined there was no illegal reverse payment. True pay-for-delay settlements are anticompetitive and harm consumers by delaying their access to cheaper generic alternatives. However, an overly liberal definition of what constitutes an illegal reverse payment will deter legitimate settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Moreover, the FTC’s argument that it is suspicious for two settlements to occur in separate cases around the same time overlooks the reality that the pharmaceutical industry has become increasingly concentrated and drug companies often have more than one pending litigation matter against another company involving entirely different products and circumstances.
Finally, the 3rd Circuit will determine whether the FTC has the authority to seek disgorgement damages for past acts like settled patent litigation. AbbVie has argued that the agency has no right to disgorgement because it isn’t enumerated in the FTC Act and because courts can’t order injunctive relief, including disgorgement, for completed past acts.
The FTC has sought disgorgement damages only sparingly, but the frequency with which the agency seeks disgorgement and the amount of the damages have increased in recent years. Proponents of the FTC’s approach argue that the threat of large disgorgement damages provides a strong deterrent to anticompetitive behavior. While true, FTC-ordered disgorgement (even if permissible) may go too far and end up chilling economic activity by exposing businesses to exorbitant liability without clear guidance on when disgorgement will be awarded. The 3rd Circuit will determine whether the FTC’s enforcement approach is authorized, a decision that has important implications for whether the agency’s enforcement can deter unfair practices without depressing economic activity.
I posted this originally on my own blog, but decided to cross-post here since Thom and I have been blogging on this topic.
“The U.S. stock market is having another solid year. You wouldn’t know it by looking at the shares of companies that manage money.”
That’s the lead from Charles Stein on Bloomberg’s Markets page today. Stein goes on to offer three possible explanations: 1) a weary bull market, 2) a move toward more active stock-picking by individual investors, and 3) increasing pressure on fees.
So what has any of that to do with the common ownership issue? A few things.
First, it shows that large institutional investors must not be very good at harvesting the benefits of the non-competitive behavior they encourage among the firms they invest in–if you believe they actually do that in the first place. In other words, if you believe common ownership is a problem because CEOs are enriching institutional investors by softening competition, you must admit they’re doing a pretty lousy job of capturing that value.
Second, and more importantly–as well as more relevant–the pressure on fees has led money managers to emphasize low-cost passive index funds. Indeed, among the firms doing well according to the article is BlackRock, whose index-tracking iShares exchange-traded fund business “won $20 billion.” In an aggressive move, Fidelity has introduced a total of four zero-fee index funds as a way to draw fee-conscious investors. These index-tracking funds are exactly the type of inter-industry diversified funds that negate any incentive for competition softening in any one industry.
Finally, this also illustrates the cost to the investing public of the limits on common ownership proposed by the likes of Einer Elhauge, Eric Posner, and Glen Weyl. Were these types of proposals in place, investment managers could not offer diversified index funds that include more than one firm’s stock from any industry with even a moderate level of market concentration. Given that competitive forces are pushing investment companies to increase their offerings of such low-cost index funds, any regulatory proposal that precludes those possibilities is sure to harm the investing public.
Just one more piece of real evidence that common ownership is not only not a problem, but that the proposed “fixes” are.
On November 1st and 2nd, Cofece, the Mexican Competition Agency, hosted an International Competition Network (ICN) workshop on competition advocacy, featuring presentations from government agency officials, think tanks, and international organizations. The workshop highlighted the excellent work that the ICN has done in supporting efforts to curb the most serious source of harm to the competitive process worldwide: government enactment of anticompetitive regulatory schemes and guidance, often at the behest of well-connected, cronyist rent-seeking businesses that seek to protect their privileges by imposing costs on rivals.
The mission of the Advocacy Working Group (AWG) is to undertake projects, to develop practical tools and guidance, and to facilitate experience-sharing among ICN member agencies, in order to improve the effectiveness of ICN members in advocating the dissemination of competition principles and to promote the development of a competition culture within society. Advocacy reinforces the value of competition by educating citizens, businesses and policy-makers. In addition to supporting the efforts of competition agencies in tackling private anti-competitive behaviour, advocacy is an important tool in addressing public restrictions to competition. Competition advocacy in this context refers to those activities conducted by the competition agency, that are related to the promotion of a competitive environment by means of non-enforcement mechanisms, mainly through its relationships with other governmental entities and by increasing public awareness in regard to the benefits of competition.
At the Cofece workshop, I moderated a panel on “stakeholder engagement in the advocacy process,” featuring presentations by representatives of Cofece, the Japan Fair Trade Commission, and the Organization for Economic Cooperation and Development. As I emphasized in my panel presentation:
Developing an appropriate competition advocacy strategy is key to successful interventions. Public officials should be mindful of the relative importance of particular advocacy targets, as well as matter-specific political constraints and competing stakeholder interests. In particular, a competition authority may greatly benefit by identifying and motivating stakeholders who are directly affected by the competitive restraints that are targeted by advocacy interventions. The active support of such stakeholders may be key to the success of an advocacy initiative. More generally, by reaching out to business and consumer stakeholders, a competition authority may build alliances that will strengthen its long-term ability to be effective in promoting a pro-competition agenda.
The U.S. Federal Trade Commission (FTC) has developed a well-thought-out approach to building strong relationships with stakeholders. The FTC holds well-publicized public workshops highlighting emerging policy issues, in which NGAs and civil society representatives with expertise are invited to participate. Its personnel (and, in particular, its head) speak before a variety of audiences to inform them of what the FTC is doing and of the opportunities for advocacy filings. It reaches out to civil society groups and the general public through the media, utilizing the Internet and other sources of public information dissemination. It is willing to hold informal non-public meetings with NGAs and civil society representatives to hear their candid views and concerns off the record. It carries out major studies (often following up on information gathered at workshops and from non-government sources) in addition to making advocacy filings. It interacts closely with substantive FTC enforcers and economists to obtain “leads” that may inform future advocacy projects and to suggest possible lines for substantive investigations, based on the input it has received. It communicates with other competition authorities on advocacy strategies. Other competition authorities may wish to note the FTC’s approach in organizing their own advocacy programs.
Competition authorities would also benefit from consulting the ICN Market Studies Good Practice Handbook, last released in updated form at the April 2016 ICN 15th Annual Conference. Its discussion of the role of stakeholders, though presented in the context of market studies, provides insights that are broadly applicable to the competition advocacy process. As the Handbook explains, stakeholders are any individuals, groups of individuals, or organizations that have an interest in a particular market or that can be affected by market conditions. The Handbook explains the crucial inputs that stakeholders can provide a competition authority and how engaging with stakeholders can influence the authority’s reputation. The Handbook emphasizes that a stakeholder engagement strategy can be used to determine whether particular stakeholders will be influential, supportive, or unsupportive of a particular endeavor; to consider the input expected from the various stakeholders and plan for soliciting and using this input; and to describe how and when the authority will seek to engage stakeholders. The Handbook provides a long list of categories of stakeholders and suggests ways of reaching out to stakeholders, including through public consultations, open seminars, workshops, and roundtables. Next, the Handbook presents tactics for engaging with stakeholders. The Handbook closes by summarizing key good practices, including publicly soliciting broad voluntary stakeholder engagement, developing a stakeholder engagement strategy early in a particular process, and reviewing and updating the engagement strategy as necessary throughout a particular competition authority undertaking.
In sum, properly conducted advocacy initiatives, along with investigations of hard core cartels, are among the highest-valued uses of limited competition agency resources. To the extent advocacy succeeds in unraveling government-imposed impediments to effective competition, it pays long-run dividends in terms of enhanced consumer welfare, greater economic efficiency, and more robust economic growth. Let us hope that governments around the world (including, of course, the United States Government) keep this in mind in making resource commitments and setting priorities for their competition agencies.
Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.
You can find my written testimony here. That testimony was drawn from a 100 page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.
The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.
The text of my oral remarks follows, or, if you prefer, you can watch them here:
Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.
I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.
I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.
Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.
The courts were supposed to keep the agency on course. But they haven’t. As Former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”
So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.
This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.
Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.
But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.
So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.
Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.
Able though FTC staffers are, this can’t be from sheer skill alone.
Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”
Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”
Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.
So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.
But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it’s implemented in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.
Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.
Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.
The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).
Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullins’ SURE Act.
Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.
More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.
As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.
But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”
Josh Wright is a tour de force. He has broken the mold for a Washington regulator — and created a new one. As a scholar, he carefully crafts his analyses of public policy. As a strategic thinker, he tackles the issues that redound to the greatest social benefit. And as a champion of competitive markets, he forcefully advances rules to encourage innovation and consumer welfare. Nearly as important as his diligence within the regulatory process, he is transparent in his objectives and takes every opportunity to enunciate his principles for action. The public knows what he is doing and why it is important.
As a sample of Commissioner Wright’s dedication to improving regulatory law, I am delighted to suggest the talk he gave April 2, 2015 at Clemson University, hosted by the Information Economy Project. His title: Regulation in High-Tech Markets: Public Choice, Regulatory Capture, and the FTC. He was particularly concerned with describing the harm produced by state and local barriers that block competitive forces from emerging, disruptive innovations such as Uber and AirBnB, and he offered remedies available via competition policy. The talk is posted here.
In its February 25 North Carolina Dental decision, the U.S. Supreme Court, per Justice Anthony Kennedy, held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state. In so ruling, the Court struck a significant blow against protectionist rent-seeking and for economic liberty. (As I stated in a recent Heritage Foundation legal memorandum, “[a] Supreme Court decision accepting this [active supervision] principle might help to curb special-interest favoritism conferred through state law. At the very least, it could complicate the efforts of special interests to protect themselves from competition through regulation.”)
A North Carolina law subjects the licensing of dentistry to a North Carolina State Board of Dental Examiners (Board), six of whose eight members must be licensed dentists. After dentists complained to the Board that non-dentists were charging lower prices than dentists for teeth whitening, the Board sent cease-and-desist letters to non-dentist teeth whitening providers, warning that the unlicensed practice of dentistry is a crime. This led non-dentists to cease teeth whitening services in North Carolina. The Federal Trade Commission (FTC) held that the Board’s actions violated Section 5 of the FTC Act, which prohibits unfair methods of competition; the Fourth Circuit agreed; and the Court affirmed the Fourth Circuit’s decision.
In its decision, the Court rejected the claim that state action immunity, which confers immunity on the anticompetitive conduct of states acting in their sovereign capacity, applied to the Board’s actions. The Court stressed that where a state delegates control over a market to a non-sovereign actor, immunity applies only if the state accepts political accountability by actively supervising that actor’s decisions. The Court applied its Midcal test, which requires (1) clear state articulation and (2) active state supervision of decisions by non-sovereign actors for immunity to attach. The Court held that entities designated as state agencies are not exempt from active supervision when they are controlled by market participants, because allowing an exemption in such circumstances would pose the risk of self-dealing that the second prong of Midcal was created to address.
Here, the Board did not contend that the state exercised any (let alone active) supervision over its anticompetitive conduct. The Court closed by summarizing “a few constant requirements of active supervision,” namely, (1) the supervisor must review the substance of the anticompetitive decision, (2) the supervisor must have the power to veto or modify particular decisions for consistency with state policy, (3) “the mere potential for state supervision is not an adequate substitute for a decision by the State,” and (4) “the state supervisor may not itself be an active market participant.” The Court cautioned, however, that “the adequacy of supervision otherwise will depend on all the circumstances of a case.”
Justice Samuel Alito, joined by Justices Antonin Scalia and Clarence Thomas, dissented, arguing that the Court ignored precedent that state agencies created by the state legislature (“[t]he Board is not a private or ‘nonsovereign’ entity”) are shielded by the state action doctrine. “By straying from this simple path” and assessing instead whether individual agencies are subject to regulatory capture, the Court spawned confusion, according to the dissenters. Midcal was inapposite, because it involved a private trade association. The dissenters feared that the majority’s decision may require states “to change the composition of medical, dental, and other boards, but it is not clear what sort of changes are needed to satisfy the test that the Court now adopts.” The dissenters concluded “that determining when regulatory capture has occurred is no simple task. That answer provides a reason for relieving courts from the obligation to make such determinations at all. It does not explain why it is appropriate for the Court to adopt the rather crude test for capture that constitutes the holding of today’s decision.”
The Court’s holding in North Carolina Dental helpfully limits the scope of the Court’s infamous Parker v. Brown decision (which shielded from federal antitrust attack a California raisin producers’ cartel overseen by a state body), without excessively interfering in sovereign state prerogatives. State legislatures may still choose to create self-interested professional regulatory bodies – their sovereignty is not compromised. Now, however, they will have to (1) make it clearer up front that they intend to allow those bodies to displace competition, and (2) subject those bodies to disinterested third party review. These changes should make it far easier for competition advocates (including competition agencies) to spot and publicize welfare-inimical regulatory schemes, and weaken the incentive and ability of rent-seekers to undermine competition through state regulatory processes. All told, the burden these new judicially-imposed constraints will impose on the states appears relatively modest, and should be far outweighed by the substantial welfare benefits they are likely to generate.
Singham points out that the transition away from socialist command-and-control economies, accompanied by international trade liberalization, too often failed to create competitive markets within developing countries. Anticompetitive market distortions imposed by government and generated by politically-connected domestic rent-seekers continue to thrive – measures such as entry barriers that favor entrenched incumbent firms, and other regulatory provisions that artificially favor specific powerful domestic business interests (“crony capitalists”). Such widespread distortions reduce competition and discourage inward investment, thereby retarding innovation and economic growth and reducing consumer welfare. Political influence exercised by the elite beneficiaries of the distortions may prevent legal reforms that would remove these regulatory obstacles to economic development. What, then, can be done to disturb this welfare-inimical state of affairs, when sweeping, nationwide legal reforms are politically impossible?
One incremental approach, advanced by Professor Paul Romer and others, is the establishment of “charter cities” – geographic zones within a country that operate under government-approved free market-oriented charters, rather than under restrictive national laws. Building on this concept, Babson Global Institute has established a “Competitiveness and Enterprise Development Project” (CEDP) designed to promote the notion of “Enterprise Cities” (ECs) – geographically demarcated zones of regulatory autonomy within countries, governed by a Board. ECs would be created through negotiations between a national government and a third party group, such as CEDP. The negotiations would establish “Regulatory Framework Agreements” embodying legal rules (implemented through statutory or constitutional amendments by the host country) that would apply solely within the EC. Although EC legal regimes would differ with respect to minor details (reflecting local differences that would affect negotiations), they would be consistent in stressing freedom of contract, flexible labor markets, and robust property rights, and in prohibiting special regulatory/legal favoritism (so as to avoid anticompetitive market distortions). Protecting foreign investment through third party arbitration and related guarantees would be key to garnering foreign investor interest in ECs. The goal would be to foster a business climate favorable to investment, job creation, innovation, and economic growth. The EC Board would ensure that agreed-to rules would be honored and enforced by EC-specific legal institutions, such as courts.
Because market-oriented EC rules will not affect market-distortive laws elsewhere within the host country, well-organized rent-seeking elites may not have as strong an incentive to oppose creating ECs. Indeed, to the extent that a share of EC revenues is transferred to the host country government (depending upon the nature of the EC’s charter), elites might directly benefit, using their political connections to share in the profits. In short, although setting up viable ECs is no easy matter, their establishment need not be politically unfeasible. Indeed, the continued success of Hong Kong as a free market island within China (Hong Kong places first in the Heritage Foundation’s Index of Economic Freedom), operating under the Basic Law of Hong Kong, suggests the potential for ECs to thrive, despite having very different rules than the parent state’s legal regime. (Moreover, the success of Hong Kong may have proven contagious, as China is now promoting a new Shanghai Free Trade Zone that would compete with Hong Kong and Singapore.)
The CEDP is currently negotiating the establishment of ECs with a number of governments. As Singham explains, successful launch of an EC requires: (1) a committed developer; (2) land that can be used for a project; (3) a good external infrastructure connecting the EC with the rest of the country; and (4) “a government that recognizes the benefits to its reform agenda and to its own economic plan of such a designation of regulatory autonomy and is willing to confront its own challenges by thinking outside the box.” While the fourth prerequisite may be the most difficult to achieve, internal pressures for faster economic growth and increased investment may lead jurisdictions with burdensome regulatory regimes to consider ECs.
Finally, the beneficial economic effects of ECs could give additional ammunition to national competition authorities as they advocate for less restrictive regulatory frameworks within their jurisdictions. ECs could thereby render more effective the efforts of the many new national competition authorities, whose success in enhancing competitive conditions within their jurisdictions has been limited at best.
ECs are no panacea – they will not directly affect restrictive national regulatory laws that benefit privileged special interests but harm the overall economy. However, to the extent they prove financial successes, over time they could play a crucial indirect role in enhancing competition, reducing inefficiency, and spurring economic growth within their host countries.
“The growth of monopoly power among health care providers bears much responsibility for driving up the cost of health care over recent years. By mandating that general hospitals provide uncompensated care, state and federal legislators have given them cause to insist on regulations and discriminatory subsidies to protect them from cheaper competitors. Instead of freeing these markets to allow the provision of care by the most efficient organizations, the Affordable Care Act endorses these anti-competitive arrangements. It extends the premium paid for treatment in general hospitals, employs the purchasing power of the Medicare program to encourage the consolidation of medical practices, and reforms insurance law to eliminate many of the margins for competition between carriers. Institutions sheltered from competition tend to accumulate unnecessary costs over time. In the absence of pro-competitive reforms, higher spending under Obamacare is likely to only further inflate prices faced by those seeking affordable care.”
In short, as the study demonstrates, “[t]he shackling of competition is an essential feature of Obamacare, not a bug.” Accordingly, Obamacare’s enactors (Congress) and implementers (especially HHS) could benefit from a dose of competition advocacy aimed at reforming this welfare-destructive regulatory system. The study highlights particular worthwhile reforms:
“■Refuse to prop up monopoly power. Government regulation and spending should not shield dominant providers from competitors. Monopolies are irresponsive to the needs of patients and payers. They are an unreliable method of subsidizing care that tends to both lower quality and inflate costs.
■Repeal certificate-of-need laws. Legislative constraints on the construction of additional medical capacity should be repealed. Innovative providers should be allowed to expand or establish new facilities that challenge incumbents with lower prices and better quality.
■Subsidize patients, not providers. Public policies should be provider-neutral. Payments should reimburse providers for providing care, period. In particular, publicly funded programs should not operate payment systems designed to keep certain providers in business regardless of the quality, volume, or cost of the treatments they provide. If some individuals are unable to pay for their care, policymakers should subsidize such needy individuals directly.
■Allow patients to shop around. Wherever possible governments and employers should put patients in control of the funds expended on their care, and permit them to keep any savings they obtain from seeking out more efficient providers.
■Repeal Obamacare and its mandates. Forcing individuals to purchase standardized health insurance establishes a captive market, making it easier for providers, insurers, and regulators to degrade services and inflate costs with impunity. Repealing Obamacare and its purchase mandates is essential to creating a market in which suppliers have the flexibility to respond to consumer demands for better value for their money.”