
Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Cristina Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)

 

The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation, or even the dissolution, of some of the world’s most innovative companies, essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firms’ ability to grow as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could both become more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely in order to further those goals? All of a sudden, the effort and ability to get exemptions will be massively increased, as the persuasiveness of the claimed justifications for those exemptions, which would now encompass non-economic goals, is greatly enhanced. We might even find, again, that we end up with even more concentration, because the exceptions could come to subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, especially on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. If there are gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all well and good, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about these terms of service and of these data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts preceded Cambridge Analytica, and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections, or that when Cambridge Analytica started working for the Trump campaign it was tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.
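The arithmetic of that fan-out is worth making explicit. Here is a minimal back-of-envelope sketch (the two totals are those reported in the story; the per-user friend count is derived, not reported, and ignores overlap between friend networks):

```python
# Back-of-envelope fan-out from consenting quiz takers to exposed profiles.
quiz_takers = 270_000          # users who agreed to the quiz app's terms
exposed_profiles = 50_000_000  # records Cambridge Analytica obtained

# Ignoring overlap among friend networks (which makes this a lower bound
# on average network size), each consenting user exposed roughly:
friends_per_taker = exposed_profiles / quiz_takers
print(f"~{friends_per_taker:.0f} friends' profiles per quiz taker")  # ~185
```

In other words, a perfectly ordinary friend count per quiz taker is all it takes for a throwaway app to reach tens of millions of non-consenting users.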

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchant, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms delineate and allow users to allocate their privacy interests is that competition among platforms would promote desirable outcomes, and that the relatively limited, monopolistic competition we see among firms like Facebook is one of the reasons consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook either to make users’ data exportable in a standardized form, so that users could easily migrate it to other platforms, or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.
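To make concrete what such a standardized export interface entails, here is a purely hypothetical sketch; every name and field below is an illustrative assumption, not a real or proposed Facebook API:

```python
# Hypothetical sketch of a "social graph portability" endpoint of the kind
# these proposals contemplate. All names and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class GraphExport:
    user_id: str
    posts: list[str] = field(default_factory=list)
    # Friends' identities necessarily ride along with the exporting user's
    # own data -- that is, data about people who never consented.
    friend_ids: list[str] = field(default_factory=list)

def export_user_graph(user_id: str, auth_token: str) -> GraphExport:
    # A real implementation would validate auth_token and query the
    # platform's graph store; a dummy record keeps the sketch runnable.
    return GraphExport(user_id=user_id)
```

Even in stub form the tension is visible: the same standardized interface that lets one user migrate to a rival platform lets any holder of valid credentials call it in a loop, bulk-exporting records that include non-consenting friends.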

If there is one lesson from the past decade more trenchant than that delineating privacy rights is difficult, it is that data security is even harder.

These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only works to allow trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.

The U.S. Federal Trade Commission’s (FTC) well-recognized expertise in assessing unfair or deceptive acts or practices can play a vital role in policing abusive broadband practices.  Unfortunately, however, because Section 5(a)(2) of the FTC Act exempts common carriers from the FTC’s jurisdiction, serious questions have been raised about the FTC’s authority to deal with unfair or deceptive practices in cyberspace that are carried out by common carriers, but involve non-common-carrier activity (in contrast, common carrier services have highly regulated terms and must be made available to all potential customers).

Commendably, the Ninth Circuit held on February 26, in FTC v. AT&T Mobility, that harmful broadband data throttling practices by a common carrier were subject to the FTC’s unfair acts or practices jurisdiction, because the common carrier exception is “activity-based,” and the practices in question did not involve common carrier services.  Key excerpts from the summary of the Ninth Circuit’s opinion follow:

The en banc court affirmed the district court’s denial of AT&T Mobility’s motion to dismiss an action brought by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act, alleging that AT&T’s data-throttling plan was unfair and deceptive. AT&T Mobility’s data-throttling is a practice by which the company reduced customers’ broadband data speed without regard to actual network congestion. Section 5 of the FTC Act gives the agency enforcement authority over “unfair or deceptive acts or practices,” but exempts “common carriers subject to the Acts to regulate commerce.” 15 U.S.C. § 45(a)(1), (2). AT&T moved to dismiss the action, arguing that it was exempt from FTC regulation under Section 5. . . .

The en banc court held that the FTC Act’s common carrier exemption was activity-based, and therefore the phrase “common carriers subject to the Acts to regulate commerce” provided immunity from FTC regulation only to the extent that a common carrier was engaging in common carrier services. In reaching this conclusion, the en banc court looked to the FTC Act’s text, the meaning of “common carrier” according to the courts around the time the statute was passed in 1914, decades of judicial interpretation, the expertise of the FTC and Federal Communications Commission (“FCC”), and legislative history.

Addressing the FCC’s order, issued on March 12, 2015, reclassifying mobile data service from a non-common carriage service to a common carriage service, the en banc court held that the prospective reclassification order did not rob the FTC of its jurisdiction or authority over conduct occurring before the order. Accordingly, the en banc court affirmed the district court’s denial of AT&T’s motion to dismiss.

A key introductory paragraph in the Ninth Circuit’s opinion underscores the importance of the court’s holding for sound regulatory policy:

This statutory interpretation [that the common carrier exception is activity-based] also accords with common sense. The FTC is the leading federal consumer protection agency and, for many decades, has been the chief federal agency on privacy policy and enforcement. Permitting the FTC to oversee unfair and deceptive non-common-carriage practices of telecommunications companies has practical ramifications. New technologies have spawned new regulatory challenges. A phone company is no longer just a phone company. The transformation of information services and the ubiquity of digital technology mean that telecommunications operators have expanded into website operation, video distribution, news and entertainment production, interactive entertainment services and devices, home security and more. Reaffirming FTC jurisdiction over activities that fall outside of common-carrier services avoids regulatory gaps and provides consistency and predictability in regulatory enforcement.

But what can the FTC do about unfair or deceptive practices affecting broadband services, offered by common carriers, subsequent to the FCC’s 2015 reclassification of mobile data service as a common carriage service?  The FTC will be able to act, assuming that the Federal Communications Commission’s December 2017 rulemaking, reclassifying mobile broadband Internet access service as not involving a common carrier service, passes legal muster (as it should).  In order to avoid any legal uncertainty, however, Congress could take the simple step of eliminating the FTC Act’s common carrier exception – an outdated relic that threatens to generate disparate enforcement outcomes toward the same abusive broadband practice, based merely upon whether the parent company is deemed a “common carrier.”

On January 23rd, the Heritage Foundation convened its Fourth Annual Antitrust Conference, “Trump Antitrust Policy after One Year.”  The entire Conference can be viewed online (here).  The Conference featured a keynote speech, followed by three separate panels addressing developments at the Federal Trade Commission (FTC), at the Justice Department’s Antitrust Division (DOJ), and in the international arena, all of which can have a serious effect on the country’s economic growth and the expansion of our business and industrial sector.

1. Professor Bill Kovacic’s Keynote Speech

The conference started with a bang, featuring a stellar keynote speech (complemented by excellent PowerPoint slides) by GW Professor and former FTC Chairman Bill Kovacic, who also serves as a Member of the Board of the UK Government’s Competition and Markets Authority.  Kovacic began by noting the claim by senior foreign officials that “nothing is happening” in U.S. antitrust enforcement.  Although this perception may be inaccurate, Kovacic argued that it colors foreign officials’ dealings with the U.S., and continues a preexisting trend of diminishing U.S. influence on foreign governments’ antitrust enforcement systems.  (It is widely believed that the European antitrust model is dominant internationally.)

In order to enhance the perceived effectiveness (and prestige) of American antitrust on the global plane, American antitrust enforcers should, according to Kovacic, adopt a positive agenda citing specific priorities for action (as opposed to a “negative approach” focused on what actions will not be taken) – an orientation which former FTC Chairman Muris employed successfully in the last Bush Administration.  The positive engagement themes should be communicated powerfully to the public here and abroad through active public engagement by agency officials.  Agency strengths, such as FTC market studies and economic expertise, should be highlighted.

In addition, the FTC and Justice Department should act more like an “antitrust policy joint venture” at home and abroad, extending cooperation beyond guidelines to economic research, studies, and other aspects of their missions.  This would showcase the outstanding capabilities of the U.S. public antitrust enterprise.

2. FTC Panel

A panel on FTC developments (moderated by Dr. Jeff Eisenach, Managing Director of NERA Economic Consulting and former Chief of Staff to FTC Chairman James Miller) followed Kovacic’s presentation.

Acting Bureau of Competition Chief Bruce Hoffman began by stressing that FTC antitrust enforcers are busier than ever, with a number of important cases in litigation and resources stretched to the limit.  Thus, FTC enforcement is neither weak nor timid – to the contrary, it is quite vigorous.  Hoffman was surprised by recent political attacks on the 40-year bipartisan consensus regarding the economics-centered consumer welfare standard that has set the direction of U.S. antitrust enforcement.  According to Hoffman, noted economist Carl Shapiro has debunked the notion that supposed increases in industry concentration, even at the national level, are meaningful.  In short, there is no empirical basis to dethrone the consumer welfare standard and replace it with something else.

Other former senior FTC officials engaged in a discussion following Hoffman’s remarks.  Orrick Partner Alex Okuliar, a former Attorney-Advisor to FTC Acting Chairman Maureen Ohlhausen, noted Ohlhausen’s emphasis on “regulatory humility” (recognizing the inherent limitations of regulation and acting in accordance with those limits) and on the work of the FTC’s Economic Liberty Task Force, which centers on removing unnecessary regulatory restraints on competition (such as excessive occupational licensing requirements).

Wilson Sonsini Partner Susan Creighton, a former Director of the FTC’s Bureau of Competition, discussed the importance of economics-based “technocratic antitrust” (applied by sophisticated judges) for a sound and manageable antitrust system – something still not well understood by many foreign antitrust agencies.  Creighton had three reform suggestions for the Trump Administration:

(1) the DOJ and the FTC should stress the central role of economics in the institutional arrangements of antitrust (DOJ’s “economics structure” is a bit different from the FTC’s);

(2) both agencies should send relatively more economists to represent us at antitrust meetings abroad, thereby enabling the agencies to place a greater stress on the importance of economic rigor in antitrust enforcement; and

(3) the FTC and the DOJ should establish a task force to jointly carry out economics research and hone a consistent economic policy message.

Sidley Austin Partner Bill Blumenthal, a former FTC General Counsel, noted the problems of defining Trump FTC policy in the absence of new Trump FTC Commissioners.  He observed that signs of a populist uprising against current antitrust norms extend beyond antitrust, and that the agencies may have to look to new unilateral conduct cases to show that they are “doing something.”  He added that the populist rejection of current economics-based antitrust analysis is intellectually incoherent: there is a tension between protecting consumers and protecting labor; for example, anti-consumer cartels may be beneficial to labor union interests.

In a follow-up roundtable discussion, Hoffman noted that theoretical “existence theorems” of anticompetitive harm that lack empirical support in particular cases are not administrable.  Creighton opined that, as an independent agency, the FTC may be a bit more susceptible to congressional pressure than DOJ.  Blumenthal stated that congressional interest may be able to trigger particular investigations, but it does not dictate outcomes.

3. DOJ Panel

Following lunch, a panel of antitrust experts (moderated by Morgan Lewis Partner Hill Wellford, a former Chief of Staff to the Assistant Attorney General) addressed DOJ developments.

The current Principal Deputy Assistant Attorney General for Antitrust, Andrew Finch, began by stating that the three major Antitrust Division initiatives involve (1) intellectual property (IP), (2) remedies, and (3) criminal enforcement.  Assistant Attorney General Makan Delrahim’s November 2017 speech explained that antitrust should not undermine legitimate incentives of patent holders to maximize returns to their IP through licensing.  DOJ is looking into buyer and seller cartel behavior (including in standard setting) that could harm IP rights.  DOJ will work to streamline and improve consent decrees and other remedies, and make it easier to go after decree violations.  In criminal enforcement, DOJ will continue to go after “no employee poaching” employer agreements as criminal violations.

Former Assistant Attorney General Tom Barnett, a Covington & Burling Partner, noted that more national agencies are willing to intervene in international matters, leading to inconsistencies in results.  The International Competition Network is important, but major differences in rhetoric have created a sense that there is very little agreement among enforcers, although the reality may be otherwise.  Muted U.S. agency voices on the international plane and limited resources have proven unfortunate – the FTC needs to engage better in international discussions and needs new Commissioners.

Former Counsel to the Assistant Attorney General Eric Grannon, a White & Case Partner, made three specific comments:

(1) DOJ should look outside the career criminal enforcement bureaucracy and consider selecting someone with significant private sector experience as Deputy Assistant Attorney General for Criminal Enforcement;

(2) DOJ needs to go beyond merely focusing on metrics that show increased aggregate fines and jail time year-by-year (something is wrong if cartel activities and penalties keep rising despite the growing emphasis on inculcating an “anti-cartel culture” within firms); and

(3) DOJ needs to reassess its “amnesty plus” program, in which an amnesty applicant benefits by highlighting the existence of a second cartel in which it participates (non-culpable firms allegedly in the second cartel may be fingered, leading to unjustified potential treble damages liability for them in private lawsuits).

Grannon urged that DOJ hold a public workshop on the amnesty plus program in the coming year.  Grannon also argued against the classification of antitrust offenses as crimes of “moral turpitude” (moral turpitude offenses allow perpetrators to be excluded from the U.S. for 20 years).  Finally, as a good government measure, Grannon recommended that the Antitrust Division should post all briefs on its website, including those of opposing parties and third parties.

Baker Botts Partner Stephen Weissman, a former Deputy Director of the FTC’s Bureau of Competition, found a great deal of continuity in DOJ civil enforcement.  Nevertheless, he expressed surprise at Assistant Attorney General Delrahim’s recent remarks suggesting that DOJ might consider asking the Supreme Court to overturn the Illinois Brick ban on indirect purchaser suits under federal antitrust law.  Weissman noted the increased DOJ focus on the rights of IP holders, not implementers, and the beneficial emphasis on the importance of DOJ’s amicus program.

The following discussion among the panelists elicited agreement (Weissman and Barnett) that the business community needs more clear-cut guidance on vertical mergers (and perhaps on other mergers as well) and affirmative statements on DOJ’s plans.  DOJ was characterized as too heavy-handed in setting timing agreements in mergers.  The panelists were in accord that enforcers should continue to emphasize the American consumer welfare model of antitrust.  The panelists believed the U.S. gets it right in stressing jail time for cartelists and in detrebling for amnesty applicants.  DOJ should, however, apply a proper dose of skepticism in assessing the factual content of proffers made by amnesty applicants.  Former enforcers saw no need to automatically grant markers to those applicants.  Andrew Finch returned to the topic of Illinois Brick, explaining that the Antitrust Modernization Commission had suggested reexamining that case’s bar on federal indirect purchaser suits.  In response to an audience question as to which agency should do internet oversight, Finch stressed that relevant agency experience and resources are assessed on a matter-specific basis.

4. International Panel

The last panel of the afternoon, which focused on international developments, was moderated by Cadwalader Counsel (and former Attorney-Advisor to FTC Chairman Tim Muris) Bilal Sayyed.

Deputy Assistant Attorney General for International Matters, Roger Alford, began with an overview of trade and antitrust considerations.  Alford explained that DOJ adds a consumer welfare and economics perspective to Trump Administration trade policy discussions.  On the international plane, DOJ supports principles of non-discrimination, strong antitrust enforcement, and opposition to national champions, plus the addition of a new competition chapter in “NAFTA 2.0” negotiations.  The revised 2017 DOJ International Antitrust Guidelines dealt with economic efficiency and the consideration of comity.  DOJ and the Executive Branch will take into account the degree of conflict with other jurisdictions’ laws (fleshing out comity analysis) and will push case coordination as well as policy coordination.  DOJ is considering new ideas for dealing with due process internationally, in addition to working within the International Competition Network to develop best practices.  Better international coordination is also needed on the cartel leniency program.

Next, Koren Wong-Ervin, Qualcomm Director of IP and Competition Policy (and former Director of the Scalia Law School’s Global Antitrust Institute), stated that the Korea Fair Trade Commission had ignored comity and guidance from U.S. expert officials in imposing global licensing remedies and penalties on Qualcomm.  The U.S. Government is moving toward a sounder approach on the evaluation of standard essential patents, as is Europe, with a move away from required component-specific patent licensing royalty determinations.  More generally, a return to an economic effects-based approach to IP licensing is important, as are balanced IP licensing rules with courts as gatekeepers.  Comprehensive revisions to China’s Anti-Monopoly Law, now under consideration, will have enormous public policy importance.  Chinese law still contains overly broad essential-facilities and deception provisions, and IP price regulation proposals are very troublesome.  New FTC Commissioners are needed, accompanied by robust budget support for international work.

Latham & Watkins’ Washington, D.C. Managing Partner Michael Egge focused on the substantial divergence in merger enforcement practice around the world.  The cost of compliance imposed by European Commission pre-notification filing requirements is overly high; this pre-notification practice is not written down and has escaped needed public attention.  Chinese merger filing practice (“China is struggling to cope”) features a costly 1-3 month pre-filing acceptance period, and merger filing requirements in India are particularly onerous.

Jim Rill, former Assistant Attorney General for Antitrust and former ABA Antitrust Section Chair, stressed that due process improvements can help promote substantive antitrust convergence around the globe.  Rill stated that U.S. Government officials, with the assistance of private sector stakeholders, need a mechanism (a “report card”) to measure foreign agencies’ implementation of OECD antitrust recommendations.  U.S. Government officials should consider participating in foreign proceedings where the denial of due process is blatant, and where foreign governments indirectly dictate a particular harmful policy result.  Multilateral review of international agreements is valuable as well.  The comity principles found in the 1991 EU-U.S. Antitrust Cooperation Agreement are quite useful.  Trade remedies in antitrust agreements are not a competition solution, and are not helpful.  More and better training programs for foreign officials are called for; International Chamber of Commerce, American Bar Association, and U.S. Chamber of Commerce principles are generally sound.  Some consideration should be given to old ICPAC recommendations, such as (perhaps) the development of a common merger notification form for use around the world.

Douglas Ginsburg, Senior Judge (and former Chief Judge) of the U.S. Court of Appeals for the D.C. Circuit, and former Assistant Attorney General for Antitrust, spoke last, focusing on the European Court of Justice’s Intel decision, which laid bare the deficiencies in the European Commission’s finding of a competition law violation in that matter.

In a brief closing roundtable discussion, Roger Alford suggested possible greater involvement by business community stakeholders in training foreign antitrust officials.

5. Conclusion

Heritage Foundation host Alden Abbott closed the proceedings with a brief capsule summary of panel highlights.  As in prior years, the Fourth Annual Heritage Antitrust Conference generated spirited discussion among the brightest lights in the American antitrust firmament on recent developments and likely trends in antitrust enforcement and policy development, here and abroad.

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and sometimes even violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, perhaps nowhere has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets for national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, the number was further reduced to four markets, all wholesale markets, that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, including Mexico, have followed this model as well.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
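For reference, the WACC being harmonized is the standard finance quantity; the textbook formula below is general, and the ACM’s specific parameter choices are not shown:

```latex
\mathrm{WACC} = \frac{E}{E+D}\, r_E + \frac{D}{E+D}\, r_D \,(1 - t)
```

where E and D are the market values of equity and debt, r_E and r_D are the respective costs of equity and debt, and t is the corporate tax rate. Fixing a single, agency-wide method for estimating these inputs keeps capital-cost determinations consistent across the sectors the merged regulator oversees.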

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

Like in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties among four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations,” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the choice between competition and telecom regulation. Nothing about these cases suggests that sector-specific telecom regulation is inherently indispensable, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify the decision. The EU placed net neutrality under the universal service and users’ rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that, even where a telecom regulator perceives a novel problem, it is competition law, grounded in economic principles, that brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. Wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

As the Federal Communications Commission (FCC) prepares to revoke its economically harmful “net neutrality” order and replace it with a free market-oriented “Restoring Internet Freedom Order,” the FCC and the Federal Trade Commission (FTC) commendably have announced a joint policy for cooperation on online consumer protection.  According to a December 11 FTC press release:

The Federal Trade Commission and Federal Communications Commission (FCC) announced their intent to enter into a Memorandum of Understanding (MOU) under which the two agencies would coordinate online consumer protection efforts following the adoption of the Restoring Internet Freedom Order.

“The Memorandum of Understanding will be a critical benefit for online consumers because it outlines the robust process by which the FCC and FTC will safeguard the public interest,” said FCC Chairman Ajit Pai. “Instead of saddling the Internet with heavy-handed regulations, we will work together to take targeted action against bad actors. This approach protected a free and open Internet for many years prior to the FCC’s 2015 Title II Order and it will once again following the adoption of the Restoring Internet Freedom Order.”

“The FTC is committed to ensuring that Internet service providers live up to the promises they make to consumers,” said Acting FTC Chairman Maureen K. Ohlhausen. “The MOU we are developing with the FCC, in addition to the decades of FTC law enforcement experience in this area, will help us carry out this important work.”

The draft MOU, which is being released today, outlines a number of ways in which the FCC and FTC will work together to protect consumers, including:

  • The FCC will review informal complaints concerning the compliance of Internet service providers (ISPs) with the disclosure obligations set forth in the new transparency rule. Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization, and congestion management. Should an ISP fail to make the required disclosures—either in whole or in part—the FCC will take enforcement action.
  • The FTC will investigate and take enforcement action as appropriate against ISPs concerning the accuracy of those disclosures, as well as other deceptive or unfair acts or practices involving their broadband services.
  • The FCC and the FTC will broadly share legal and technical expertise, including the secure sharing of informal complaints regarding the subject matter of the Restoring Internet Freedom Order. The two agencies also will collaborate on consumer and industry outreach and education.

The FCC’s proposed Restoring Internet Freedom Order, which the agency is expected to vote on at its December 14 meeting, would reverse a 2015 agency decision to reclassify broadband Internet access service as a Title II common carrier service. This previous decision stripped the FTC of its authority to protect consumers and promote competition with respect to Internet service providers because the FTC does not have jurisdiction over common carrier activities.

The FCC’s Restoring Internet Freedom Order would return jurisdiction to the FTC to police the conduct of ISPs, including with respect to their privacy practices. Once adopted, the order will also require broadband Internet access service providers to disclose their network management practices, performance, and commercial terms of service. As the nation’s top consumer protection agency, the FTC will be responsible for holding these providers to the promises they make to consumers.

Particularly noteworthy is the suggestion that the FCC and FTC will work to curb regulatory duplication and competitive empire building – a boon to Internet-related businesses that would be harmed by regulatory excess and uncertainty.  Stay tuned for future developments.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.  

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first-time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year-long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvass in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, Google (asserting fair use, and without a license from Yelp) displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt out of having even snippets displayed in local search results by committing to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt-out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt-out suffices. But for almost everyone else the opt-out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also covers “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide to the FTC several of the items Hawley demands in his CID; others must be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order, the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of consumer protection acts (CPAs) is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the specter of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

The FTC will hold an “Informational Injury Workshop” in December “to examine consumer injury in the context of privacy and data security.” Defining the scope of cognizable harm that may result from the unauthorized use or third-party hacking of consumer information is, to be sure, a crucial inquiry, particularly as ever-more information is stored digitally. But the Commission — rightly — is aiming at more than mere definition. As it notes, the ultimate objective of the workshop is to address questions like:

How do businesses evaluate the benefits, costs, and risks of collecting and using information in light of potential injuries? How do they make tradeoffs? How do they assess the risks of different kinds of data breach? What market and legal incentives do they face, and how do these incentives affect their decisions?

How do consumers perceive and evaluate the benefits, costs, and risks of sharing information in light of potential injuries? What obstacles do they face in conducting such an evaluation? How do they evaluate tradeoffs?

Understanding how businesses and consumers assess the risk and cost “when information about [consumers] is misused,” and how they conform their conduct to that risk, entails understanding not only the scope of the potential harm, but also the extent to which conduct affects the risk of harm. This, in turn, requires an understanding of the FTC’s approach to evaluating liability under Section 5 of the FTC Act.

The problem, as we discuss in comments submitted by the International Center for Law & Economics to the FTC for the workshop, is that the Commission’s current approach troublingly mixes the required separate analyses of risk and harm, with little elucidation of either.

The core of the problem arises from the Commission’s reliance on what it calls a “reasonableness” standard for its evaluation of data security. By its nature, a standard that assigns liability for only unreasonable conduct should incorporate concepts resembling those of a common law negligence analysis — e.g., establishing a standard of due care, determining causation, evaluating the costs and benefits of conduct that would mitigate the risk of harm, etc. Unfortunately, the Commission’s approach to reasonableness diverges from the rigor of a negligence analysis. In fact, as it has developed, it operates more like a strict liability regime in which largely inscrutable prosecutorial discretion determines which conduct, which firms, and which outcomes will give rise to liability.
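
By way of illustration, the canonical formalization of negligence is the Learned Hand formula (offered here only as a textbook sketch of the structure a genuine reasonableness inquiry implies, not as anything the Commission has adopted): a defendant is negligent only where the burden of the untaken precaution is less than the expected harm that precaution would have averted:

$$
B < P \cdot L
$$

where B is the cost of the precaution, P is the probability of the harm, and L is the magnitude of the loss. A true reasonableness standard would require grappling with each of these terms; as discussed below, the Commission’s current approach effectively treats any conceivable loss as actionable, with little serious attention to either the probability of harm or the cost of avoiding it.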

Most troublingly, coupled with the Commission’s untenably lax (read: virtually nonexistent) evidentiary standards, the extremely liberal notion of causation embodied in its “reasonableness” approach means that the mere storage of personal information, even absent any data breach, could amount to an unfair practice under the Act — clearly not a “reasonable” result.

The notion that a breach itself can constitute injury will, we hope, be taken up during the workshop. But even if injury is limited to a particular type of breach — say, one in which sensitive, personal information is exposed to a wide swath of people — unless the Commission’s definition of what it means for conduct to be “likely to cause” harm is fixed, it will virtually always be the case that storage of personal information could conceivably lead to the kind of breach that constitutes injury. In other words, better defining the scope of injury does little to cabin the scope of the agency’s discretion when conduct creating any risk of that injury is actionable.

Our comments elaborate on these issues and provide our thoughts on how the subjective nature of informational injuries can fit into Section 5, with a particular focus on the problem of assessing informational injury given evolving social context, and the need to appropriately assess benefits in any cost-benefit analysis of conduct leading to informational injury.

ICLE’s full comments are available here.

The comments draw upon our article, When ‘Reasonable’ Isn’t: The FTC’s Standard-Less Data Security Standard, forthcoming in the Journal of Law, Economics and Policy.

In her distinguished tenure as a Commissioner and as Acting Chairman of the FTC, Maureen Ohlhausen has done an outstanding job in explaining the tie between robust patent protection and economic growth and innovation (see, for example, her Harvard Journal of Law and Technology article, here).  Her latest public pronouncement on this topic, an October 13 speech entitled “Strong Patent Rights, Strong Economy,” also makes a highly valuable contribution to the patent policy debate.  Ohlhausen’s speech centers on two key points:  “First, strong patent rights are crucial to economic success.  And, second, economically grounded analysis will reveal the right path through thickets of IP [intellectual property] skepticism.”  Ohlhausen concludes with a reaffirmation of the importance of having the United States lead by example on the world stage in defending strong patent rights:

Patents have been at the heart of US innovation since the founding of our country, and respect for patent rights is fundamental to advance innovation.  The United States is more technologically innovative than any other country in the world.  This reality reflects, in part, the property rights that the United States government grants to inventors.  Still, foreign counterparts take or allow the taking of American proprietary technologies without due payment.  For example, emerging competition regimes view “unfairly high royalties” as illegal under antitrust law.  The FTC’s recent policy work offers an important counterweight to this approach, illustrating the important role that patents play in promoting innovation and benefiting consumers.     

In closing, while we may live in an age of patent skepticism, there is hope. Criticism of IP rights frequently does not hold up upon closer examination. Rather, empirical research favors the close tie between strong IP rights and R&D.  This is not to say that changes to the patent system are always unwarranted.  Rather, the key to addressing the U.S. patent system lies in incremental adjustment where necessary based on a firm empirical foundation.  The U.S. economy stands as a shining reminder of everything that American innovation policy has achieved – and intellectual property rights, and patents, are the important cornerstones of those achievements.

Ohlhausen’s remarks are, as always, thoughtful and well worth studying.

In a recent post at the (appallingly misnamed) ProMarket blog (the blog of the Stigler Center at the University of Chicago Booth School of Business — George Stigler is rolling in his grave…), Marshall Steinbaum keeps alive the hipster-antitrust assertion that lax antitrust enforcement — this time in the labor market — is to blame for… well, most? all? of what’s wrong with “the labor market and the broader macroeconomic conditions” in the country.

In this entry, Steinbaum takes particular aim at the US enforcement agencies, which he claims do not consider monopsony power in merger review (and other antitrust enforcement actions) because their current consumer welfare framework somehow doesn’t recognize monopsony as a possible harm.

This will probably come as news to the agencies themselves, whose Horizontal Merger Guidelines devote an entire (albeit brief) section (section 12) to monopsony, noting that:

Mergers of competing buyers can enhance market power on the buying side of the market, just as mergers of competing sellers can enhance market power on the selling side of the market. Buyer market power is sometimes called “monopsony power.”

* * *

Market power on the buying side of the market is not a significant concern if suppliers have numerous attractive outlets for their goods or services. However, when that is not the case, the Agencies may conclude that the merger of competing buyers is likely to lessen competition in a manner harmful to sellers.
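
The underlying economics are standard. As a minimal textbook sketch (my own illustration, not language from the Guidelines): a monopsonist faces an upward-sloping supply curve for the input it buys, so its marginal cost of the input exceeds the price it pays, and it maximizes profit by purchasing less of the input, at a lower price, than a competitive buyer would:

$$
w^{*} = \frac{MRP_L}{1 + 1/\varepsilon_s}, \qquad \varepsilon_s \equiv \frac{dL}{dw}\cdot\frac{w}{L}
$$

where, in a labor-market application, w is the wage, MRP_L is labor’s marginal revenue product, and ε_s is the elasticity of labor supply facing the firm. The wage markdown vanishes as ε_s grows large, which is precisely the Guidelines’ point: where suppliers (here, workers) have “numerous attractive outlets,” there is no significant buyer power to worry about.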

Steinbaum fails to mention the HMGs, but he does point to a US submission to the OECD to make his point. In that document, the agencies state that

The U.S. Federal Trade Commission (“FTC”) and the Antitrust Division of the Department of Justice (“DOJ”) [] do not consider employment or other non-competition factors in their antitrust analysis. The antitrust agencies have learned that, while such considerations “may be appropriate policy objectives and worthy goals overall… integrating their consideration into a competition analysis… can lead to poor outcomes to the detriment of both businesses and consumers.” Instead, the antitrust agencies focus on ensuring robust competition that benefits consumers and leave other policies such as employment to other parts of government that may be specifically charged with or better placed to consider such objectives.

Steinbaum, of course, cites only the first sentence. And he uses it as a jumping-off point to attack the notion that antitrust is an improper tool for labor market regulation. But if he had just read a little bit further in the (very short) document he cites, Steinbaum might have discovered that the US antitrust agencies have, in fact, challenged the exercise of collusive monopsony power in labor markets. As footnote 19 of the OECD submission notes:

Although employment is not a relevant policy goal in antitrust analysis, anticompetitive conduct affecting terms of employment can violate the Sherman Act. See, e.g., DOJ settlement with eBay Inc. that prevents the company from entering into or maintaining agreements with other companies that restrain employee recruiting or hiring; FTC settlement with ski equipment manufacturers settling charges that companies illegally agreed not to compete for one another’s ski endorsers or employees. (Emphasis added).

And, ironically, while asserting that labor market collusion doesn’t matter to the agencies, Steinbaum himself points to “the Justice Department’s 2010 lawsuit against Silicon Valley employers for colluding not to hire one another’s programmers.”

Steinbaum instead opts for a willful misreading of the first sentence of the OECD submission. But what the OECD document refers to, of course, are situations where two firms merge, no market power is created (either in input or output markets), but people are laid off because the merged firm does not need all of, say, the IT and human resources employees previously employed in the pre-merger world.

Does Steinbaum really think this is grounds for challenging the merger on antitrust grounds?

Actually, his post suggests that he does indeed think so, although he doesn’t come right out and say it. What he does say — as he must in order to bring antitrust enforcement to bear on the low- and unskilled labor markets (e.g., burger flippers; retail cashiers; Uber drivers) he purports to care most about — is that:

Employers can have that control [over employees, as opposed to independent contractors] without first establishing themselves as a monopoly—in fact, reclassification [of workers as independent contractors] is increasingly standard operating procedure in many industries, which means that treating it as a violation of Section 2 of the Sherman Act should not require that outright monopolization must first be shown. (Emphasis added).

Honestly, I don’t have any idea what he means. Somehow, because firms hire independent contractors where at one time long ago they might have hired employees… they engage in Sherman Act violations, even if they don’t have market power? Huh?

I get why he needs to try to make this move: As I intimated above, there is probably not a single firm in the world that hires low- or unskilled workers that has anything approaching monopsony power in those labor markets. Even Uber, the example he uses, has nothing like monopsony power, unless perhaps you define the market (completely improperly) as “drivers already working for Uber.” Even then Uber doesn’t have monopsony power: There can be no (or, at best, virtually no) markets in the world where an Uber driver has no other potential employment opportunities but working for Uber.

Moreover, how on earth is hiring independent contractors evidence of anticompetitive behavior? “Reclassification” is not, in fact, “standard operating procedure.” In many industries, firms often (unilaterally) decide to contract out to specialized firms the hiring of low- and unskilled workers over whom they do not need to exercise direct oversight, thus not employing those workers directly. That isn’t “reclassification” of existing workers who have no choice but to accept their employer’s terms; it’s a long-term evolution of the economy toward specialization, enabled in part by technology.

And if we’re really concerned about what “employee” and “independent contractor” mean for workers and employment regulation, we should reconsider those outdated categories. Firms are faced with a binary choice: hire employees or independent contractors. Neither really fits many of today’s employment arrangements very well, but that’s the choice firms are given. That they sometimes choose “independent contractor” over “employee” is hardly evidence of anticompetitive conduct meriting antitrust enforcement.

The point is: The notion that any of this is evidence of monopsony power, or that the antitrust enforcement agencies don’t care about monopsony power — because, Bork! — is absurd.

Even more absurd is the notion that the antitrust laws should be used to effect Steinbaum’s preferred market regulations — independent of proof of actual anticompetitive effect. I get that it’s hard to convince Congress to pass the precise laws you want all the time. But simply routing around Congress and using the antitrust statutes as a sort of meta-legislation to enact whatever happens to be Marshall Steinbaum’s preferred regulation du jour is ridiculous.

Which is a point the OECD submission made (again, if only Steinbaum had read beyond the first sentence…):

[T]wo difficulties with expanding the scope of antitrust analysis to include employment concerns warrant discussion. First, a full accounting of employment effects would require consideration of short-term effects, such as likely layoffs by the merged firm, but also long-term effects, which could include employment gains elsewhere in the industry or in the economy arising from efficiencies generated by the merger. Measuring these effects would [be extremely difficult]. Second, unless a clear policy spelling out how the antitrust agency would assess the appropriate weight to give employment effects in relation to the proposed conduct or transaction’s procompetitive and anticompetitive effects could be developed, [such enforcement would be deeply problematic, and essentially arbitrary].

To be sure, the agencies don’t sufficiently acknowledge that they already face the problem of reconciling multidimensional effects — e.g., short-, medium-, and long-term price effects, innovation effects, product quality effects, etc. But there is no reason to exacerbate the problem by asking them to also consider employment effects. Especially not in Steinbaum’s world, in which certain employment effects are problematic even absent evidence of market power or actual anticompetitive harm, just because he says so.

Consider how this might play out:

Suppose that Pepsi, Coca-Cola, Dr. Pepper… and every other soft drink company in the world attempted to merge, creating a monopoly soft drink manufacturer. In what possible employment market would even this merger create a monopsony in which anticompetitive harm could be tied to the merger? In the market for “people who know soft drink secret formulas?” Yet Steinbaum would have the Sherman Act enforced against such a merger not because it might create a product market monopoly, but because the existence of a product market monopoly means the firm must be able to do bad things in other markets, as well. For Steinbaum and all the other scolds who see concentration as the source of all evil, the dearth of evidence to support such a claim is no barrier (on which, see, e.g., this recent, content-less NYT article (that, naturally, quotes Steinbaum) on how “big business may be to blame” for the slowing rate of startups).

The point is, monopoly power in a product market does not necessarily have any relationship to monopsony power in the labor market. Simply asserting that it does — and lambasting the enforcement agencies for not just accepting that assertion — is farcical.

The real question, however, is what has happened to the University of Chicago that it continues to provide a platform for such nonsense?