Archives For consumer protection

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US would become the only country in the world to move backward, away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and sometimes even violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013 to 2017, Director of the FTC’s Bureau of Economics from 2012 to 2013, and Chief Economist at the FCC from 1999 to 2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, perhaps nowhere has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to progressively reduce ex ante sector-specific regulation as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets in which national regulators were to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries because an operator held “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, reduced it further to four, all of them wholesale markets that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Several non-European countries, including Mexico, have followed this model as well.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
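As a rough illustration of what that harmonization entails (this is the generic textbook WACC formula, not necessarily the ACM’s exact parameterization), the WACC weights a firm’s costs of equity and debt by their shares of total capital, with debt adjusted for its tax deductibility:

$$\mathrm{WACC} \;=\; \frac{E}{E+D}\,r_E + \frac{D}{E+D}\,r_D\,(1 - T_c)$$

where $E$ and $D$ are the market values of equity and debt, $r_E$ and $r_D$ their respective required returns, and $T_c$ the corporate tax rate. A merged regulator can estimate these inputs once and apply them consistently, rather than having each sectoral agency derive them separately.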

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture, as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also improved knowledge transfer within the agency. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties among four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining these functions in a single regulator, New Zealand asserts, it can administer government operations more cost-effectively. Combining regulatory functions also creates spillover benefits: competition analysis, for example, is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can draw on staff with detailed and valuable industry knowledge. Like the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep: it eliminates the prodding required to induce a sector regulator to roll back regulation as technology evolves, and it curbs the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the choice between competition enforcement and telecom regulation. Nothing about these cases suggests that sector-specific economic regulation of telecom is inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practice by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify the decision. The EU placed net neutrality under the universal service and users’ rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that even where a telecom regulator perceives a novel problem and struggles to give clear guidance, competition law, grounded in economic principles, can bring a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. Chorus, the wholesale ISP, states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

As the Federal Communications Commission (FCC) prepares to revoke its economically harmful “net neutrality” order and replace it with a free market-oriented “Restoring Internet Freedom Order,” the FCC and the Federal Trade Commission (FTC) commendably have announced a joint policy for cooperation on online consumer protection. According to a December 11 FTC press release:

The Federal Trade Commission and Federal Communications Commission (FCC) announced their intent to enter into a Memorandum of Understanding (MOU) under which the two agencies would coordinate online consumer protection efforts following the adoption of the Restoring Internet Freedom Order.

“The Memorandum of Understanding will be a critical benefit for online consumers because it outlines the robust process by which the FCC and FTC will safeguard the public interest,” said FCC Chairman Ajit Pai. “Instead of saddling the Internet with heavy-handed regulations, we will work together to take targeted action against bad actors. This approach protected a free and open Internet for many years prior to the FCC’s 2015 Title II Order and it will once again following the adoption of the Restoring Internet Freedom Order.”

“The FTC is committed to ensuring that Internet service providers live up to the promises they make to consumers,” said Acting FTC Chairman Maureen K. Ohlhausen. “The MOU we are developing with the FCC, in addition to the decades of FTC law enforcement experience in this area, will help us carry out this important work.”

The draft MOU, which is being released today, outlines a number of ways in which the FCC and FTC will work together to protect consumers, including:

  • The FCC will review informal complaints concerning the compliance of Internet service providers (ISPs) with the disclosure obligations set forth in the new transparency rule. Those obligations include publicly providing information concerning an ISP’s practices with respect to blocking, throttling, paid prioritization, and congestion management. Should an ISP fail to make the required disclosures—either in whole or in part—the FCC will take enforcement action.
  • The FTC will investigate and take enforcement action as appropriate against ISPs concerning the accuracy of those disclosures, as well as other deceptive or unfair acts or practices involving their broadband services.
  • The FCC and the FTC will broadly share legal and technical expertise, including the secure sharing of informal complaints regarding the subject matter of the Restoring Internet Freedom Order. The two agencies also will collaborate on consumer and industry outreach and education.

The FCC’s proposed Restoring Internet Freedom Order, which the agency is expected to vote on at its December 14 meeting, would reverse a 2015 agency decision to reclassify broadband Internet access service as a Title II common carrier service. This previous decision stripped the FTC of its authority to protect consumers and promote competition with respect to Internet service providers because the FTC does not have jurisdiction over common carrier activities.

The FCC’s Restoring Internet Freedom Order would return jurisdiction to the FTC to police the conduct of ISPs, including with respect to their privacy practices. Once adopted, the order will also require broadband Internet access service providers to disclose their network management practices, performance, and commercial terms of service. As the nation’s top consumer protection agency, the FTC will be responsible for holding these providers to the promises they make to consumers.

Particularly noteworthy is the suggestion that the FCC and FTC will work to curb regulatory duplication and competitive empire building — a boon to Internet-related businesses that would be harmed by regulatory excess and uncertainty. Stay tuned for future developments.

The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.

The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.

But while Hawley’s investigation may jump-start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.

According to the press release issued by the AG’s office:

[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.  

The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.

Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:

We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.

But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.

The antitrust issues

To begin with, AG Hawley references the EU antitrust investigation as evidence that

this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.

True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:

  • United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
  • South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
  • Canada Competition Bureau, 2016. The CCB closed a three-year-long investigation into Google’s search practices without taking any action.

Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.

As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:

Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.

The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.

The Yelp Claim

Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”

While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:

Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, Google, asserting fair use and without a license from Yelp, displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.

In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:

make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….

The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.

Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).

The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.

It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.

Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.

To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.

For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.

The privacy issues

The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”

Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.

The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:

  • “[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
  • “Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
  • “[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [(1)] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
  • Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports […] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”

What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?

Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?

And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?

Lest anyone think the FTC is falling down on the job, a year after it issued that original consent order, the Commission fined Google $22.5 million for violating the order in a questionable decision that was signed on to by all of the FTC’s Commissioners (both Republican and Democrat) — except the one who thought it didn’t go far enough.

That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.

So what’s really going on in Jefferson City?

While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).

To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.

Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?

Even when not politically motivated, state enforcement of consumer protection acts (CPAs) is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:

[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.

AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.

Which raises the spectre of a further problem with the Missouri case: “rent extraction.”

It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.

It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.  

Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.

Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.

But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):

Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.

Missouri, in other words, may just be carrying Yelp’s water.

The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”

The FTC will hold an “Informational Injury Workshop” in December “to examine consumer injury in the context of privacy and data security.” Defining the scope of cognizable harm that may result from the unauthorized use or third-party hacking of consumer information is, to be sure, a crucial inquiry, particularly as ever-more information is stored digitally. But the Commission — rightly — is aiming at more than mere definition. As it notes, the ultimate objective of the workshop is to address questions like:

How do businesses evaluate the benefits, costs, and risks of collecting and using information in light of potential injuries? How do they make tradeoffs? How do they assess the risks of different kinds of data breach? What market and legal incentives do they face, and how do these incentives affect their decisions?

How do consumers perceive and evaluate the benefits, costs, and risks of sharing information in light of potential injuries? What obstacles do they face in conducting such an evaluation? How do they evaluate tradeoffs?

Understanding how businesses and consumers assess the risk and cost “when information about [consumers] is misused,” and how they conform their conduct to that risk, entails understanding not only the scope of the potential harm, but also the extent to which conduct affects the risk of harm. This, in turn, requires an understanding of the FTC’s approach to evaluating liability under Section 5 of the FTC Act.

The problem, as we discuss in comments submitted by the International Center for Law & Economics to the FTC for the workshop, is that the Commission’s current approach troublingly mixes the required separate analyses of risk and harm, with little elucidation of either.

The core of the problem arises from the Commission’s reliance on what it calls a “reasonableness” standard for its evaluation of data security. By its nature, a standard that assigns liability for only unreasonable conduct should incorporate concepts resembling those of a common law negligence analysis — e.g., establishing a standard of due care, determining causation, evaluating the costs of and benefits of conduct that would mitigate the risk of harm, etc. Unfortunately, the Commission’s approach to reasonableness diverges from the rigor of a negligence analysis. In fact, as it has developed, it operates more like a strict liability regime in which largely inscrutable prosecutorial discretion determines which conduct, which firms, and which outcomes will give rise to liability.

Most troublingly, coupled with the Commission’s untenably lax (read: virtually nonexistent) evidentiary standards, the extremely liberal notion of causation embodied in its “reasonableness” approach means that the mere storage of personal information, even absent any data breach, could amount to an unfair practice under the Act — clearly not a “reasonable” result.

The notion that a breach itself can constitute injury will, we hope, be taken up during the workshop. But even if injury is limited to a particular type of breach — say, one in which sensitive, personal information is exposed to a wide swath of people — unless the Commission’s definition of what it means for conduct to be “likely to cause” harm is fixed, it will virtually always be the case that storage of personal information could conceivably lead to the kind of breach that constitutes injury. In other words, better defining the scope of injury does little to cabin the scope of the agency’s discretion when conduct creating any risk of that injury is actionable.

Our comments elaborate on these issues and also provide our thoughts on how the subjective nature of informational injuries can fit into Section 5, with a particular focus on the problem of assessing informational injury given evolving social context, and the need to appropriately assess benefits in any cost-benefit analysis of conduct leading to informational injury.

ICLE’s full comments are available here.

The comments draw upon our article, When ‘Reasonable’ Isn’t: The FTC’s Standard-Less Data Security Standard, forthcoming in the Journal of Law, Economics and Policy.

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” It’s also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).
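Combining the Commission’s own figures gives a back-of-the-envelope sense of the claimed magnitude (the arithmetic is mine, not the Commission’s): if the top generic result draws about 35% of clicks and demotion to third position halves its clicks, the implied click share at rank three is roughly

$$0.35 \times (1 - 0.50) \approx 0.175,$$

or about 17.5%, still well above the roughly 1% click share of the first result on page 2.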

Whatever truth there is in the characterization that placement is more important than relevance in influencing user behavior, the evidence cited by the Commission to demonstrate that doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different from the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich, and more attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. In fact, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but that, apparently, figure nowhere in the Commission’s analysis.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors complaining that the world is evolving around them don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

I’ll be participating in two excellent antitrust/consumer protection events next week in DC, both of which may be of interest to our readers:

5th Annual Public Policy Conference on the Law & Economics of Privacy and Data Security

hosted by the GMU Law & Economics Center’s Program on Economics & Privacy, in partnership with the Future of Privacy Forum and the Journal of Law, Economics & Policy.

Conference Description:

Data flows are central to an increasingly large share of the economy. A wide array of products and business models—from the sharing economy and artificial intelligence to autonomous vehicles and embedded medical devices—rely on personal data. Consequently, privacy regulation leaves a large economic footprint. As with any regulatory enterprise, the key to sound data policy is striking a balance between competing interests and norms that leaves consumers better off; finding an approach that addresses privacy concerns, but also supports the benefits of technology is an increasingly complex challenge. Not only is technology continuously advancing, but individual attitudes, expectations, and participation vary greatly. New ideas and approaches to privacy must be identified and developed at the same pace and with the same focus as the technologies they address.

This year’s symposium will include panels on Unfairness under Section 5: Unpacking “Substantial Injury”, Conceptualizing the Benefits and Costs from Data Flows, and The Law and Economics of Data Security.

I will be presenting a draft paper, co-authored with Kristian Stout, on the FTC’s reasonableness standard in data security cases following the Commission’s decision in LabMD, entitled When “Reasonable” Isn’t: The FTC’s Standard-less Data Security Standard.

Conference Details:

  • Thursday, June 8, 2017
  • 8:00 am to 3:40 pm
  • at George Mason University, Founders Hall (next door to the Law School)
    • 3351 Fairfax Drive, Arlington, VA 22201

Register here

View the full agenda here


The State of Antitrust Enforcement

hosted by the Federalist Society.

Panel Description:

Antitrust policy during much of the Obama Administration was a continuation of the Bush Administration’s minimal involvement in the market. However, at the end of President Obama’s term, there was a significant pivot to investigations and blocks of high profile mergers such as Halliburton-Baker Hughes, Comcast-Time Warner Cable, Staples-Office Depot, Sysco-US Foods, and Aetna-Humana and Anthem-Cigna. How will or should the new Administration analyze proposed mergers, including certain high profile deals like Walgreens-Rite Aid, AT&T-Time Warner, Inc., and DraftKings-FanDuel?

Join us for a lively luncheon panel discussion that will cover these topics and the anticipated future of antitrust enforcement.

Speakers:

  • Albert A. Foer, Founder and Senior Fellow, American Antitrust Institute
  • Professor Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Honorable Joshua D. Wright, Professor of Law, George Mason University School of Law
  • Moderator: Honorable Ronald A. Cass, Dean Emeritus, Boston University School of Law and President, Cass & Associates, PC

Panel Details:

  • Friday, June 9, 2017
  • 12:00 pm to 2:00 pm
  • at the National Press Club, MWL Conference Rooms
    • 529 14th Street, NW, Washington, DC 20045

Register here

Hope to see everyone at both events!

Today, the International Center for Law & Economics (ICLE) released a study updating our 2014 analysis of the economic effects of the Durbin Amendment to the Dodd-Frank Act.

The new paper, Unreasonable and Disproportionate: How the Durbin Amendment Harms Poorer Americans and Small Businesses, by ICLE scholars Todd J. Zywicki, Geoffrey A. Manne, and Julian Morris, can be found here; a Fact Sheet highlighting the paper’s key findings is available here.

Introduced as part of the Dodd-Frank Act in 2010, the Durbin Amendment sought to reduce the interchange fees assessed by large banks on debit card transactions. In the words of its primary sponsor, Sen. Richard Durbin, the Amendment aspired to help “every single Main Street business that accepts debit cards keep more of their money, which is a savings they can pass on to their consumers.”

Unfortunately, although the Durbin Amendment did generate benefits for big-box retailers, ICLE’s 2014 analysis found that it had actually harmed many other merchants and imposed substantial net costs on the majority of consumers, especially those from lower-income households.

In the current study, we analyze a welter of new evidence and arguments to assess whether time has ameliorated or exacerbated the Amendment’s effects. The findings in this report expand upon and reinforce our conclusions from 2014:

Relative to the period before the Durbin Amendment, almost every segment of the interrelated retail, banking, and consumer finance markets has been made worse off.

Predictably, the removal of billions of dollars in interchange fee revenue has led to the imposition of higher bank fees and reduced services for banking consumers.

In fact, millions of households, regardless of income level, have been adversely affected by the Durbin Amendment through higher overdraft fees, increased minimum balances, reduced access to free checking, higher ATM fees, and lost debit card rewards, among other things.

Nor is there any evidence that merchants have lowered prices for retail consumers; for many small-ticket items, in fact, prices have been driven up.

Contrary to Sen. Durbin’s promises, in other words, increased banking costs have not been offset by lower retail prices.
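To see how a price cap could actually raise costs on the small-ticket purchases noted above, consider a stylized example. The rates here are illustrative assumptions that roughly track pre-Durbin network schedules and the Regulation II cap, not figures from our study: pre-Durbin, small-ticket debit interchange ran on the order of 1.55% plus $0.04, while after the Amendment the networks eliminated small-ticket discounts for covered issuers, pricing such transactions near the cap of $0.21 plus 0.05%. For a $3 purchase:

$$\underbrace{1.55\% \times \$3 + \$0.04}_{\text{pre-Durbin small-ticket rate}} \approx \$0.09 \qquad \text{versus} \qquad \underbrace{\$0.21 + 0.05\% \times \$3}_{\text{post-Durbin cap}} \approx \$0.21$$

On these assumptions, a merchant selling a $3 cup of coffee could face more than double the interchange cost after the Amendment — one plausible mechanism behind the small-ticket price increases.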

At the same time, although large merchants continue to reap a Durbin Amendment windfall, there remains no evidence that small merchants have realized any interchange cost savings — indeed, many have suffered cost increases.

And all of these effects fall hardest on the poor. Hundreds of thousands of low-income households have chosen (or been forced) to exit the banking system, with the result that they face higher costs, difficulty obtaining credit, and complications receiving and making payments — all without offset in the form of lower retail prices.

Finally, the 2017 study also details a new trend that was not apparent when we examined the data three years ago: Contrary to our findings then, the two-tier system of interchange fee regulation (which exempts issuing banks with under $10 billion in assets) no longer appears to be protecting smaller banks from the Durbin Amendment’s adverse effects.

This week the House begins consideration of the Amendment’s repeal as part of Rep. Hensarling’s CHOICE Act. Our study makes clear that the Durbin price-control experiment has proven a failure, and that repeal is, indeed, the only responsible option.

Click on the following links to read:

Full Paper

Fact Sheet

Summary

In a recent long-form article in the New York Times, reporter Noam Scheiber set out to detail some of the ways Uber (and similar companies, but mainly Uber) is engaged in “an extraordinary experiment in behavioral science to subtly entice an independent work force to maximize its growth.”

That characterization seems innocuous enough, but it is apparent early on that Scheiber’s aim is not only to inform but also, if not primarily, to deride these efforts. The title of the piece, in fact, sets the tone:

How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons

Uber and its relationship with its drivers are variously described by Scheiber in the piece as secretive, coercive, manipulative, dominating, and exploitative, among other things. As Scheiber describes it, the article sets out to reveal how

even as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

What’s so galling about the piece is that, if you strip away the biased and frequently misguided framing, it presents a truly engaging picture of some of the ways that Uber sets about solving a massively complex optimization problem, abetted by significant agency costs.

So I did just that: I stripped away the detritus, added essential (but omitted) context, and edited the article to fix the anti-Uber bias, the one-sided presentation, the mischaracterizations, and the fundamentally non-economic presentation of what is, at its core, a fascinating illustration of some basic problems (and solutions) from industrial organization economics. (For what it’s worth, Scheiber should know better. After all, “He holds a master’s degree in economics from the University of Oxford, where he was a Rhodes Scholar, and undergraduate degrees in math and economics from Tulane University.”)

In my retelling, the title becomes:

How Uber Uses Innovative Management Tactics to Incentivize Its Drivers

My transformed version of the piece, with critical commentary in the form of tracked changes to the original, is here (pdf).

It’s a long (and, as I said, fundamentally interesting) piece, with cool interactive graphics, well worth the read (well, at least in my retelling, IMHO). Below is just a taste of the edits and commentary I added.

For example, where Scheiber writes:

Uber exists in a kind of legal and ethical purgatory, however. Because its drivers are independent contractors, they lack most of the protections associated with employment. By mastering their workers’ mental circuitry, Uber and the like may be taking the economy back toward a pre-New Deal era when businesses had enormous power over workers and few checks on their ability to exploit it.

With my commentary (here integrated into final form rather than tracked), that paragraph becomes:

Uber operates under a different set of legal constraints, however, also duly enacted and under which millions of workers have profitably worked for decades. Because its drivers are independent contractors, they receive their compensation largely in dollars rather than government-mandated “benefits” that remove some of the voluntariness from employer/worker relationships. And under some such mandates — overtime pay, for example — the Uber business model, built in part on offering flexible incentives to match supply and demand using prices and compensation, would be next to impossible. It is precisely through appealing to drivers’ self-interest that Uber and the like may be moving the economy forward to a new era when businesses and workers have more flexibility, much to the benefit of all.

Elsewhere, Scheiber’s bias is a bit more subtle, but no less real. Thus, he writes:

As he tried to log off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted.

With my edits and commentary, that paragraph becomes:

As he started the process of logging off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted, but the former was listed first. It’s anyone’s guess whether either characteristic — placement or coloring — had any effect on drivers’ likelihood of clicking one button or the other.

And one last example. Scheiber writes:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, there is another way to think of the logic of forward dispatch: It overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

This pre-emptive hard-wiring can have a huge influence on behavior, said David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably, as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be.

Here’s how I would recast that, and add some much-needed economics:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies — by giving them more income-earning opportunities.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, and seems like another win-win, some critics have tried to paint even this means of satisfying both driver and consumer preferences in a negative light by claiming that the forward dispatch algorithm overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

Tweaks like these put paid to the arguments that Uber is simply trying to abuse its drivers. And yet, critics continue to make such claims:

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

It’s difficult to take seriously claims that Uber “abuses” drivers by setting a default that drivers almost certainly prefer; surely drivers seek out another fare following the last fare more often than they seek out another bathroom break. In any case, the difference between one default and the other is a small change in the number of times drivers might have to push a single button; hardly a huge impediment.

But such claims persist, nevertheless. Setting a trivially different default can have a huge influence on behavior, claims David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably — and to change the subject — as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be. But there are any number of defenses of this practice, from both a driver- and consumer-welfare standpoint. Not least, such disclosure could well create isolated scarcity for a huge range of individual ride requests (as opposed to the general scarcity during a “surge”), leading to longer wait times, the need to adjust prices for consumers on the basis of individual rides, and more intense competition among drivers for the most profitable rides. Given these and other explanations, it is extremely unlikely that the practice is actually aimed at “abusing” drivers.
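(For the technically inclined, here is a minimal sketch, in Python, of how a forward-dispatch matcher like the one described above might work. The names, the single-ride queue, and the pause flag are my own illustrative assumptions, not Uber’s or Lyft’s actual implementation.)

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Driver:
    name: str
    minutes_until_free: float          # 0.0 if idle; time left on the current trip otherwise
    accepting_requests: bool = True    # the in-app "pause" toggle described in the excerpt
    queued_ride: Optional[str] = None  # this sketch queues at most one ride ahead

def forward_dispatch(drivers: list[Driver], ride_id: str,
                     pickup_minutes: Callable[[Driver], float]) -> Optional[Driver]:
    """Queue a new ride with whichever driver can begin it soonest.

    Drivers still finishing a trip are eligible -- that is the "forward"
    part -- but drivers who have paused requests, or who already have a
    ride queued, are skipped.
    """
    eligible = [d for d in drivers if d.accepting_requests and d.queued_ride is None]
    if not eligible:
        return None
    # A driver two minutes from drop-off can beat an idle driver ten
    # minutes away: what matters is total time until pickup.
    best = min(eligible, key=lambda d: d.minutes_until_free + pickup_minutes(d))
    best.queued_ride = ride_id
    return best

# The scenario from the article: the busy-but-near driver gets the ride.
idle_far = Driver("idle, 10 minutes from rider", minutes_until_free=0.0)
busy_near = Driver("dropping off, 2 minutes from rider", minutes_until_free=2.0)
etas = {id(idle_far): 10.0, id(busy_near): 2.0}
assert forward_dispatch([idle_far, busy_near], "ride-42", lambda d: etas[id(d)]) is busy_near
```

Note that in this sketch the pause button is simply a flag the matching step respects; as the excerpted passage notes, the real feature re-engages once a driver accepts the next ride.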

As they say, read the whole thing!

What does it mean to “own” something? A simple question (with a complicated answer, of course) that, astonishingly, goes unasked in a recent article in the Pennsylvania Law Review entitled What We Buy When We “Buy Now,” by Aaron Perzanowski and Chris Hoofnagle (hereafter “P&H”). But how can we reasonably answer the question they pose without first trying to understand the nature of property interests?

P&H set forth a simplistic thesis for their piece: when an e-commerce site uses the term “buy” to indicate the purchase of digital media (instead of the term “license”), it deceives consumers. This is so, the authors assert, because the common usage of the term “buy” indicates that there will be some conveyance of property that necessarily includes absolute rights such as alienability, descendibility, and excludability, and digital content doesn’t generally come with these attributes. The authors seek to establish this deception through a poorly constructed survey regarding consumers’ understanding of the parameters of their property interests in digitally acquired copies. (The survey’s considerable limitations are a topic for another day….)

The issue is more than merely academic: NTIA and the USPTO have just announced that they will hold a public meeting

to discuss how best to communicate to consumers regarding license terms and restrictions in connection with online transactions involving copyrighted works… [as a precursor to] the creation of a multistakeholder process to establish best practices to improve consumers’ understanding of license terms and restrictions in connection with online transactions involving creative works.

Whatever the results of that process, it should not begin, or end, with P&H’s problematic approach.

Getting to their conclusion that platforms are engaged in deceptive practices requires two leaps of faith: First, that property interests are absolute and that any restraint on the use of “property” is inconsistent with the notion of ownership; and second, that consumers’ stated expectations (even assuming that they were measured correctly) alone determine the appropriate contours of legal (and economic) property interests. Both leaps are meritless.

Property and ownership are not absolute concepts

P&H are in such a rush to condemn downstream restrictions on the alienability of digital copies that they fail to recognize that “property” and “ownership” are not absolute terms, and are capable of being properly understood only contextually. Our very notions of what objects may be capable of ownership change over time, along with the scope of authority over owned objects. For P&H, the fact that there are restrictions on the use of an object means that it is not properly “owned.” But that overlooks our everyday understanding of the nature of property.

Ownership is far more complex than P&H allow, and ownership limited by certain constraints is still ownership. As Armen Alchian and Harold Demsetz note in The Property Right Paradigm (1973):

In common speech, we frequently speak of someone owning this land, that house, or these bonds. This conversational style undoubtedly is economical from the viewpoint of quick communication, but it masks the variety and complexity of the ownership relationship. What is owned are rights to use resources, including one’s body and mind, and these rights are always circumscribed, often by the prohibition of certain actions. To “own land” usually means to have the right to till (or not to till) the soil, to mine the soil, to offer those rights for sale, etc., but not to have the right to throw soil at a passerby, to use it to change the course of a stream, or to force someone to buy it. What are owned are socially recognized rights of action. (Emphasis added).

Literally, everything we own comes with a range of limitations on our use rights. Literally. Everything. So starting from the position that limitations on use mean something is not, in fact, owned is absurd.

Moreover, in defining what we buy when we buy digital goods by reference to analog goods, P&H are comparing apples and oranges, without acknowledging that both apples and oranges are bought.

There has been a fair amount of discussion about the nature of digital content transactions (including by the USPTO and NTIA), and whether they are analogous to traditional sales of objects or more properly characterized as licenses. But this is largely a distinction without a difference; one needn’t resolve it to see that P&H’s assertion of deception is unwarranted.

Quite simply, we are accustomed to buying licenses as well as products. Whenever we buy a ticket — e.g., an airline ticket or a ticket to the movies — we are buying the right to use something or gain some temporary privilege. These transactions are governed by the terms of the license. But we certainly buy tickets, no? Alchian and Demsetz again:

The domain of demarcated uses of a resource can be partitioned among several people. More than one party can claim some ownership interest in the same resource. One party may own the right to till the land, while another, perhaps the state, may own an easement to traverse or otherwise use the land for specific purposes. It is not the resource itself which is owned; it is a bundle, or a portion, of rights to use a resource that is owned. In its original meaning, property referred solely to a right, title, or interest, and resources could not be identified as property any more than they could be identified as right, title, or interest. (Emphasis added).

P&H essentially assert that restrictions on the use of property are so inconsistent with the notion of property that it would be deceptive to describe the acquisition transaction as a purchase. But such a claim completely overlooks the fact that there are restrictions on any use of property in general, and on ownership of copies of copyright-protected materials in particular.

Take analog copies of copyright-protected works. While the lawful owner of a copy is able to lend that copy to a friend, sell it, or even use it as a hammer or paperweight, he or she cannot offer it for rental (for certain kinds of works), cannot reproduce it, may not publicly perform or broadcast it, and may not use it to bludgeon a neighbor. In short, there are all kinds of restrictions on the use of said object — yet P&H have little problem with defining the relationship of person to object as “ownership.”

Consumers’ understanding of all the terms of exchange is a poor metric for determining the nature of property interests

P&H make much of the assertion that most users don’t “know” the precise terms that govern the allocation of rights in digital copies; this is the source of the “deception” they assert. But there is a cost to marking out the precise terms of use with perfect specificity (no contract specifies every eventuality), a cost to knowing the terms perfectly, and a cost to caring about them.

When we buy digital goods, we probably care a great deal about a few terms. For a digital music file, for example, we care first and foremost about whether it will play on our device(s). Other terms are of diminishing importance. Users certainly care whether they can play a song when offline, for example, but whether their children will be able to play it after they die? Not so much. That eventuality may, in fact, be specified in the license, but the nature of this particular ownership relationship includes a degree of rational ignorance on the users’ part: The typical consumer simply doesn’t care. In other words, she is, in Nobel-winning economist Herbert Simon’s term, “boundedly rational.” That isn’t deception; it’s a feature of life without which we would be overwhelmed by “information overload” and unable to operate. We have every incentive and ability to know the terms we care most about, and to ignore the ones about which we care little.

Relatedly, P&H also fail to understand the relationship between price and ownership. A digital song that is purchased from Amazon for $.99 comes with a set of potentially valuable attributes. For example:

  • It may be purchased on its own, without the other contents of an album;
  • It never degrades in quality, and it’s extremely difficult to misplace;
  • It may be purchased from one’s living room and be instantaneously available;
  • It can be easily copied or transferred onto multiple devices; and
  • It can be stored in Amazon’s cloud without taking up any of the consumer’s physical memory resources.

In many ways that matter to consumers, digital copies are superior to analog or physical ones. And yet, compared to physical media, on a per-song basis (assuming one could even purchase a physical copy of a single song without purchasing an entire album), $.99 may represent a considerable discount. Moreover, in 1982, when CDs were first released, they cost an average of $15. In 2017 dollars, that would be $38. Yet today most digital album downloads can be found for $10 or less.
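As a quick back-of-the-envelope check (the CPI figures here are my own inputs, not drawn from the text): annual-average CPI-U was roughly 96.5 in 1982 and 245.1 in 2017, so

$$\$15 \times \frac{245.1}{96.5} \approx \$15 \times 2.54 \approx \$38.$$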

Of course, songs purchased on CD or vinyl offer other benefits that a digital copy can’t provide. But the main thing — the ability to listen to the music — is approximately equal, and yet the digital copy offers greater convenience at (often) lower price. It is impossible to conclude that a consumer is duped by such a purchase, even if it doesn’t come with the ability to resell the song.

In fact, given the price-to-value ratio, it is perhaps reasonable to think that consumers know full well (or at least suspect) that there might be some corresponding limitations on use — the inability to resell, for example — that would explain the discount. For some people, those limitations might matter, and those people, presumably, figure out whether such limitations are present before buying a digital album or song. For everyone else, however, the ability to buy a digital song for $.99 — including all of the benefits of digital ownership, but minus the ability to resell — is a good deal, just as it is worth it to a home buyer to purchase a house, regardless of whether it is subject to various easements.

Consumers are, in fact, familiar with “buying” property with all sorts of restrictions

The inability to resell digital goods looms inordinately large for P&H: According to them, by virtue of the fact that digital copies may not be resold, “ownership” is no longer an appropriate characterization of the relationship between the consumer and her digital copy. P&H believe that digital copies of works are sufficiently similar to analog versions that traditional doctrines of exhaustion (which would permit a lawful owner of a copy of a work to dispose of that copy as he or she deems appropriate) should apply equally to digital copies, and thus that the inability to alienate the copy as the consumer wants means that there is no ownership interest per se.

But, as discussed above, even ownership of a physical copy doesn’t convey to the purchaser the right to make or allow any use of that copy. So why should we treat the ability to alienate a copy as the determining factor in whether it is appropriate to refer to the acquisition as a purchase? P&H arrive at this conclusion only through the illogical assertion that

Consumers operate in the marketplace based on their prior experience. We suggest that consumers’ “default” behavior is based on the experiences of buying physical media, and the assumptions from that context have carried over into the digital domain.

P&H want us to believe that consumers can’t distinguish between the physical and virtual worlds, and that the way they use media doesn’t differ between these realms. But consumers do understand (to the extent that they care) that they are buying a different product, with different attributes. Does anyone try to play a vinyl record on his or her phone? There are perceived advantages and disadvantages to different kinds of media purchases. The ability to resell is only one of these — and for many (most?) consumers not likely the most important.

And, furthermore, the notion that consumers better understood their rights — and the limitations on ownership — in the physical world and that they carried these well-informed expectations into the digital realm is fantasy. Are we to believe that the consumers of yore understood that when they bought a physical record they could sell it, but not rent it out? That if they played that record in a public place they would need to pay performance royalties to the songwriter and publisher? Not likely.

Simply put, there is a wide variety of goods and services that we clearly buy, but that have all kinds of attributes that do not fit P&H’s crabbed definition of ownership. For example:

  • We buy tickets to events and membership in clubs (which, depending upon club rules, may not be alienated, and which always lapse for non-payment).
  • We buy houses notwithstanding the fact that in most cases all we own is the right to inhabit the premises for as long as we pay the bank (which actually retains more of the incidents of “ownership”).
  • In fact, we buy real property encumbered by a series of restrictive covenants: Depending upon where we live, we may not be able to build above a certain height, we may not paint the house certain colors, we may not be able to leave certain objects in the driveway, and we may not be able to resell without approval of a board.

We may or may not know (or care) about all of the restrictions on our use of such property. But surely we may accurately say that we bought the property and that we “own” it, nonetheless.

The reality is that we are comfortable with the notion of buying any number of limited property interests — including the purchasing of a license — regardless of the contours of the purchase agreement. The fact that some ownership interests may properly be understood as licenses rather than as some form of exclusive and permanent dominion doesn’t suggest that a consumer is not involved in a transaction properly characterized as a sale, or that a consumer is somehow deceived when the transaction is characterized as a sale — and P&H are surely aware of this.

Conclusion: The real issue for P&H is “digital first sale,” not deception

At root, P&H are not truly concerned about consumer deception; they are concerned about what they view as unreasonable constraints on the “rights” of consumers imposed by copyright law in the digital realm. Resale looms so large in their analysis not because consumers care about it (or are deceived about it), but because the real object of their enmity is the lack of a “digital first sale doctrine” that exactly mirrors the law regarding physical goods.

But Congress has already determined that there are sufficient distinctions between ownership of digital copies and ownership of analog ones to justify treating them differently, notwithstanding ownership of the particular copy. And for good reason: Trade in “used” digital copies is not a secondary market. Such copies are identical to those traded in the primary market and would compete directly with “pristine” digital copies. It makes perfect sense to treat ownership differently in these cases — and still to say that both digital and analog copies are “bought” and “owned.”

P&H’s deep-seated opposition to current law colors and infects their analysis — and, arguably, their failure to be upfront about it is the real deception. When one starts an analysis with an already-identified conclusion, the path from hypothesis to result is unlikely to withstand scrutiny, and that is certainly the case here.

In an October 25 blog commentary posted at this site, Geoffrey Manne and Kristian Stout argued against a proposed Federal Communications Commission (FCC) ban on the use of mandatory arbitration clauses in internet service providers’ consumer service agreements. This proposed ban is just one among many unfortunate features of the latest misguided effort by the FCC to regulate the privacy of data transmitted over the Internet (the FCC Privacy NPRM), which I discussed in an October 27, 2016 Heritage Foundation Legal Memorandum:

The growth of the Internet economy has highlighted the costs associated with the unauthorized use of personal information transmitted online. The federal government’s consumer protection agency, the Federal Trade Commission (FTC), has taken enforcement actions for online privacy violations based on its authority to proscribe “unfair or deceptive” practices affecting commerce. The FTC’s economically influenced case-by-case approach to privacy violations focuses on practices that harm consumers. The FCC has proposed a rule that would impose intrusive privacy regulation on broadband Internet service providers (but not other Internet companies), without regard to consumer harm. If implemented, the FCC’s rule would impose major economic costs and would interfere with neutral implementation of the FTC’s less intrusive approach, as well as the FTC’s lead role in federal regulatory privacy coordination with foreign governments.

My analysis concludes with the following recommendations:

The FCC’s Privacy NPRM is at odds with the pro-competitive, economic welfare enhancing goals of the 1996 Telecommunications Act. It ignores the limitations imposed by that act and, if implemented, would harm consumers and producers and slow innovation. This prompts four recommendations.

The FCC should withdraw the NPRM and leave it to the FTC to oversee all online privacy practices under its Section 5 unfairness and deception authority. The adoption of the Privacy Shield, which designates the FTC as the responsible American privacy oversight agency, further strengthens the case against FCC regulation in this area.

In overseeing online privacy practices, the FTC should employ a very light touch that stresses economic analysis and cost-benefit considerations. Moreover, it should avoid requiring that rigid privacy policy conditions be kept in place for long periods of time through consent decree conditions, in order to allow changing market conditions to shape and improve business privacy policies.

Moreover, the FTC should borrow a page from former FTC Commissioner Joshua Wright by implementing an “economic approach” to privacy. Under such an approach:

  • FTC economists would help make the commission a privacy “thought leader” by developing a rigorous academic research agenda on the economics of privacy, featuring the economic evaluation of industry sectors and practices;
  • The FTC would bear the burden of proof in showing that violations of a company’s privacy policy are material to consumer decision-making;
  • FTC economists would report independently to the FTC about proposed privacy-related enforcement initiatives; and
  • The FTC would publish the views of its Bureau of Economics in all privacy-related consent decrees that are placed on the public record.

The FTC should encourage the European Commission and other foreign regulators to take into account the economics of privacy in developing their privacy regulatory policies. In so doing, it should emphasize that innovation is harmed, the beneficial development of the Internet is slowed, and consumer welfare and rights are undermined through highly prescriptive regulation in this area (well-intentioned though it may be). Relatedly, the FTC and other U.S. government negotiators should argue against adoption of a “one-size-fits-all” global privacy regulation framework.  Such a global framework could harmfully freeze into place over-regulatory policies and preclude beneficial experimentation in alternative forms of “lighter-touch” regulation and enforcement.

Although not a panacea, these recommendations would help deter (or, at least, constrain) the economically harmful government micromanagement of businesses’ privacy practices in the United States and abroad.  The Internet economy would in turn benefit from such a restraint on the grasping hand of big government.

Stay tuned.