We can expect a decision very soon from the High Court of Ireland on last summer’s Irish Data Protection Commission (“IDPC”) decision that placed serious impediments in the way of using “standard contractual clauses” (SCC) to transfer data across the Atlantic. That decision, coupled with the July 2020 Court of Justice of the European Union (CJEU) decision to invalidate the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.
In 2015, the CJEU’s Schrems decision invalidated the longstanding “safe harbor” agreement between the EU and U.S. that was intended to ensure data transfers between the two jurisdictions complied with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In its decision, the court reasoned that U.S. foreign intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens’ data.
The IDPC’s decision to invalidate some SCCs employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens’ data, then SCCs are incapable of satisfying the requirements of EU law.
The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms completely forbidden from doing business in the EU. In the slightly less bad case, firms could be forced to completely localize their data and disrupt the economies of scale that flow from being able to process global data in a unified manner. In any case, the costs for compliance will be massive.
But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.
It seems unlikely that the Irish High Court would simply overturn the IDPC’s ruling. Instead, the IDPC’s decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.
After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days, but practically speaking, it would more likely occur over weeks or months. Assuming we get a decision from the Irish High Court before the end of April, it puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.
That’s not great, but it is not an impossible hurdle to overcome, and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.
U.S. data collection on EU citizens needs to be proportional to the necessities of intelligence gathering. Currently, U.S. intelligence agencies have wide latitude to collect vast amounts of data.
The ombudsperson created under the Privacy Shield agreement to administer foreign citizens’ data requests was not sufficiently insulated from the political process, leaving EU citizens without a means of adequate redress.
As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.
Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.
Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority on policymakers’ radars.
In the battle of ideas, it is quite useful to be able to brandish clear and concise debating points in support of a proposition, backed by solid analysis. Toward that end, in a recent primer about antitrust law published by the Mercatus Center, I advance four reasons to reject neo-Brandeisian critiques of the consumer-welfare-centric approach to antitrust enforcement that was, until very recently, the consensus. My four points, drawn from the primer (with citations deleted and hyperlinks added), are as follows:
First, the underlying assumptions of rising concentration and declining competition on which the neo-Brandeisian critique is largely based (and which are reflected in the introductory legislative findings of the Competition and Antitrust Law Enforcement Reform Act of 2021, introduced by Senator Klobuchar on February 4) lack merit. Chapter 6 of the 2020 Economic Report of the President, dealing with competition policy, summarizes research debunking those assumptions. To begin with, it shows that studies complaining that competition is in decline are fatally flawed. Studies such as one in 2016 by the Council of Economic Advisers rely on overbroad market definitions that say nothing about competition in specific markets, let alone across the entire economy. Indeed, in 2018, Professor Carl Shapiro, chief DOJ antitrust economist in the Obama administration, admitted that a key summary chart in the 2016 study “is not informative regarding overall trends in concentration in well-defined relevant markets that are used by antitrust economists to assess market power, much less trends in concentration in the U.S. economy.” Furthermore, as the 2020 report points out, other literature claiming that competition is in decline rests on a problematic assumption that increases in concentration (even assuming such increases exist) beget softer competition. Problems with this assumption have been understood since at least the 1970s. The most fundamental problem is that there are alternative explanations (such as exploitation of scale economies) for why a market might demonstrate both high concentration and high markups—explanations that are still consistent with procompetitive behavior by firms. (In a related vein, research by other prominent economists has exposed flaws in studies that purport to show a weakening of merger enforcement standards in recent years.)
Finally, the 2020 report notes that the real solution to perceived economic problems may be less government, not more: “As historic regulatory reform across American industries has shown, cutting government-imposed barriers to innovation leads to increased competition, strong economic growth, and a revitalized private sector.”
Second, quite apart from the flawed premises that inform the neo-Brandeisian critique, specific neo-Brandeisian reforms appear highly problematic on economic grounds. Breakups of dominant firms or near prohibitions on dominant firm acquisitions would sacrifice major economies of scale and potential efficiencies of integration, harming consumers without offering any proof that the new market structures in reshaped industries would yield consumer or producer benefits. Furthermore, a requirement that merging parties prove a negative (that the merger will not harm competition) would limit the ability of entrepreneurs and market makers to act on information about misused or underutilized assets through the merger process. This limitation would reduce economic efficiency. After-the-fact studies indicating that a large percentage of mergers do not add wealth and do not otherwise succeed as much as projected miss this point entirely. They ignore what the world would be like if mergers were much more difficult to enter into: a world of lower efficiency and less dynamic economic growth, because there would be less incentive to seek out market-improving opportunities.
Third, one aspect of the neo-Brandeisian approach to antitrust policy is at odds with fundamental notions of fair notice of wrongdoing and equal treatment under neutral principles, notions that are central to the rule of law. In particular, the neo-Brandeisian call for considering a multiplicity of new factors such as fairness, labor, and the environment when enforcing policy is troublesome. There is no neutral principle for assigning weights to such divergent interests, and (even if weights could be assigned) there are no economic tools for accurately measuring how a transaction under review would affect those interests. It follows that abandoning antitrust law’s consumer-welfare standard in favor of an ill-defined multifactor approach would spawn confusion in the private sector and promote arbitrariness in enforcement decisions, undermining the transparency that is a key aspect of the rule of law. Whereas concerns other than consumer welfare may of course be validly considered in setting public policy, they are best dealt with under other statutory schemes, not under antitrust law.
Fourth, and finally, neo-Brandeisian antitrust proposals are not a solution to widely expressed concerns that big companies in general, and large digital platforms in particular, are undermining free speech by censoring content of which they disapprove. Antitrust law is designed to prevent businesses from creating impediments to market competition that reduce economic welfare; it is not well-suited to policing companies’ determinations regarding speech. To the extent that policymakers wish to address speech censorship on large platforms, they should consider other regulatory institutions that would be better suited to the task (such as communications law), while keeping in mind First Amendment limitations on the ability of government to control private speech.
In light of these four points, the primer concludes that the neo-Brandeisian-inspired antitrust “reform” proposals being considered by Congress should be rejected:
[E]fforts to totally reshape antitrust policy into a quasi-regulatory system that arbitrarily blocks and disincentivizes (1) welfare-enhancing mergers and (2) an array of actions by dominant firms are highly troubling. Such interventionist proposals ignore the lack of evidence of serious competitive problems in the American economy and appear arbitrary compared to the existing consumer-welfare-centric antitrust enforcement regime. To use a metaphor, Congress and public officials should avoid a drastic new antitrust cure for an anticompetitive disease that can be handled effectively with existing antitrust medications.
Let us hope that the serious harm associated with neo-Brandeisian legislative “deformation” (a more apt term than reformation) of the antitrust laws is given a full legislative airing before Congress acts.
Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company.
But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.
Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.
The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention).
Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:
But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:
— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.
— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.
— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.
— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.
The report thus asserts that:
The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.
That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]
What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard.
Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark.
Decisions Under Uncertainty
In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.
Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong.
Consider the following passage from FTC economist Ken Heyer’s memo:
The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]
In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.
Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?
In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today.
Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here).
Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than by evidence that erroneous predictions materially affected the outcome of the proceedings.
To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets.
In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.
Putting Erroneous Predictions in Context
So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.
But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.
This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.
In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.
Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:
The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.
FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.
This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.
But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:
When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.
The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:
Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”
It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation).
In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.
The FTC Lawyers’ Weak Case for Prosecuting Google
At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.
Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:
A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.
If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.
The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.
Moreover, as Ben Thompson argues in his Stratechery newsletter:
The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.
This difficulty was deftly highlighted by Heyer’s memo:
If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]
Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.
And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.
Google’s ‘revenue-sharing’ agreements
It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices.
The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance.
To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).
Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:
This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.
This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:
[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.
Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.
Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):
Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.
Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.
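The Klein–Murphy intuition can be illustrated with a stylized sketch (all numbers here are hypothetical, not drawn from the case record): if search engines bid for exclusive default placement, competition among them sets the price of that placement, and the device maker can pass the payment through as a lower device price.

```python
# Stylized sketch of competition for exclusive default placement.
# All values are hypothetical; this only illustrates the intuition that
# bidding for distribution transfers surplus to the device maker, which
# can pass it through to consumers via lower device prices.

def winning_payment(bids):
    """Second-price logic: the winner pays the runner-up's valuation."""
    ordered = sorted(bids.values(), reverse=True)
    return ordered[1] if len(ordered) > 1 else 0

device_cost = 400                                         # hypothetical cost of a phone
bids = {"engine_a": 120, "engine_b": 90, "engine_c": 40}  # per-device valuations of placement

payment = winning_payment(bids)          # competition among bidders sets the payment
effective_price = device_cost - payment  # pass-through lowers the device price

print(effective_price)  # 310: below cost-only pricing of 400
```

With more bidders, or higher rival valuations, the payment rises and the effective device price falls further; that is the sense in which competition for exclusivity benefits consumers.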
Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system.
In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.
Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:
When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.
The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:
Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites….
…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]
More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:
A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control….
…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….
…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk?
Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time.
Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.
Competitor Harm Is Not an Indicator of the Need for Intervention
Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:
Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.
But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents.
This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:
Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives….
…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest….
…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.
Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:
They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.
Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.
When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.
But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.
In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.
The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).
But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.
Misinformation Is Usually Legal
Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:
Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.
Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.
It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.
The Case of Stolen Valor
The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement’s being false did not, by itself, exclude it from First Amendment protection.
Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not permit penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:
Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]
As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.”
While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech.
In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.
A Social Media Ministry of Truth
Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government would not be the most narrowly tailored way to deal with such speech, and it would be bound to chill even true speech.
The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.
Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to regard sponsored content as constituting speech made by the platform, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.
There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so.
Extremely Limited Room to Regulate Extremism
The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:
Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.
Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.
The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”
One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.
By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.
When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:
the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]
In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And they do. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”
Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators.
Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.
What Can the Government Do?
One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedowns, by court order, of speech that has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.
But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.
In order to understand the lack of apparent basis for the European Commission’s claims that AstraZeneca is in breach of its contractual obligations to supply it with vaccine doses, it is necessary to understand the difference between stock and flow.
If I have 1,000 widgets in my warehouse, and agree to sell 700 of them to Ursula, and 600 of them to Boris, I will be unable to perform both contracts. They’re inconsistent with one another, and if I choose to perform my contract with Boris, Ursula will be understandably aggrieved. Is this what AstraZeneca have done? No.
At the time of the contracts AstraZeneca entered into with the Commission and the United Kingdom, no vaccine doses existed. What AstraZeneca promised was to use best reasonable efforts to obtain approval for, and to produce, vaccines, and to deliver what it succeeded in making.
The United Kingdom was involved from an early stage (January/February) in the rollout of what was to become the Oxford/AstraZeneca vaccine. It was a third-party beneficiary of the original licensing agreement of 17 May between Oxford and AstraZeneca, and provided the initial funding of £65 million (quickly and greatly increased). Approval for use was given on 30 December, with the first dose given outside a trial on 4 January.
Each counterparty is entitled to the doses that AstraZeneca succeeds, using best reasonable efforts, in producing under its contract. A metaphor is that each is buying a place in a production queue [Flow]. Neither was buying doses currently in existence [Stock].
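The stock/flow distinction can be made concrete with a toy sketch (all quantities hypothetical, echoing the widget example above): promises against a fixed stock can over-commit, while a flow contract entitles each buyer only to the output its own contract generates.

```python
# Toy illustration of the stock/flow distinction. All quantities are
# hypothetical. Selling from stock can over-commit; a best-reasonable-
# efforts flow contract, by construction, cannot.

# Stock: promises can exceed what exists.
stock = 1000
promised = {"Ursula": 700, "Boris": 600}
over_committed = sum(promised.values()) > stock  # 1300 > 1000

# Flow: each buyer is entitled only to the output of its own contract,
# however large or small that output turns out to be.
def flow_entitlement(output_by_contract, buyer):
    return output_by_contract.get(buyer, 0)

output_by_contract = {"Ursula": 250, "Boris": 400}  # hypothetical production runs
print(over_committed, flow_entitlement(output_by_contract, "Ursula"))
```

On this view, a shortfall in one contract’s production run creates no claim on doses produced under a different counterparty’s contract.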
The metaphor of the queue is, however, somewhat misleading. It implies that the Commission is having to wait behind the United Kingdom. This is wrong. In fact, the Commission (and other parties) are benefitting from the earlier development and ramp-up of production that occurred because of the United Kingdom’s contractual arrangements. Far from being prejudiced by the United Kingdom’s actions, the Commission and others have benefitted from them.
The Commission’s argument is not, and never has been, as some have supposed, that AstraZeneca has failed in its best-reasonable-efforts obligation to manufacture doses. Such an argument does the Commission no good: it would leave it with a claim for damages before a Belgian court in several years’ time. It also seems unlikely that a claim that AstraZeneca was dilatory in rolling out a vaccine in a fraction of the time anyone had achieved before this year, when other suppliers failed to do so altogether, has much prospect of success.
What it (and the Member States) want are doses today.
So, the argument instead is that AstraZeneca has succeeded, and that there are doses in existence to which the Commission is entitled. This is based in part on the frustration of seeing vaccine doses delivered to the United Kingdom from factories from which, under the Commission’s contract, AstraZeneca may deliver doses to the Commission.
The Commission’s position appears untenable. The Commission is entitled to those doses that its supplier succeeds, using best reasonable efforts, in producing under its contract with it. It is not entitled to doses that exist only because of earlier contractual arrangements with an entirely different counterparty.
In practice, which doses are being produced under which contract will be obvious from the fact that most production is being done by subcontractors (AstraZeneca is a relatively small producer). The shortfall in production under the Commission’s contract appears to have been caused by a failure of a sub-contractor in Belgium.
It is because the Commission’s arguments under its contract are so obviously weak that we are now seeing calls for export bans. If there really were any contractual entitlement to what has been produced, and AstraZeneca were in breach of contract in failing to deliver, then the usual civil recourse would be the obvious and easy path for the Commission. The nuclear option is being relied upon because of the lack of any such contractual right.
Conversely there is no equivalence between the United Kingdom requiring that doses that it is contractually entitled to are delivered to it, and the Commission’s proposed export ban.
Two common objections to the above have been put forward, and it is helpful to rule them out. First, the Commission’s contract is governed by Belgian law. However, no rule specific to any jurisdiction is in play here. All that needs to be known is pacta sunt servanda, a principle applicable across Europe.
Second, the UK’s supply contract was only formalised in August. The earlier agreement, however, came months before, as did the funding that has resulted in the doses that now exist for anybody.
Policy discussions about the use of personal data often have “less is more” as a background assumption; that data is overconsumed relative to some hypothetical optimal baseline. This overriding skepticism has been the backdrop for sweeping new privacy regulations, such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR).
More recently, as part of the broad pushback against data collection by online firms, some have begun to call for creating property rights in consumers’ personal data or for data to be treated as labor. Prominent backers of the idea include New York City mayoral candidate Andrew Yang and computer scientist Jaron Lanier.
The discussion has escaped the halls of academia and made its way into popular media. During a recent discussion with Tesla founder Elon Musk, comedian and podcast host Joe Rogan argued that Facebook is “one gigantic information-gathering business that’s decided to take all of the data that people didn’t know was valuable and sell it and make f***ing billions of dollars.” Musk appeared to agree.
The animosity exhibited toward data collection might come as a surprise to anyone who has taken Econ 101. Goods ideally end up with those who value them most. A firm finding profitable ways to repurpose unwanted scraps is just the efficient reallocation of resources. This applies as much to personal data as to literal trash.
Unfortunately, in the policy sphere, few are willing to recognize the inherent trade-off between the value of privacy, on the one hand, and the value of various goods and services that rely on consumer data, on the other. Ideally, policymakers would look to markets to find the right balance, which they often can. When the transfer of data is hardwired into an underlying transaction, parties have ample room to bargain.
But this is not always possible. In some cases, transaction costs will prevent parties from bargaining over the use of data. The question is whether such situations are so widespread as to justify the creation of data property rights, with all of the allocative inefficiencies they entail. Critics wrongly assume the solution is both to create data property rights and to allocate them to consumers. But there is no evidence to suggest that, at the margin, heightened user privacy necessarily outweighs the social benefits that new data-reliant goods and services would generate. Recent experience in the worlds of personalized medicine and the fight against COVID-19 help to illustrate this point.
Data Property Rights and Personalized Medicine
The world is on the cusp of a revolution in personalized medicine. Advances such as the improved identification of biomarkers, CRISPR genome editing, and machine learning could usher in a new wave of treatments that markedly improve health outcomes.
Personalized medicine uses information about a person’s own genes or proteins to prevent, diagnose, or treat disease. Genetic-testing companies like 23andMe or Family Tree DNA, with the large troves of genetic information they collect, could play a significant role in helping the scientific community to further medical progress in this area.
However, despite the obvious potential of personalized medicine, many of its real-world applications are still very much hypothetical. While governments could act in any number of ways to accelerate the movement’s progress, recent policy debates have instead focused more on whether to create a system of property rights covering personal genetic data.
Some raise concerns that it is pharmaceutical companies, not consumers, who will reap the monetary benefits of the personalized medicine revolution, and that advances are achieved at the expense of consumers’ and patients’ privacy. They contend that data property rights would ensure that patients earn their “fair” share of personalized medicine’s future profits.
But it’s worth examining the other side of the coin. There are few things people value more than their health. U.S. government agencies place the value of a statistical life at somewhere between $1 million and $10 million. The commonly used quality-adjusted life year (QALY) metric offers valuations that range from $50,000 to upward of $300,000 per incremental year of life.
It therefore follows that the trivial sums users of genetic-testing kits might derive from a system of data property rights would likely be dwarfed by the value they would enjoy from improved medical treatments. A strong case can be made that policymakers should prioritize advancing the emergence of new treatments, rather than attempting to ensure that consumers share in the profits generated by those potential advances.
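A back-of-the-envelope comparison makes the orders of magnitude concrete. The QALY range is the commonly cited one from above; the annual per-user data payout is a purely hypothetical assumption, chosen only for illustration:

```python
# Hypothetical order-of-magnitude comparison: a per-user data "dividend"
# versus the value of even a small health improvement.
# QALY figures are the commonly cited $50,000-$300,000 range; the
# per-user data payment is an illustrative assumption, not a real number.

QALY_LOW, QALY_HIGH = 50_000, 300_000  # dollars per incremental life year
hypothetical_data_payment = 20         # assumed annual payout per user, dollars

# Fraction of a single QALY that the hypothetical payout would buy,
# using the most favorable (lowest) QALY valuation.
equivalent_qaly_fraction = hypothetical_data_payment / QALY_LOW

print(f"A ${hypothetical_data_payment} payout equals at most "
      f"{equivalent_qaly_fraction:.4%} of one QALY")
```

Even under assumptions generous to the data-dividend side, the payout amounts to a tiny fraction of a single incremental life year, which is the point of the comparison.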
These debates drew increased attention last year, when 23andMe signed a strategic agreement with the pharmaceutical company Almirall, licensing the rights to an antibody that 23andMe had developed. Critics pointed out that 23andMe’s customers, whose data had presumably been used to discover the potential treatment, received no monetary benefits from the deal. Journalist Laura Spinney wrote in The Guardian newspaper:
23andMe, for example, asks its customers to waive all claims to a share of the profits arising from such research. But given those profits could be substantial—as evidenced by the interest of big pharma—shouldn’t the company be paying us for our data, rather than charging us to be tested?
In the deal’s wake, some argued that personal health data should be covered by property rights. A cardiologist quoted in Fortune magazine opined: “I strongly believe that everyone should own their medical data—and they have a right to that.” But this strong belief, however widely shared, ignores important lessons that law and economics has to teach about property rights and the role of contractual freedom.
Why Do We Have Property Rights?
Among the many important features of property rights is that they create “excludability,” the ability of economic agents to prevent third parties from using a given item. In the words of law professor Richard Epstein:
[P]roperty is not an individual conception, but is at root a social conception. The social conception is fairly and accurately portrayed, not by what it is I can do with the thing in question, but by who it is that I am entitled to exclude by virtue of my right. Possession becomes exclusive possession against the rest of the world…
Excludability helps to facilitate the trade of goods, offers incentives to create those goods in the first place, and promotes specialization throughout the economy. In short, property rights create a system of exclusion that supports creating and maintaining valuable goods, services, and ideas.
But property rights are not without drawbacks. Physical or intellectual property rights can confer market power, leading to a suboptimal allocation of resources (though this effect is often outweighed by increased ex ante incentives to create and innovate). Similarly, property rights can give rise to thickets that significantly increase the cost of amassing complementary pieces of property. Often cited are the historic (but contested) examples of tolling on the Rhine River or the airplane patent thicket of the early 20th century. Finally, strong property rights might also lead to holdout behavior, which can be addressed through top-down tools, like eminent domain, or private mechanisms, like contingent contracts.
In short, though property rights—whether they cover physical or information goods—can offer vast benefits, there are cases where they might be counterproductive. This is probably why, throughout history, property laws have evolved to strike a reasonable balance between creating incentives to produce goods and ensuring their efficient allocation and use.
Personal Health Data: What Are We Trying to Incentivize?
There are at least three critical questions we should ask about proposals to create property rights over personal health data.
What goods or behaviors would these rights incentivize or disincentivize that are currently over- or undersupplied by the market?
Are goods over- or undersupplied because of insufficient excludability?
Could these rights undermine the efficient use of personal health data?
Much of the current debate centers on data obtained from direct-to-consumer genetic-testing kits. In this context, almost by definition, firms only obtain consumers’ genetic data with their consent. In western democracies, the rights to bodily integrity and to privacy generally make it illegal to administer genetic tests against a consumer’s or patient’s will. This makes genetic information naturally excludable, so consumers already benefit from what is effectively a property right.
When consumers decide to use a genetic-testing kit, the terms set by the testing firm generally stipulate how their personal data will be used. 23andMe has a detailed policy to this effect, as does Family Tree DNA. In the case of 23andMe, consumers can decide whether their personal information can be used for the purpose of scientific research:
You have the choice to participate in 23andMe Research by providing your consent. … 23andMe Research may study a specific group or population, identify potential areas or targets for therapeutics development, conduct or support the development of drugs, diagnostics or devices to diagnose, predict or treat medical or other health conditions, work with public, private and/or nonprofit entities on genetic research initiatives, or otherwise create, commercialize, and apply this new knowledge to improve health care.
Because this transfer of personal information is hardwired into the provision of genetic-testing services, there is space for contractual bargaining over the allocation of this information. The right to use personal health data will go toward the party that values it most, especially if information asymmetries are weeded out by existing regulations or business practices.
Regardless of data property rights, consumers have a choice: they can purchase genetic-testing services and agree to the provider’s data policy, or they can forgo the services. The service provider cannot obtain the data without entering into an agreement with the consumer. While competition between providers will affect parties’ bargaining positions, and thus the price and terms on which these services are provided, data property rights likely will not.
So, why do consumers transfer control over their genetic data? The main reason is that genetic information is inaccessible and worthless without the addition of genetic-testing services. Consumers must pass through the bottleneck of genetic testing for their genetic data to be revealed and transformed into usable information. It therefore makes sense to transfer the information to the service provider, who is in a much stronger position to draw insights from it. From the consumer’s perspective, the data is not even truly “transferred,” as the consumer had no access to it before the genetic-testing service revealed it. The value of this genetic information is then netted out in the price consumers pay for testing kits.
If personal health data were undersupplied by consumers and patients, testing firms could sweeten the deal and offer them more in return for their data. U.S. copyright law covers original compilations of data, while EU law gives 15 years of exclusive protection to the creators of original databases. Legal protections for trade secrets could also play some role. Thus, firms have some incentives to amass valuable health datasets.
But some critics argue that health data is, in fact, oversupplied. Generally, such arguments assert that agents do not account for the negative privacy externalities suffered by third parties, such as adverse-selection problems in insurance markets. For example, Jay Pil Choi, Doh Shin Jeon, and Byung Cheol Kim argue:
Genetic tests are another example of privacy concerns due to informational externalities. Researchers have found that some subjects’ genetic information can be used to make predictions of others’ genetic disposition among the same racial or ethnic category. … Because of practical concerns about privacy and/or invidious discrimination based on genetic information, the U.S. federal government has prohibited insurance companies and employers from any misuse of information from genetic tests under the Genetic Information Nondiscrimination Act (GINA).
But if these externalities exist (most of the examples cited by scholars are hypothetical), they are likely dwarfed by the tremendous benefits that could flow from the use of personal health data. Put differently, the assertion that “excessive” data collection may create privacy harms should be weighed against the possibility that the same collection may also lead to socially valuable goods and services that produce positive externalities.
In any case, data property rights would do little to limit these potential negative externalities. Consumers and patients are already free to agree to terms that allow or prevent their data from being resold to insurers. It is not clear how data property rights would alter the picture.
Proponents of data property rights often claim they should be associated with some form of collective bargaining. The idea is that consumers might otherwise fail to receive their “fair share” of genetic-testing firms’ revenue. But what critics portray as asymmetric bargaining power might simply be the market signaling that genetic-testing services are in high demand, with room for competitors to enter the market. Shifting rents from genetic-testing services to consumers would undermine this valuable price signal and, ultimately, diminish the quality of the services.
Perhaps more importantly, to the extent that they limit the supply of genetic information—for example, because firms are forced to pay higher prices for data and thus acquire less of it—data property rights might hinder the emergence of new treatments. If genetic data is a key input to develop personalized medicines, adopting policies that, in effect, ration the supply of that data is likely misguided.
Even if policymakers do not directly put their thumb on the scale, data property rights could still harm pharmaceutical innovation. If existing privacy regulations are any guide—notably, the previously mentioned GDPR and CCPA, as well as the federal Health Insurance Portability and Accountability Act (HIPAA)—such rights might increase red tape for pharmaceutical innovators. Privacy regulations routinely limit firms’ ability to put collected data to new and previously unforeseen uses. They also limit parties’ contractual freedom when it comes to gathering consumers’ consent.
At the margin, data property rights would make it more costly for firms to amass socially valuable datasets. This would effectively move the personalized medicine space further away from a world of permissionless innovation, thus slowing down medical progress.
In short, there is little reason to believe health-care data is misallocated. Proposals to reallocate rights to such data based on idiosyncratic distributional preferences threaten to stifle innovation in the name of privacy harms that remain mostly hypothetical.
Data Property Rights and COVID-19
The trade-off between users’ privacy and the efficient use of data also has important implications for the fight against COVID-19. Since the beginning of the pandemic, several promising initiatives have been thwarted by privacy regulations and concerns about the use of personal data. This has potentially prevented policymakers, firms, and consumers from putting information to its optimal social use. High-profile examples have included contact-tracing apps and vaccine “green passes.”
Each of these cases may involve genuine privacy risks. But to the extent that they do, those risks must be balanced against the potential benefits to society. If privacy concerns prevent us from deploying contact tracing or green passes at scale, we should question whether the privacy benefits are worth the cost. The same is true for rules that prohibit amassing more data than is strictly necessary, as required by data-minimization obligations in regulations such as the GDPR.
If our initial question was instead whether the benefits of a given data-collection scheme outweighed its potential costs to privacy, incentives could be set such that competition between firms would reduce the amount of data collected—at least, where minimized data collection is, indeed, valuable to users. Yet these considerations are almost completely absent in the COVID-19-related privacy debates, as they are in the broader privacy debate. Against this backdrop, the case for personal data property rights is dubious.
The key question is whether policymakers should make it easier or harder for firms and public bodies to amass large sets of personal data. This requires asking whether personal data is currently under- or over-provided, and whether the additional excludability that would be created by data property rights would offset their detrimental effect on innovation.
Swaths of personal data currently lie untapped. With the proper incentive mechanisms in place, this idle data could be mobilized to develop personalized medicines and to fight the COVID-19 outbreak, among many other valuable uses. By making such data more onerous to acquire, property rights in personal data might stifle the assembly of novel datasets that could be used to build innovative products and services.
On the other hand, when dealing with diffuse and complementary data sources, transaction costs become a real issue and the initial allocation of rights can matter a great deal. In such cases, unlike the genetic-testing kits example, it is not certain that users will be able to bargain with firms, especially where their personal information is exchanged by third parties.
If optimal reallocation is unlikely, should property rights go to the person covered by the data or to the collectors (potentially subject to user opt-outs)? Proponents of data property rights assume the first option is superior. But if the goal is to produce groundbreaking new goods and services, granting rights to data collectors might be a superior solution. Ultimately, this is an empirical question.
As Richard Epstein puts it, the goal is to “minimize the sum of errors that arise from expropriation and undercompensation, where the two are inversely related.” Rather than approach the problem with the preconceived notion that initial rights should go to users, policymakers should ensure that data flows to those economic agents who can best extract information and knowledge from it.
As things stand, there is little to suggest that the trade-offs favor creating data property rights. This is not an argument for requisitioning personal information or preventing parties from transferring data as they see fit, but simply for letting markets function, unfettered by misguided public policies.
The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:
We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …
A strong, diverse, free press is critical for any successful democracy. …
Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.
This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.
The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.
Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”
In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):
In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannett Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.
The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Lambert. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.
Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.
Yun concludes by summarizing the case against this legislation (citations omitted):
Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.
Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.
Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.
There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.
Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.
This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as is the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.
In sum, though the JCPA claims to fly a “public interest” flag, it is just another private-interest bill promoted by well-organized rent seekers that would harm consumer welfare and undermine innovation.
In the wake of its departure from the European Union, the United Kingdom will have the opportunity to enter into new free trade agreements (FTAs) with its international trading partners that lower existing tariff and non-tariff barriers. Achieving major welfare-enhancing reductions in trade restrictions will not be easy. Trade negotiations pose significant political sensitivities, such as those arising from the high levels of protection historically granted certain industry sectors, particularly agriculture.
Nevertheless, the political economy of protectionism suggests that, given deepening globalization and the sudden change in U.K. trade relations wrought by Brexit, the outlook for substantial liberalization of U.K. trade has become much brighter. Below, I address some of the key challenges facing U.K. trade negotiators as they seek welfare-enhancing improvements in trade relations and offer a proposal to deal with novel trade distortions in the least protectionist manner.
Two New Challenges Affecting Trade Liberalization
In addition to traditional trade issues, such as tariff levels and industry sector-specific details, U.K. trade negotiators—indeed, trade negotiators from all nations—will have to confront two relatively new and major challenges that are creating significant frictions.
First, behind-the-border anticompetitive market distortions (ACMDs) have largely replaced tariffs as the preferred means of protection in many areas. As I explained in a previous post on this site (citing an article by trade-law scholar Shanker Singham and me), existing trade and competition law have not been designed to address the ACMD problem:
[I]nternational trade agreements simply do not reach a variety of anticompetitive welfare-reducing government measures that create de facto trade barriers by favoring domestic interests over foreign competitors. Moreover, many of these restraints are not in place to discriminate against foreign entities, but rather exist to promote certain favored firms. We dub these restrictions “anticompetitive market distortions” or “ACMDs,” in that they involve government actions that empower certain private interests to obtain or retain artificial competitive advantages over their rivals, be they foreign or domestic. ACMDs are often a manifestation of cronyism, by which politically-connected enterprises successfully pressure government to shield them from effective competition, to the detriment of overall economic growth and welfare. …
As we emphasize in our article, existing international trade rules have been unable to reach ACMDs, which include: (1) governmental restraints that distort markets and lessen competition; and (2) anticompetitive private arrangements that are backed by government actions, have substantial effects on trade outside the jurisdiction that imposes the restrictions, and are not readily susceptible to domestic competition law challenge. Among the most pernicious ACMDs are those that artificially alter the cost-base as between competing firms. Such cost changes will have large and immediate effects on market shares, and therefore on international trade flows.
Second, in recent years, the trade remit has expanded to include “nontraditional” issues such as labor, the environment, and now climate change. These concerns have generated support for novel tariffs that could serve as vehicles for protectionism and harmful trade distortions. As explained in a recent article by the Special Trade Commission advisory group (former senior trade and antitrust officials who have provided independent policy advice to the U.K. government):
[The rise of nontraditional trade issues] has renewed calls for border tax adjustments or dual tariffs on an ex-ante basis. This is in sharp tension with the W[orld Trade Organization’s] long-standing principle of technological neutrality, and focus on outcomes as opposed to discriminating on the basis of the manner of production of the product. The problem is that it is too easy to hide protectionist impulses into concerns about the manner of production, and once a different tariff applies, it will be very difficult to remove. The result will be to significantly damage the liberalisation process itself leading to severe harm to the global economy at a critical time as we recover from Covid-19. The potentially damaging effects of ex ante tariffs will be visited most significantly in developing countries.
Dealing with New Trade Challenges in the Least Protectionist Manner
A broad approach to U.K. trade liberalization that also addresses the two new trade challenges is advanced in a March 2 report by the U.K. government’s Trade and Agricultural Commission (TAC, an independent advisory agency established in 2020). Although addressed primarily to agricultural trade, the TAC report enunciates principles applicable to U.K. trade policy in general, considering the impact of ACMDs and nontraditional issues. Key aspects of the TAC report are summarized in an article by Shanker Singham (the scholar who organized and convened the Special Trade Commission and who also served as a TAC commissioner):
The heart of the TAC report’s import policy contains an innovative proposal that attempts to simultaneously promote a trade liberalising agenda in agriculture, while at the same time protecting the UK’s high standards in food production and ensuring the UK fully complies with WTO rules on animal and plant health, as well as technical regulations that apply to food trade.
This proposal includes a mechanism to deal with some of the most difficult issues in agricultural trade which relate to animal welfare, environment and labour rules. The heart of this mechanism is the potential for the application of a tariff in cases where an aggrieved party can show that a trading partner is violating agreed standards in an FTA.
The result of the mechanism is a tariff based on the scale of the distortion which operates like a trade remedy. The mechanism can also be used offensively where a country is preventing market access by the UK as a result of the market distortion, or defensively where a distortion in a foreign market leads to excess exports from that market. …
[T]he tariff would be calibrated to the scale of the distortion and would apply only to the product category in which the distortion is occurring. The advantage of this over a more conventional trade remedy is that it is based on cost as opposed to price and is designed to remove the effects of the distorting activity. It would not be applied on a retaliatory basis in other unrelated sectors.
In exchange for this mechanism, the UK commits to trade liberalisation and, within a reasonable timeframe, zero tariffs and zero quotas. This in turn will make the UK’s advocacy of higher standards in international organisations much more credible, another core TAC proposal.
The TAC report also notes that behind the border barriers and anti-competitive market distortions (“ACMDs”) have the capacity to damage UK exports and therefore suggests a similar mechanism or set of disciplines could be used offensively. Certainly, where the ACMD is being used to protect a particular domestic industry, using the ACMD mechanism to apply a tariff for the exports of that industry would help, but this may not apply where the purpose is protective, and the industry does not export much.
I would argue that in this case, it would be important to ensure that UK FTAs include disciplines on these ACMDs which if breached could lead to dispute settlement and the potential for retaliatory tariffs for sectors in the UK’s FTA partner that do export. This is certainly normal WTO-sanctioned practice, and could be used here to encourage compliance. It is clear from the experience in dealing with countries that engage in ACMDs for trade or competition advantage that unless there are robust disciplines, mere hortatory language would accomplish little or nothing.
But this sort of mechanism with its concomitant commitment to freer trade has much wider potential application than just UK agricultural trade policy. It could also be used to solve a number of long standing trade disputes such as the US-China dispute, and indeed the most vexed questions in trade involving environment and climate change in ways that do not undermine the international trading system itself.
This is because the mechanism is based on an ex post tariff as opposed to an ex ante one which contains within it the potential for protectionism, and is prone to abuse. Because the tariff is actually calibrated to the cost advantage which is secured as a result of the violation of agreed international standards, it is much more likely that it will be simply limited to removing this cost advantage as opposed to becoming a punitive measure that curbs ordinary trade flows.
It is precisely this type of problem solving and innovative thinking that the international trading system needs as it faces a range of challenges that threaten liberalisation itself and the hard-won gains of the post war GATT/WTO system itself. The TAC report represents UK leadership that has been sought after since the decision to leave the EU. It has much to commend it.
Assessment and Conclusion
Even when administered by committed free traders, real-world trade liberalization is an exercise in welfare optimization, subject to constraints imposed by the actions of organized interest groups expressed through the political process. The rise of new coalitions (such as organizations committed to specified environmental goals, including limiting global warming) and the proliferation of ACMDs further complicates the trade negotiation calculus.
Fortunately, recognizing the “reform moment” created by Brexit, free trade-oriented experts (in particular, the TAC, supported by the Special Trade Commission) have recommended that the United Kingdom pursue a bold move toward zero tariffs and quotas. Narrow exceptions to this policy would involve after-the-fact tariffications to offset (1) the distortive effects of ACMDs and (2) derogation from rules embodying nontraditional concerns, such as environmental commitments. Such tariffications would be limited and cost-based, and, as such, welfare-superior to ex ante tariffs calibrated to price.
While the details need to be worked out, the general outlines of this approach represent a thoughtful and commendable market-oriented effort to secure substantial U.K. trade liberalization, subject to unavoidable constraints. More generally, one would hope that other jurisdictions (including the United States) take favorable note of this development as they generate their own trade negotiation policies. Stay tuned.
Critics of big tech companies like Google and Amazon are increasingly focused on the supposed evils of “self-preferencing.” This refers to when digital platforms like Amazon Marketplace or Google Search, which connect competing services with potential customers or users, also offer (and sometimes prioritize) their own in-house products and services.
The objection, raised by several members and witnesses during a Feb. 25 hearing of the House Judiciary Committee’s antitrust subcommittee, is that allowing the site’s owner special competitive advantages is unfair to the third parties that use those sites. Is it fair, for example, for Amazon to use the data it gathers from its service to design new products if third-party merchants can’t access the same data? This seemingly intuitive complaint was the basis for the European Commission’s landmark case against Google.
But we cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers.
It’s probably true that Amazon’s access to customer search and purchase data can help it spot products it can undercut with its own versions, driving down prices. But that’s not unusual; most retailers do this, many to a much greater extent than Amazon. For example, you can buy AmazonBasics batteries for less than half the price of branded alternatives, and they’re pretty good.
There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets.
Store-branded versions of iPhone cables and Nespresso pods are certainly inconvenient for those companies, but they offer consumers cheaper alternatives. Where such copying may be problematic (say, by deterring investments in product innovations), the law awards and enforces patents and copyrights to reward novel discoveries and creative works, and trademarks to protect brand identity. But absent such intellectual property protections, this is simply how competition works.
The fundamental question is “what benefits consumers?” Services like Yelp object that they cannot compete with Google when Google embeds its Google Maps box in Google Search results, while Yelp cannot do the same. But for users, the Maps box adds valuable information to the results page, making it easier to get what they want. Google is not making Yelp worse by making its own product better. Should it have to refrain from offering services that benefit its users because doing so might make competing products comparatively less attractive?
Self-preferencing also enables platforms to promote their offerings in other markets, which is often how large tech companies compete with each other. Amazon has a photo-hosting app that competes with Google Photos and Apple’s iCloud. It recently emailed its customers to promote it. That is undoubtedly self-preferencing, since other services cannot market themselves to Amazon’s customers like this, but if it makes customers aware of an alternative they might not have otherwise considered, that is good for competition.
This kind of behavior also allows companies to invest in offering services inexpensively, or for free, that they intend to monetize by preferencing their other, more profitable products. For example, Google invests in Android’s operating system and gives much of it away for free precisely because it can encourage Android customers to use the profitable Google Search service. Despite claims to the contrary, it is difficult to see this sort of cross-subsidy as harmful to consumers.
All platforms are open or closed to varying degrees. Retail “platforms,” for example, exist on a spectrum on which Craigslist is more open and neutral than eBay, which is more so than Amazon, which is itself relatively more so than, say, Walmart.com. Each position on this spectrum offers its own benefits and trade-offs for consumers. Indeed, some customers’ biggest complaint against Amazon is that it is too open, filled with third parties who leave fake reviews, offer counterfeit products, or have shoddy returns policies. Part of the role of the site is to try to correct those problems by making better rules, excluding certain sellers, or just by offering similar options directly.
Regulators and legislators often act as if the more open and neutral, the better, but customers have repeatedly shown that they often prefer less open, less neutral options. And critics of self-preferencing frequently find themselves arguing against behavior that improves consumer outcomes, because it hurts competitors. But that is the nature of competition: what’s good for consumers is frequently bad for competitors. If we have to choose, it’s consumers who should always come first.
In current discussions of technology markets, few words are heard more often than “platform.” Initial public offering (IPO) prospectuses use “platform” to describe a service that is bound to dominate a digital market. Antitrust regulators use “platform” to describe a service that dominates a digital market or threatens to do so. In either case, “platform” denotes power over price. For investors, that implies exceptional profits; for regulators, that implies competitive harm.
Conventional wisdom holds that platforms enjoy high market shares, protected by high barriers to entry, which yield high returns. This simple logic drives the market’s attribution of dramatically high valuations to dramatically unprofitable businesses and regulators’ eagerness to intervene in digital platform markets characterized by declining prices, increased convenience, and expanded variety, often at zero out-of-pocket cost. In both cases, “burning cash” today is understood as the path to market dominance and the ability to extract a premium from consumers in the future.
This logic is usually wrong.
The Overlooked Basics of Platform Economics
To appreciate this perhaps surprising point, it is necessary to go back to the increasingly overlooked basics of platform economics. A platform can refer to any service that matches two complementary populations. A search engine matches advertisers with consumers, an online music service matches performers and labels with listeners, and a food-delivery service matches restaurants with home diners. A platform benefits everyone by facilitating transactions that otherwise might never have occurred.
A platform’s economic value derives from its ability to lower transaction costs by funneling a multitude of individual transactions into a single convenient hub. In pursuit of minimum costs and maximum gains, users on one side of the platform will tend to favor the most popular platforms that offer the largest number of users on the other side. (There are partial exceptions to this rule when users value being matched with certain types of other users, rather than just with more users.) These “network effects” mean that any successful platform market will always converge toward a handful of winners. This positive feedback effect drives investors’ exuberance and regulators’ concerns.
There is a critical point, however, that often seems to be overlooked.
Market share only translates into market power to the extent the incumbent is protected against entry within some reasonable time horizon. If Warren Buffett’s moat requirement is not met, market share is immaterial. If XYZ.com owns 100% of the online pet-food delivery market but entry costs are negligible, then its market power is negligible too. There is another important limiting principle. In platform markets, the depth of the moat depends not only on competitors’ costs to enter the market, but also on users’ costs of switching from one platform to another or alternating between multiple platforms. If users can easily hop across platforms, then market share cannot confer market power given the continuous threat of user defection. Put differently: churn limits power over price.
This is why, contrary to natural intuition, a platform market consisting of only a few leaders can still be intensely competitive, keeping prices low (down to and including $0). It is often asserted, however, that users are typically locked into the dominant platform and therefore face high switching costs, which implicitly satisfies the moat requirement. If that were true, then the “high churn” scenario would be a theoretical curiosity and a leading platform’s high market share would be a reliable signal of market power. In fact, this common assumption likely describes the atypical case.
AWS and the Cloud Data-Storage Market
This point can be illustrated by considering the cloud data-storage market. This would appear to be an easy case where high switching costs (due to the difficulty in shifting data among storage providers) insulate the market leader against entry threats. Yet the real world does not conform to these expectations.
While Amazon Web Services pioneered the $100 billion-plus market and is still the clear market leader, it now faces vigorous competition from Microsoft Azure, Google Cloud, and other data-storage and cloud-related services. This may reflect the fact that the data-storage market is far from saturated, so new users are up for grabs and existing customers can mitigate lock-in by diversifying across multiple storage providers. Or it may reflect the fact that the market’s structure is fluid as a function of technological changes, enabling entry at formerly bundled portions of the cloud data-services package. While diversification is not always technologically feasible, the cloud-storage market suggests that users’ resistance to platform capture can represent a competitive opportunity for entrants to challenge dominant vendors on price, quality, and innovation parameters.
The Surprising Instability of Platform Dominance
The instability of leadership positions in the cloud storage market is not exceptional.
Consider a handful of once-powerful platforms that were rapidly dethroned once challenged by a more efficient or innovative rival: Yahoo and Alta Vista in the search-engine market (displaced by Google); Netscape in the browser market (displaced by Microsoft’s Internet Explorer, then displaced by Google Chrome); Nokia and then BlackBerry in the mobile wireless-device market (displaced by Apple and Samsung); and Friendster in the social-networking market (displaced by Myspace, then displaced by Facebook). AOL was once thought to be indomitable; now it is mostly referenced as a vintage email address. The list could go on.
Overestimating platform dominance—or more precisely, assuming platform dominance without close factual inquiry—matters because it promotes overestimates of market power. That, in turn, cultivates both market and regulatory bubbles: investors inflate stock valuations while regulators inflate the risk of competitive harm.
DoorDash and the Food-Delivery Services Market
Consider the DoorDash IPO that launched in early December 2020. The market’s current approximately $50 billion valuation of a business that has been almost consistently unprofitable implicitly assumes that DoorDash will maintain and expand its position as the largest U.S. food-delivery platform, which will then yield power over price and exceptional returns for investors.
There are reasons to be skeptical. Even where DoorDash captures and holds a dominant market share in certain metropolitan areas, it still faces actual and potential competition from other food-delivery services, in-house delivery services (especially by well-resourced national chains), and grocery and other delivery services already offered by regional and national providers. There is already evidence of these expected responses to DoorDash’s perceived high delivery fees, a classic illustration of the disciplinary effect of competitive forces on the pricing choices of an apparently dominant market leader. These “supply-side” constraints imposed by competitors are compounded by “demand-side” constraints imposed by customers. Home diners incur no more than minimal costs when swiping across food-delivery icons on a smartphone interface, casting doubt on whether high market share is likely to translate into market power in this context.
Deliveroo and the Costs of Regulatory Autopilot
Just as the stock market can suffer from delusions of platform grandeur, so too some competition regulators appear to have fallen prey to the same malady.
A vivid illustration is provided by the 2019 decision by the Competition and Markets Authority (CMA), the British competition regulator, to challenge Amazon’s purchase of a 16% stake in Deliveroo, one of three major competitors in the British food-delivery services market. This intervention provides perhaps the clearest illustration of policy action based on a reflexive assumption of market power, even in the face of little to no indication that the predicate conditions for that assumption could plausibly be satisfied.
Far from being a dominant platform, Deliveroo was (and is) a money-losing venture lagging behind money-losing Just Eat (now Just Eat Takeaway) and Uber Eats in the U.K. food-delivery services market. Even Amazon had previously closed its own food-delivery service in the U.K. due to lack of profitability. Despite Deliveroo’s distressed economic circumstances and the implausibility of any market power arising from Amazon’s investment, the CMA nonetheless elected to pursue the fullest level of investigation. While the transaction was ultimately approved in August 2020, this intervention imposed a 15-month delay and associated costs in connection with an investment that almost certainly bolstered competition in a concentrated market by funding a firm reportedly at risk of insolvency. This is the equivalent of a competition regulator driving in reverse.
There seems to be an increasingly common assumption in commentary by the press, policymakers, and even some scholars that apparently dominant platforms usually face little competition and can set, at will, the terms of exchange. For investors, this is a reason to buy; for regulators, this is a reason to intervene. This assumption is sometimes realized, and, in that case, antitrust intervention is appropriate whenever there is reasonable evidence that market power is being secured through something other than “competition on the merits.” However, several conditions must be met to support the market power assumption without which any such inquiry would be imprudent. Contrary to conventional wisdom, the economics and history of platform markets suggest that those conditions are infrequently satisfied.
Without closer scrutiny, reflexively equating market share with market power is prone to lead both investors and regulators astray.
The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.
Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anti-competitive might just as easily be proven, after significant factual inquiry, to be pro-competitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.
Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts from having to do their homework before disrupting a targeted business and its employees, shareholders, customers and suppliers, the proposed bill empowers plaintiffs to allege and courts to “find” anti-competitive conduct without having to be bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.
This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.
Blank Check #1
The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.
In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory-pricing claims from having to show that pricing is below cost or likely to result ultimately in profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” the onus rests on the defendant to make that showing. This burden-shifting strategy encourages lagging firms to shift competition from the marketplace to the courthouse.
Blank Check #2
The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.
This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market power requirement can remove any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.
The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that assumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.
While this populist approach may be popular, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.
The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).
Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.
This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.
There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.
What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for firms to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D then becomes a stock of R&D that it can use in production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.
This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate the firm’s stock of R&D in any particular year. The perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.
All this method requires to obtain a firm’s stock of R&D for this year is the firm’s R&D stock last year and its investment in R&D (i.e., its R&D expenditures) this year. This year’s R&D stock is then the sum of this year’s R&D expenditures and the undepreciated portion of last year’s R&D stock that is carried forward into this year.
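The recursion just described can be sketched in a few lines of Python. This is a minimal illustration, not Hall’s actual code: the function name, the flat spending figures, and the 15% default depreciation rate are assumptions chosen for the example.

```python
# Perpetual inventory method for a firm's R&D stock (a sketch).
# Recursion: K_t = R_t + (1 - delta) * K_{t-1}, where R_t is R&D
# expenditure in year t and delta is the assumed annual depreciation rate.

def rd_stock(expenditures, initial_stock, delta=0.15):
    """Return the R&D stock for each year, given annual R&D
    expenditures, an assumed starting stock, and a depreciation
    rate (Hall's suggested 15% per year by default)."""
    stocks = []
    prev = initial_stock
    for r in expenditures:
        current = r + (1 - delta) * prev  # this year's spend + undepreciated carry-forward
        stocks.append(current)
        prev = current
    return stocks

# Hypothetical firm spending 100 per year, with an assumed starting stock of 400:
print(rd_stock([100, 100, 100], initial_stock=400))
```

Note that each year’s estimate depends only on company-accounts data (the expenditure series) plus the two assumptions flagged in the text: the depreciation rate and the starting stock.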
As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would only use a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include this in a firm’s stock of R&D.
As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation rate of 15% per year (compared to roughly 7% per year for physical capital), and estimates presented by Hall, along with Wendy Li (2018), suggest that in some industries the figure can be as high as 50%, albeit with a wide range across industries.
The other assumption required for this method is an estimate of the firm’s initial level of stock. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990-2016. This means that you can calculate a firm’s stock of R&D for each year once you have their R&D stock in the previous year via the formula above.
When calculating the firm’s R&D stock for 2016, you need to know its R&D stock in 2015, while to calculate its R&D stock for 2015 you need to know its R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.
However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.
There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
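The insensitivity to the starting value can be checked directly. The sketch below compares a Hall-style starting value (first-year expenditure divided by the 15% depreciation rate plus the 8% growth rate) against an arbitrary “very small number”; the flat spending series and the function name are hypothetical choices for illustration.

```python
# How much does the starting-value assumption matter over a long
# series? All figures are hypothetical; delta = 0.15 and g = 0.08
# follow Hall's suggested depreciation and growth rates.

def final_rd_stock(expenditures, initial_stock, delta=0.15):
    """Roll the perpetual inventory recursion forward and return
    the stock in the final year of the series."""
    stock = initial_stock
    for r in expenditures:
        stock = r + (1 - delta) * stock
    return stock

spend = [100.0] * 27  # flat annual R&D spending, 1990-2016 inclusive

hall_start = 100.0 / (0.15 + 0.08)  # Hall-style starting value, roughly 435
tiny_start = 1.0                    # an arbitrary "very small number"

high = final_rd_stock(spend, hall_start)
low = final_rd_stock(spend, tiny_start)

# Only (1 - 0.15)**27 -- about 1.2% -- of any starting value survives
# to 2016, so the two 2016 estimates differ by only a few units.
print(round(high, 1), round(low, 1), round(high - low, 1))
```

Because the recursion is linear, the gap between the two estimates is exactly the gap between the starting values scaled by (1 − δ) raised to the number of years elapsed, which is why the choice washes out in long series.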
Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above. For example, a firm’s stock of R&D is sometimes measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to value a patent using an “average value of patents sold recently.” At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.
The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.
Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.