Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, navigating this regulatory environment is just as difficult as managing the actual provision of service.
The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy.
The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.
First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”
Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost of a cable operator providing those “free” services and facilities, as well as the opportunity cost (i.e., the foregone revenue) of using its fixed assets in the absence of a state or local franchise obligation. Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).
Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed-use networks (i.e., imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs or other governmental entities cannot regulate non-cable services provided via franchised cable systems.
My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.
It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.
Treasury Secretary Steve Mnuchin recently claimed that Amazon has “destroyed the retail industry across the United States” and should be investigated for antitrust violations. The claim doesn’t pass the laugh test. What’s more, the allegation might more rightly be leveled at Mnuchin himself.
Mnuchin. Is. Wrong.
First, while Amazon’s share of online retail in the U.S. is around 38 percent, that still only represents around 4 percent of total retail sales. It is unclear how Mnuchin imagines a company with a market share of 4 percent can have “destroyed” its competitors.
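Taking the post’s two figures as given, a quick back-of-the-envelope calculation shows just how small online retail as a whole still is relative to total retail (the inputs below are the figures cited above; the implied share is derived, not a reported statistic):

```python
# Back-of-the-envelope check on the two market-share figures cited above.
# Both inputs come from the post; the implied online share is derived.
amazon_share_of_online = 0.38   # Amazon's share of U.S. online retail
amazon_share_of_total = 0.04    # Amazon's share of total U.S. retail

# If Amazon is 38% of online retail but only 4% of all retail, then
# online retail's implied share of total retail sales is:
online_share_of_total = amazon_share_of_total / amazon_share_of_online

print(f"Online retail is roughly {online_share_of_total:.1%} of total retail")
# -> Online retail is roughly 10.5% of total retail
```

In other words, even if Amazon “destroyed” every online competitor, it would still be competing for the roughly 90 percent of retail sales that occur offline.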
Second, nearly 60 percent of Amazon’s sales come from third-party vendors — i.e., other retailers — many of whom would not exist but for Amazon’s platform. So, far from destroying U.S. retail, Amazon arguably has enabled U.S. online retail to thrive.
Third, even many of the brick-and-mortar retailers allegedly destroyed by Amazon have likely actually benefited from its innovative, cost-cutting approaches, which have reduced the cost of inputs. For example, through its Business Prime program, Amazon offers discounts on a large array of goods, incentives for bulk purchases, and flexible financing. Along with those direct savings, it also allows small businesses to use its analytics capabilities to track and manage the supply chain inputs they purchase through Amazon.
It’s no doubt true that many retailers are unhappy about the price-cutting and retail price visibility that Amazon (and many other online retailers) offer to consumers. But, fortunately, online competition is a fact that will not go away even if Amazon does. Meanwhile, investigating Amazon for antitrust violations — presumably with the objective of imposing some structural remedy? — would harm a truly great American innovator. And to what end? To protect inefficient, overpriced retailers?
Indeed, the better response, for retailers, is not to gripe about Amazon but to invest in better ways to serve consumers in order more effectively to compete. And that’s what many retailers are doing: Walmart, Target, and Kroger are investing billions to improve both their brick-and-mortar retail businesses and their online businesses. As a result, each of them still sells more, individually, than Amazon.
It is ironic that Steve Mnuchin should claim that Amazon has “destroyed” U.S. retail, given his support for the administration’s tariff policy, which is actually severely harming U.S. retailers. In the apparel industry, “[b]usinesses have barely been able to survive the 10 percent tariff. [The administration’s proposed] 25 percent is not survivable.” Low-margin retailers like Dollar Tree suffered punishing hits to stock value in the wake of the tariff announcements. And small producers and retailers would face, at best, dramatic income losses and, at worst, the need to fold up in the face of the current proposals.
So, if Mr. Mnuchin is actually concerned about the state of U.S. retail, perhaps he should try to persuade his boss to end the tariff war instead of attacking a great American retailer.
The Department of Justice announced it has approved the $26 billion T-Mobile/Sprint merger. Once completed, the deal will create a mobile carrier with around 136 million customers in the U.S., putting it just behind Verizon (158 million) and AT&T (156 million).
While all the relevant federal government agencies have now approved the merger, it still faces a legal challenge from state attorneys general. At the very least, this challenge is likely to delay the merger; if successful, it could scupper it. In this blog post, we evaluate the state AGs’ claims (and find them wanting).
Four firms good, three firms bad?
The state AGs’ opposition to the T-Mobile/Sprint merger is based on a claim that a competitive mobile market requires four national providers, as articulated in their redacted complaint:
The Big Four MNOs [mobile network operators] compete on many dimensions, including price, network quality, network coverage, and features. The aggressive competition between them has resulted in falling prices and improved quality. The competition that currently takes place across those dimensions, and others, among the Big Four MNOs would be negatively impacted if the Merger were consummated. The effects of the harm to competition on consumers will be significant because the Big Four MNOs have wireless service revenues of more than $160 billion.
. . .
Market consolidation from four to three MNOs would also serve to increase the possibility of tacit collusion in the markets for retail mobile wireless telecommunications services.
But there are no economic grounds for the assertion that a four-firm industry is on a competitive tipping point. Four is an arbitrary number, offered up in order to squelch any further concentration in the industry.
A proper assessment of this transaction—as well as any other telecom merger—requires accounting for the specific characteristics of the markets affected by the merger. The accounting would include, most importantly, the dynamic, fast-moving nature of competition and the key role played by high fixed costs of production and economies of scale. This is especially important given the expectation that the merger will facilitate the launch of a competitive, national 5G network.
Opponents claim this merger takes us from four to three national carriers. But Sprint was never a serious participant in the launch of 5G. Thus, in terms of future investment in general, and the roll-out of 5G in particular, a better characterization is that this deal takes the U.S. from two to three national carriers investing to build out next-generation networks.
In the past, the capital expenditures made by AT&T and Verizon have dwarfed those of T-Mobile and Sprint. But a combined T-Mobile/Sprint would be in a far better position to make the kinds of large-scale investments necessary to develop a nationwide 5G network. As a result, it is likely that both the urban-rural digital divide and the rich-poor digital divide will decline following the merger. And this investment will drive competition with AT&T and Verizon, leading to innovation, improving service and–over time–lowering the cost of access.
Is prepaid a separate market?
The state AGs complain that the merger would disproportionately affect consumers of prepaid plans, which they claim constitutes a separate product market:
There are differences between prepaid and postpaid service, the most notable being that individuals who cannot pass a credit check and/or who do not have a history of bill payment with an MNO may not be eligible for postpaid service. Accordingly, it is informative to look at prepaid mobile wireless telecommunications services as a separate segment of the market for mobile wireless telecommunications services.
Claims that prepaid services constitute a separate market are questionable, at best. While at one time there might have been a fairly distinct divide between pre- and postpaid markets, today the line between them is at least blurry, and may not even be a meaningful divide at all.
To begin with, the arguments regarding any expected monopolization in the prepaid market appear to assume that the postpaid market imposes no competitive constraint on the prepaid market.
But that can’t literally be true. At the very least, postpaid plans put a ceiling on prepaid prices for many prepaid users. To be sure, there are some prepaid consumers who don’t have the credit history required to participate in the postpaid market at all. But these are inframarginal consumers, and they will benefit from the extent of competition at the margins unless operators can effectively price discriminate in ways they have not in the past, something that has not been demonstrated to be possible or likely.
One source of this competition will come from Dish, which has been a vocal critic of the T-Mobile/Sprint merger. Under the deal with the DOJ, T-Mobile and Sprint must spin off Sprint’s prepaid businesses to Dish. The divested products include Boost Mobile, Virgin Mobile, and Sprint prepaid. Moreover, the deal requires that Dish be allowed to use T-Mobile’s network during a seven-year transition period.
Will the merger harm low-income consumers?
While the states’ complaint alleges that low-income consumers will suffer, it pays little attention to the so-called “digital divide” separating urban and rural consumers. This seems curious given the attention the issue received in submissions to the federal agencies. For example, the Communications Workers of America opined:
the data in the Applicants’ Public Interest Statement demonstrates that even six years after a T-Mobile/Sprint merger, “most of New T-Mobile’s rural customers would be forced to settle for a service that has significantly lower performance than the urban and suburban parts of the network.” The “digital divide” is likely to worsen, not improve, post-merger.
This is merely an assertion, and a misleading one at that. To the extent the “digital divide” would grow following the merger, it would be because urban access will improve more rapidly than rural access.
Indeed, there is no real suggestion that the merger will impede rural access relative to a world in which T-Mobile and Sprint do not merge.
Indeed, in the absence of a merger, Sprint would be less able to utilize its own spectrum in rural areas than would the merged T-Mobile/Sprint, because utilization of that spectrum would require substantial investment in new infrastructure and additional, different spectrum. And much of that infrastructure and spectrum is already owned by T-Mobile.
It is likely that the combined T-Mobile/Sprint will make that investment, given the cost savings that are expected to be realized through the merger. So, while it might be true that urban customers will benefit more from the merger, rural customers will also benefit. It is impossible to know, of course, by exactly how much each group will benefit. But, prima facie, the prospect of improvement in rural access seems a strong argument in favor of the merger from a public interest standpoint.
The merger is also likely to reduce another digital divide: that between wealthier and poorer consumers in more urban areas. The proportion of U.S. households with access to the Internet has for several years been rising faster among those with lower incomes than those with higher incomes, thereby narrowing this divide. Since 2011, access by households earning $25,000 or less has risen from 52% to 62%, while access among the U.S. population as a whole has risen only from 72% to 78%. In part, this has likely resulted from increased mobile access (a greater proportion of Americans now access the Internet from mobile devices than from laptops), which in turn is the result of widely available, low-cost smartphones and the declining cost of mobile data.
By enabling the creation of a true, third national mobile (phone and data) network, the merger will almost certainly drive competition and innovation that will lead to better services at lower prices, thereby expanding access for all and, if current trends hold, especially those on lower incomes. Beyond its effect on the “digital divide” per se, the merger is likely to have broadly positive effects on access more generally.
If a firm is too big, it will be because it is “a merger for monopoly”;
If the firms aren’t that big, it will be for “coordinated effects”;
If a firm is small, then it will be because it will “eliminate a maverick”.
It’s a version of Ronald Coase’s complaint about antitrust, as related by William Landes:
Ronald said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down, they said it was predatory pricing, and when they stayed the same, they said it was tacit collusion.
Of all the reasons to block a merger, the maverick notion is the weakest, and it’s well past time to ditch it.
The Horizontal Merger Guidelines define a “maverick” as “a firm that plays a disruptive role in the market to the benefit of customers.” According to the Guidelines, this includes firms:
With a new technology or business model that threatens to disrupt market conditions;
With an incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices;
That resist otherwise prevailing industry norms to cooperate on price setting or other terms of competition; and/or
With an ability and incentive to expand production rapidly using available capacity to “discipline prices.”
There appears to be no formal model of maverick behavior that does not rely on some a priori assumption that the firm is a maverick.
For example, John Kwoka’s 1989 model assumes the maverick firm has different beliefs about how competing firms would react if the maverick varies its output or price. Louis Kaplow and Carl Shapiro developed a simple model in which the firm with the smallest market share may play the role of a maverick. They note, however, that this raises the question—in a model in which every firm faces the same cost and demand conditions—why would there be any variation in market shares? The common solution, according to Kaplow and Shapiro, is cost asymmetries among firms. If that is the case, then “maverick” activity is merely a function of cost, rather than some uniquely maverick-like behavior.
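Kaplow and Shapiro’s point can be illustrated with a minimal Cournot sketch (all parameter values below are hypothetical and chosen purely for illustration, not drawn from any cited model): give two firms identical demand but different marginal costs, and market shares diverge purely as a function of cost. The “small” firm needs no special maverick trait.

```python
# Two-firm Cournot equilibrium with linear inverse demand P = a - b*(q1 + q2)
# and constant marginal costs. Standard closed-form best-response solution:
#   q_i = (a - 2*c_i + c_j) / (3*b)
# All numbers are hypothetical, for illustration only.

def cournot_quantities(a, b, c1, c2):
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    return q1, q2

a, b = 100.0, 1.0           # demand intercept and slope
c_low, c_high = 10.0, 20.0  # asymmetric marginal costs

q_low, q_high = cournot_quantities(a, b, c_low, c_high)
share_high = q_high / (q_low + q_high)

# Identical demand, different costs: the high-cost firm ends up "small".
print(f"low-cost firm output:  {q_low:.2f}")       # 33.33
print(f"high-cost firm output: {q_high:.2f}")      # 23.33
print(f"high-cost firm share:  {share_high:.1%}")  # 41.2%
```

The share asymmetry here is generated entirely by the cost parameters, consistent with Kaplow and Shapiro’s observation that what looks like “maverick” smallness may simply be cost structure.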
The idea of the maverick firm requires that the firm play a critical role in the market. The maverick must be the firm that outflanks coordinated action or acts as a bulwark against unilateral action. By this loosey-goosey definition of maverick, a single firm can make the difference between the success or failure of anticompetitive behavior by its competitors. Thus, the ability and incentive to expand production rapidly is a necessary condition for a firm to be considered a maverick. For example, Kaplow and Shapiro explain:
Of particular note is the temptation of one relatively small firm to decline to participate in the collusive arrangement or secretly to cut prices to serve, say, 4% rather than 2% of the market. As long as price cuts by a small firm are less likely to be accurately observed or inferred by the other firms than are price cuts by larger firms, the presence of small firms that are capable of expanding significantly is especially disruptive to effective collusion.
A “maverick” firm’s ability to “discipline prices” depends crucially on its ability to expand output in the face of increased demand for its products. Similarly, the other non-maverick firms can be “disciplined” by the maverick only in the face of a credible threat of (1) a noticeable drop in market share that (2) leads to lower profits.
Relying on its disruptive pricing plans, its improved high-speed HSPA+ network, and a variety of other initiatives, T-Mobile aimed to grow its nationwide share to 17 percent within the next several years, and to substantially increase its presence in the enterprise and government market. AT&T’s acquisition of T-Mobile would eliminate the important price, quality, product variety, and innovation competition that an independent T-Mobile brings to the marketplace.
At the time of the proposed merger, T-Mobile accounted for 11% of U.S. wireless subscribers. At the end of 2016, its market share had hit 17%. About half of the increase can be attributed to its 2012 merger with MetroPCS. Over the same period, Verizon’s market share increased from 33% to 35% and AT&T’s market share remained stable at 32%. It appears that T-Mobile’s so-called maverick behavior did more to disrupt the market shares of smaller competitors Sprint and Leap (which was acquired by AT&T). Thus, it is not clear, ex post, that T-Mobile posed any threat to AT&T’s or Verizon’s market shares.
Geoffrey Manne raised some questions about the government’s maverick theory that also highlight a fundamental problem with the willy-nilly way in which firms are given the maverick label:
. . . it’s just not enough that a firm may be offering products at a lower price—there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.
While T-Mobile had a reputation for lower mobile prices, in 2011, the firm was lagging behind Verizon, Sprint, and AT&T in the rollout of 4G technology. In other words, T-Mobile was offering an inferior product at a lower price. That’s not a maverick; that’s product differentiation with hedonic pricing.
More recently, in his opposition to the proposed T-Mobile/Sprint merger, Gene Kimmelman of Public Knowledge asserts that both firms are mavericks and that their combination would cause their maverick magic to disappear:
Sprint, also, can be seen as a maverick. It has offered “unlimited” plans and simplified its rate plans, for instance, driving the rest of the industry forward to more consumer-friendly options. As Sprint CEO Marcelo Claure stated, “Sprint and T-Mobile have similar DNA and have eliminated confusing rate plans, converging into one rate plan: Unlimited.” Whether both or just one of the companies can be seen as a “maverick” today, in either case the newly combined company would simply have the same structural incentives as the larger carriers both Sprint and T-Mobile today work so hard to differentiate themselves from.
Kimmelman provides no mechanism by which the magic would go missing, but instead offers a version of an adversity-builds-character argument:
Allowing T-Mobile to grow to approximately the same size as AT&T, rather than forcing it to fight for customers, will eliminate the combined company’s need to disrupt the market and create an incentive to maintain the existing market structure.
For 30 years, the notion of the maverick firm has been a concept in search of a model. If the concept cannot be modeled decades after being introduced, maybe the maverick can’t be modeled.
What’s left are ad hoc assertions mixed with speculative projections in hopes that some sympathetic judge can be swayed. However, some judges seem to be more skeptical than sympathetic, as in H&R Block/TaxACT:
The parties have spilled substantial ink debating TaxACT’s maverick status. The arguments over whether TaxACT is or is not a “maverick” — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis. The government even put forward as supposed evidence a TaxACT promotional press release in which the company described itself as a “maverick.” This type of evidence amounts to little more than a game of semantic gotcha. Here, the record is clear that while TaxACT has been an aggressive and innovative competitor in the market, as defendants admit, TaxACT is not unique in this role. Other competitors, including HRB and Intuit, have also been aggressive and innovative in forcing companies in the DDIY market to respond to new product offerings to the benefit of consumers.
It’s time to send the maverick out of town and into the sunset.
[This post is the fifth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]
[This post is authored by William Rinehart, Director of Technology and Innovation Policy at American Action Forum.]
Back in May, the New York Times published an op-ed by Chris Hughes, one of the founders of Facebook, in which he called for the breakup of his former firm. Hughes joins a growing chorus, including Senator Warren, Roger McNamee, and others who have called for the breakup of “Big Tech” companies. If Business Insider’s polling is correct, this chorus seems to be quite effective: nearly 40 percent of Americans now support breaking up Facebook.
Hughes’ position is perhaps understandable given his other advocacy activities. But it is also worth bearing in mind that he likely was never particularly familiar with, or involved in, Facebook’s technical backend, business development, or sales. Rather, he was important in setting up the public relations and feedback mechanisms. This is relevant because the technical and organizational challenges in breaking up big tech are enormous and underappreciated.
That list, however, leaves out the company’s backend AI platform, known as Horizon. As Christopher Mims reported in the Wall Street Journal, Facebook put serious resources into creating Horizon and it has paid off. About a fourth of the engineers at the company were using this platform in 2017, even though only 30 percent of them were experts in it. The system, as Joaquin Candela explained, is powerful because it was built to be “a very modular layered cake where you can plug in at any level you want.” As Mims was careful to explain, the platform was designed to be “domain-specific,” or highly modular. In other words, Horizon was meant to be useful across a range of complex problems and different domains. If WhatsApp and Instagram were separated from Facebook, who gets that asset? Does Facebook retain the core tech and then have to sell it at a regulated rate?
Lessons from Attempts to Manage Competition in the Tobacco Industry
For all of the talk about breaking up Facebook and other tech companies, few really grasp just how lackluster this remedy has been in the past. The classic case to study isn’t AT&T or Standard Oil, but American Tobacco Company.
The American Tobacco Company came about after a series of mergers in 1890 orchestrated by J.B. Duke. Then, between 1907 and 1911, the federal government filed and eventually won an antitrust lawsuit, which dissolved the trust into three companies.
Duke was unique for his time because he worked to merge all of the previously separate companies into a single, coherent firm. The organization that stood trial in 1907 was a modern company, organized around a functional structure. A single purchasing department managed all the leaf purchasing. Tobacco processing plants were dedicated to specific products without any concern for their previous ownership. The American Tobacco Company was rational in a way few other companies were at the time.
These divisions were pulled apart over eight months. Factories, distribution and storage facilities, back offices and name brands were all separated by government fiat. It was a difficult task. As historian Allan M. Brandt details in “The Cigarette Century,”
It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.
So how did consumers fare after the breakup? Most research suggests that the breakup didn’t substantially change the markets where American Tobacco was involved. Real cigarette prices for consumers were stable, suggesting there wasn’t price competition. The three companies coming out of the suit earned the same profit from 1912 to 1949 as the original American Tobacco Company Trust earned in its heyday from 1898 to 1908. As for the upstream suppliers, the price paid to tobacco farmers didn’t change either. The breakup was a bust.
The difficulties in breaking up American Tobacco stand in contrast to the methods employed with Standard Oil and AT&T. For them, the split was made along geographic lines. Standard Oil was broken into 34 regional companies. Standard Oil of New Jersey became Exxon, while Standard Oil of California changed its name to Chevron. In the same way, AT&T was broken up into Regional Bell Operating Companies. Facebook doesn’t have geographic lines.
The Lessons of the Past Applied to Facebook
Facebook combines elements of the two primary firm structures and is thus considered a “matrix form” company. While the American Tobacco Company employed a functional organization, the most common form of company organization today is the divisional form. This method of firm rationalization separates the company’s operational functions by product, in order to optimize efficiencies. Under a divisional structure, each product is essentially a company unto itself. Engineering, finance, sales, and customer service are all unified within one division, which sits separate from other divisions within a company. Like countless other tech companies, Facebook merges elements of the two forms. It relies upon flexible teams to solve problems that tend to cross the normal divisional and functional bounds. Communication and coordination are prioritized among teams, and Facebook invests heavily to ensure cross-company collaboration.
Advocates think that undoing the WhatsApp and Instagram mergers will be easy, but there aren’t clean divisional lines within the company. Indeed, Facebook has been working toward a vast reengineering of its backend for some time that, when completed later this year or in early 2020, will effectively merge all of the companies into one ecosystem. Attempting to dismember this ecosystem would almost certainly be disastrous; not just a legal nightmare, but a technical and organizational one as well.
Much like American Tobacco, any attempt to split off WhatsApp and Instagram from Facebook will probably fall flat on its face because government officials will have to create three regulated firms, each with essentially duplicative structures. As a result, the quality of services offered to consumers will likely be inferior to those available from the integrated firm. In other words, this would be a net loss to consumers.
On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech that violates the First Amendment.
When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.
Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny only requires a regulation to further an important or substantial governmental interest unrelated to the suppression of free expression, and the incidental restriction on speech must be no greater than is essential to the furtherance of that interest.
But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.
And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.
The European Commission and Austria’s Federal Competition Authority are investigating Amazon over its use of Marketplace sellers’ data. US senator Elizabeth Warren has said that one reason to require “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform” is to prevent them from using data they obtain from third parties on the platform to benefit their own participation on the platform.
Amazon tweeted in response to Warren: “We don’t use individual sellers’ data to launch private label products.” However, an Amazon spokeswoman would not answer questions about whether it uses aggregated non-public data about sellers, or data from buyers; and whether any formal firewall prevents Amazon’s retail operation from accessing Marketplace data.
If the problem is solely that Amazon’s own retail operation can access data from the Marketplace, structurally breaking up the company and forbidding it (and other large platforms) from participating on their own platforms may be a far more extensive intervention than is needed. A targeted response such as a firewall could remedy the specific competitive harm.
Germany’s Federal Cartel Office implicitly recognised this with its Facebook decision, which did not demand the divestiture of every business beyond the core social network – the “Mark Zuckerberg Production” that began in 2004. Instead, the competition authority prohibited Facebook from conditioning the use of that social network on consent to the collection and combination of data from WhatsApp, Oculus, Masquerade, Instagram and any other sites or apps where Facebook might track them.
The decision does not limit data collection on Facebook itself. “It is taken into account that an advertising-funded social network generally needs to process a large amount of personal data,” the authority said. “However, the Bundeskartellamt holds that the efficiencies in a business model based on personalised advertising do not outweigh the interests of the users when it comes to processing data from sources outside of the social network.”
The Federal Cartel Office thus aims to wall off the data collected on Facebook from data that can be collected anywhere else. It ordered Facebook to present a road map for how it would implement these changes within four months of the February 2019 decision, but the time limit was suspended by the company’s emergency appeal to the Düsseldorf Higher Regional Court.
Federal Cartel Office president Andreas Mundt has described the kind of remedy he had ordered for Facebook as not exactly structural, but going in a “structural direction” that might work for other cases as well. Keeping the data apart is a way to “break up this market power” without literally breaking up the corporation, and the first step to an “internal divestiture”, he said.
Mundt claimed that this kind of remedy gets to “the core of the problem”: big internet companies being able to out-compete new entrants, because the former can obtain and process data even beyond what they collected on a single service that has attracted a large number of users.
He used terms like “silo” rather than “firewall”, but the essential idea is to protect competition by preventing the dissemination of certain information. Antitrust authorities worldwide have considered firewalls, particularly in vertical merger remedies, as a way to prevent the anticompetitive movement of data while still allowing for some efficiencies of business units being under the same corporate umbrella.
Notwithstanding Mundt’s reference to a “structural direction”, competition authorities including his own have traditionally classified firewalls as a behavioural or conduct remedy. They purport to solve a specific problem: the movement of information.
Other aspects of big companies that can give them an advantage – such as the use of profits from one part of a company to invest in another part, perhaps to undercut rivals on prices – would not be addressed by firewalls. They would more likely require dividing up a company at the corporate level.
But if data are the central concern, then the way forward might be found in firewalls.
What do the enforcers say?
The Federal Cartel Office’s May 2017 guidance on merger remedies disfavours firewalls, stating that such obligations are “not suitable to remedy competitive harm” because they require continuous oversight. Employees of a corporation in almost any industry commonly exchange information on a daily basis, making it “extremely difficult to identify, stop and prevent non-compliance with the firewall obligations”, the guidance states. In a footnote, it acknowledges that other, unspecified jurisdictions have regarded firewalls “as an effective remedy to remove competition concerns”.
The UK’s Competition and Markets Authority takes a more optimistic view of the ability to keep a firewall in place, at least in the context of a vertical integration to prevent the use of “privileged information generated by competitors’ use of the merged company’s facilities or products”. In addition to setting up the company to restrict information flows, staff interactions and the sharing of services, physical premises and management, the CMA also requires the commitment of “significant resources to educating staff about the requirements of the measures and supporting the measures with disciplinary procedures and independent monitoring”.
The European Commission’s merger remedies notice is quite short. It does not mention firewalls or Chinese walls by name, simply noting that any non-structural remedy is problematic “due to the absence of effective monitoring of its implementation” by the commission or even other market participants. A 2011 European Commission submission to the Organisation for Economic Co-operation and Development was gloomier: “We have also found that firewalls are virtually impossible to monitor.”
The US antitrust agencies have been inconsistent in their views, and not on a consistent partisan basis. Under George W Bush, the Department of Justice’s antitrust division’s 2004 merger guidance said “a properly designed and enforced firewall” could prevent certain competition harms. But it also would require the DOJ and courts to expend “considerable time and effort” on monitoring, and “may frequently destroy the very efficiency that the merger was designed to generate. For these reasons, the use of firewalls in division decrees is the exception and not the rule.”
Under Barack Obama, the Antitrust Division revised its guidance in 2011 to omit the most sceptical language about firewalls, replacing it with a single sentence about the need for effective monitoring. Under Donald Trump, the Antitrust Division has withdrawn the 2011 guidance, and the 2004 guidance is operative.
At the Federal Trade Commission, on the other hand, firewalls had long been relatively uncontroversial among both Republicans and Democrats. For example, the commissioners unanimously agreed to a firewall remedy for PepsiCo’s and Coca-Cola’s separate 2010 acquisitions of bottlers and distributors that also dealt with a rival beverage maker, the Dr Pepper Snapple Group. (The FTC later emphasised the importance in those cases of obtaining industry expert monitors, who “have provided commission staff with invaluable insight and evaluation regarding each company’s compliance with the commission’s orders”.)
In 2017, the two commissioners who remained from the Obama administration both signed off on the Broadcom/Brocade merger based on a firewall – as did the European Commission, which also mandated interoperability commitments. And the Democratic commissioners appointed by President Trump voted with their Republican colleagues in 2018 to clear the Northrop Grumman/Orbital ATK deal subject to a behavioural remedy that included supply commitments and firewalls.
Several months later, however, those Democrats dissented from the FTC’s approval of Staples/Essendant, which the agency conditioned solely on a firewall between Essendant’s wholesale business and the Staples unit that handles corporate sales. While a firewall to prevent Staples from exploiting Essendant’s commercially-sensitive data about Staples’ rivals “will reduce the chance of misuse of data, it does not eliminate it,” Commissioner Rohit Chopra said. He emphasised the difficulty of policing oral communications, and said the FTC instead could have required Essendant to return its customers’ data. Commissioner Rebecca Kelly Slaughter said she shared Chopra’s “concerns about the efficacy of the firewall to remedy the information sharing harm”.
The majority defended firewalls’ effectiveness, noting that it had used them to solve competition concerns in past vertical mergers, “and the integrity of those firewalls was robust.” The Republican commissioners cited the FTC’s review of the merger remedies it had imposed from 2006 to 2012, which concluded: “All vertical merger orders were judged successful.”
Republican commissioner Christine Wilson wrote separately about the importance of choosing “a remedy that is narrowly tailored to address the likely competitive harms without doing collateral damage.” Certain behavioural remedies for past vertical mergers had gone too far and even resulted in less competition, she said. “I have substantially fewer qualms about long-standing and less invasive tools, such as the ‘firewalls, fair dealing, and transparency provisions’ the Antitrust Division endorsed in the 2004 edition of its Policy Guide.”
Why firewalls don’t work, especially for big tech
Firewalls are designed to prevent the anticompetitive harm of information exchange, but whether they work depends on whether the companies and their employees behave themselves – and if they do not, on whether antitrust enforcers can know it and prove it. Deputy assistant attorney general Barry Nigro at the Antitrust Division has questioned the effectiveness of firewalls as a remedy for deals where the relevant business units are operationally close. The same problem may arise outside the merger context.
For example, Amazon’s investment fund for products to complement its Alexa virtual assistant could be seen as having the kind of firewall that is undercut by the practicalities of how a business operates. CNBC reported in September 2017 that “Alexa Fund representatives called a handful of its portfolio companies to say a clear ‘firewall’ exists between the Alexa Fund and Amazon’s product development teams.” The chief executive from Nucleus, one of those portfolio companies, had complained that Amazon’s Echo Show was a copycat of Nucleus’s product. While Amazon claimed that the Alexa Fund has “measures” to ensure “appropriate treatment” of confidential information, the companies said the process of obtaining the fund’s investment required them to work closely with Amazon’s product teams.
CNBC contrasted this with Intel Capital – a division of the technology company that manages venture capital and investment – where a former managing director said he and his colleagues “tried to be extra careful not to let trade secrets flow across the firewall into its parent company”.
Firewalls are commonplace to corporate lawyers, who routinely erect temporary blocks on the transmission of information in a variety of situations, such as during due diligence on a deal. This experience may lead such attorneys to put more faith in firewalls than enforcement advocates do.
Diana Moss, the president of the American Antitrust Institute, says that like other behavioural remedies, firewalls “don’t change any incentive to exercise market power”. In contrast, structural remedies eliminate that incentive by removing the part of the business that would make the exercise of market power profitable.
No internal monitoring or compliance ensures the firewall is respected, Moss says, unless a government consent order installs a monitor in a company to make sure the business units aren’t sharing information. This would be unlikely to occur, she says.
Moss’s 2011 white paper on behavioural merger remedies, co-authored with John Kwoka, reviews how well such remedies have worked. It notes that “information firewalls in Google-ITA and Comcast-NBCU clearly impede the joint operation and coordination of business divisions that would otherwise naturally occur.”
Lina Khan’s 2019 Columbia Law Review article, “The Separation of Platforms and Commerce,” repeatedly cites Moss and Kwoka in the course of arguing that non-separation solutions such as firewalls do not work.
Khan concedes that information firewalls “in theory could help prevent information appropriation by dominant integrated firms.” But regulating the dissemination of information is especially difficult “in multibillion dollar markets built around the intricate collection, combination, and sale of data”, as companies in those markets “will have an even greater incentive to combine different sets of information”.
Why firewalls might work, especially for big tech
Yet neither Khan nor Moss points to an example of a firewall that clearly did not work. Khan writes: “Whether the [Google-ITA] information firewall was successful in preventing Google from accessing rivals’ business information is not publicly known. A year after the remedy expired, Google shut down” the application programming interface, through which ITA had provided its customisable flight search engine.
Even as enforcement advocates throw doubt on firewalls, enforcers keep requiring them. China’s Ministry of Commerce even used them to remedy a horizontal merger, in two stages of its conditions on Western Digital’s acquisition of Hitachi’s hard disk drive.
If German courts allow Andreas Mundt’s remedy for Facebook to go into effect, it will provide an example of just how effective a firewall can be on a platform. The decision requires Facebook to detail its technical plan to implement the obligation not to share data on users from its subsidiaries and its tracking on independent websites and apps.
A section of the “frequently asked questions” about the Federal Cartel Office’s Facebook case includes: “How can the Bundeskartellamt enforce the implementation of its decision?” The authority can impose fines for known non-compliance, but that assumes it could detect violations of its order. Somewhat tentatively, the agency says it could carry out random monitoring, which is “possible in principle… as the actual flow of data eg from websites to Facebook can be monitored by analysing websites and their components or by recording signals.”
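The kind of monitoring the authority describes – “analysing websites and their components” for data flows to Facebook – can be illustrated with a deliberately crude sketch. The tracker domain list below is an assumption for illustration only, not a statement of how the Bundeskartellamt would actually run such checks:

```python
import re

# Hypothetical illustration: scan a page's HTML source for references to
# known Facebook tracking endpoints (the domain list is an assumption).
FACEBOOK_TRACKERS = [
    r"connect\.facebook\.net",  # Facebook SDK / social plugins
    r"facebook\.com/tr",        # Facebook pixel endpoint
    r"graph\.facebook\.com",    # Graph API calls
]

def find_facebook_trackers(html: str) -> list:
    """Return the tracker patterns that appear in a page's HTML source."""
    return [pattern for pattern in FACEBOOK_TRACKERS
            if re.search(pattern, html)]

page = '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
print(find_facebook_trackers(page))  # the SDK pattern matches
```

Real-world monitoring would also have to record network signals sent while a page loads, not just its static source, but even this toy version shows why the agency considers such checks “possible in principle”.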
As perhaps befits the digital difference between Staples and Facebook, the German authority posits monitoring that would not be able to catch the kind of “oral communications” that Commissioner Chopra worried about when the US FTC cleared Staples’ acquisition of Essendant. But the use of such high-tech monitoring could make firewalls even more appropriate as a remedy for platforms – which look to large data flows for a competitive advantage – than for old economy sales teams that could harm competition with just a few minutes of conversation.
Rather than a human monitor installed in a company to guard against firewall breaches, which Moss said was unlikely, software installed on employee computers and email systems might detect data flows between business units that should be walled off from each other. Breakups and firewalls are both longstanding remedies, but the latter may be more amenable to the kind of solutions that “big tech” itself has provided.
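A minimal sketch of that software-based alternative might look like the following. Every name here is hypothetical – the point is only that flagging messages between walled-off business units is a mechanical check, unlike policing oral communications:

```python
# Hypothetical firewall monitor: flag messages that cross business units
# a consent order says must be kept apart (unit names are assumptions).
WALLED_OFF = {("wholesale", "corporate_sales")}

def unit_of(address: str) -> str:
    """Infer the business unit from an address like 'alice@wholesale.example.com'."""
    return address.split("@")[1].split(".")[0]

def violates_firewall(sender: str, recipient: str) -> bool:
    """True if the message crosses a pair of units the firewall separates."""
    pair = (unit_of(sender), unit_of(recipient))
    return pair in WALLED_OFF or pair[::-1] in WALLED_OFF

print(violates_firewall("alice@wholesale.example.com",
                        "bob@corporate_sales.example.com"))  # True
print(violates_firewall("alice@wholesale.example.com",
                        "carol@wholesale.example.com"))      # False
```

A production system would inspect attachments and shared drives as well as addresses, but the sketch shows why data flows between business units are more amenable to automated detection than a hallway conversation.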
[This post is the third in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]
[This post is authored by John E. Lopatka, Robert Noll Distinguished Professor of Law, School of Law, The Pennsylvania State University]
Big Tech firms stand accused of many evils, and the clamor to break them up is loud. Should we fetch our pitchforks? The antitrust laws are designed to address a range of wrongs and authorize a set of remedies, which include but do not emphasize divestiture. When the harm caused by a Big Tech company is of a kind the antitrust laws are intended to prevent, an appropriate antitrust remedy can be devised. In such a case, it makes sense to use antitrust: If antitrust and its remedies are adequate to do the job fully, no legislative changes are required. When the harm falls outside the ambit of antitrust and any other pertinent statute, a choice must be made. Antitrust can be expanded; other statutes can be amended or enacted; or any harms that are not perfectly addressed by existing statutory and common law can be left alone, for legal institutions are never perfect, and a disease can be less harmful than a cure.
A comprehensive list of the myriad and changing attacks on Big Tech firms would be difficult to compile. Indeed, the identity of the offenders is not self-evident, though Google (Alphabet), Facebook, Amazon, and Apple have lately attracted the most attention. The principal charges against Big Tech firms seem to be these: 1) compromising consumer privacy; 2) manipulating the news; 3) accumulating undue social and political influence; 4) stifling innovation by acquiring creative upstarts; 5) using market power in one market to injure competitors in adjacent markets; 6) exploiting input suppliers; 7) exploiting their own employees; and 8) damaging communities by location choices.
These charges are not uniform across the Big Tech targets. Some charges have been directed more forcefully against some firms than others. For instance, infringement of consumer privacy has been a focus of attacks on Facebook. Both Facebook and Google have been accused of manipulating the news. And claims about the exploitation of input suppliers and employees and the destruction of communities have largely been directed at Amazon.
What is “Big Tech”?
Despite the variance among firms, the attacks against all of them proceed from the same syllogism: Some tech firms are big; big tech firms do social harm; therefore, big tech firms should be broken up. From an antitrust perspective, something is missing. Start with the definition of a “tech” firm. In the modern economy, every firm relies on sophisticated technology – from an auto repair shop to an airplane manufacturer to a social media website operator. Every firm is a tech firm. But critics have a more limited concept in mind. They are concerned about platforms, or intermediaries, in multi-sided markets. These markets exhibit indirect network effects. In a two-sided market, for instance, each side of the market benefits as the size of the other side grows. Platforms provide value by coordinating the demand and supply of different groups of economic actors where the actors could not efficiently interact by themselves. In short, platforms reduce transaction costs. They have been around for centuries, but their importance has been magnified in recent years by rapid advances in technology. Rational observers can sensibly ask whether platforms are peculiarly capable of causing harm. But critics tend to ignore or at least to discount the value that platforms provide, and doing so presents a distorted image that breeds bad policy.
Assuming we know what a tech firm is, what is “big”? One could measure size by many standards. Most critics do not bother to define “big,” though at least Senator Elizabeth Warren has proposed defining one category of bigness as firms with annual global revenue of $25 billion or more and a second category as those with annual global revenue of between $90 million and $25 billion. The proper standard for determining whether tech firms are objectionably large is not self-evident. Indeed, a size threshold embodied in any legal policy will almost always be somewhat arbitrary. That by itself is not a failing of a policy prescription. But why use a size screen at all? A few answers are possible. Large firms may do more harm than small firms when harm is proportionate to size. Size may matter because government intervention is costly and less sensitive to firm size than is harm, implying that only harm caused by large firms is large enough to outweigh the costs of enforcement. And most important, the size of a firm may be related to the kind of harm the firm is accused of doing. Perhaps only a firm of a certain size can inflict a particular kind of injury. A clear standard of size and its justification ought to precede any policy prescription.
What’s the (antitrust) beef?
The social harms that Big Tech firms are accused of doing are a hodgepodge. Some are familiar to antitrust scholars as either current or past objects of antitrust concern; others are not. Antitrust protects against a certain kind of economic harm: The loss of economic welfare caused by a restriction on competition. Though the terms are sometimes used in different ways, the core concept is reasonably clear and well accepted. In most cases, economic welfare is synonymous with consumer welfare. Economic welfare, though, is a broader concept. For example, economic welfare is reduced when buyers exercise market power to the detriment of sellers and by productive inefficiencies. But despite the claim of some Big Tech critics, when consumer welfare is at stake, it is not measured exclusively by the price consumers pay. Economists often explicitly refer to quality-adjusted prices and implicitly have the qualification in mind in any analysis of price. Holding quality constant makes quantitative models easier to construct, but a loss of quality is a matter of conventional antitrust concern. The federal antitrust agencies’ horizontal merger guidelines recognize that “reduced product quality, reduced product variety, reduced service, [and] diminished innovation” are all cognizable adverse effects. The scope of antitrust is not as constricted as some critics assert. Still, it has limits.
Leveraging market power is standard antitrust fare, though it is not nearly as prevalent as once thought. Horizontal mergers that reduce economic welfare are an antitrust staple. The acquisition and use of monopsony power to the detriment of input suppliers is familiar antitrust ground. If Big Tech firms have committed antitrust violations of this ilk, the offenses can be remedied under the antitrust laws.
Other complaints against the Big Tech firms do not fit comfortably or at all within the ambit of antitrust. Antitrust does not concern itself with political or social influence. Influence is a function of size, but not relative to any antitrust market. Firms that have more resources than other firms may have more influence, but the deployment of those resources across the economy is irrelevant. The use of antitrust to attack conglomerate mergers was an inglorious period in antitrust history. Injuries to communities or to employees are not a proper antitrust concern when they result from increased efficiency. Acquisitions might stifle innovation, which is a proper antitrust concern, but they might spur innovation by inducing firms to create value and thereby become attractive acquisition targets or by facilitating integration. Whether the consumer interest in informational privacy has much to do with competition is difficult to say. Privacy in this context means the collection and use of data. In a multi-sided market, one group of participants may value not only the size but also the composition and information about another group. Competition among platforms might or might not occur on the dimension of privacy. For any platform, however, a reduction in the amount of valuable data it can collect from one side and provide to another side will reduce the price it can charge the second side, which can flow back and injure the first side. In all, antitrust falters when it is asked to do what it cannot do well, and whether other laws should be brought to bear depends on a cost/benefit calculus.
Does Big Tech’s conduct merit antitrust action?
When antitrust is used, it unquestionably requires a causal connection between conduct and harm. Conduct must restrain competition, and the restraint must cause cognizable harm. Most of the attacks against Big Tech firms if pursued under the antitrust laws would proceed as monopolization claims. A firm must have monopoly power in a relevant market; the firm must engage in anticompetitive conduct, typically conduct that excludes rivals without increasing efficiency; and the firm must have or retain its monopoly power because of the anticompetitive conduct.
Put aside the flaccid assumption that all the targeted Big Tech platforms have monopoly power in relevant markets. Maybe they do, maybe they don’t, but an assumption is unwarranted. Focus instead on the conduct element of monopolization. Most of the complaints about Big Tech firms concern their use of whatever power they have. Use isn’t enough. Each of the firms named above has achieved its prominence by extraordinary innovation, shrewd planning, and effective execution in an unforgiving business climate, one in which more platforms have failed than have succeeded. This does not look like promising ground for antitrust.
Of course, even firms that generally compete lawfully can stray. But to repeat, monopolists do not monopolize unless their unlawful conduct is causally connected to their market power. The complaints against the Big Tech firms are notably weak on allegations of anticompetitive conduct that resulted in the acquisition or maintenance of their market positions. Some critics have assailed Facebook’s acquisitions of WhatsApp and Instagram. Even assuming these firms competed with Facebook in well-defined antitrust markets, the claim that Facebook’s dominance in its core business was created or maintained by these acquisitions is a stretch.
The difficulty fashioning remedies
The causal connection between conduct and monopoly power becomes particularly important when remedies are fashioned for monopolization. Microsoft, the first major monopolization case against a high tech platform, is instructive. DOJ in its complaint sought only conduct remedies for Microsoft’s alleged unlawful maintenance of a monopoly in personal computer operating systems. The trial court found that Microsoft had illegally maintained its monopoly by squelching Netscape’s Navigator and Sun’s Java technologies, and by the end of trial DOJ sought and the court ordered structural relief in the form of “vertical” divestiture, separating Microsoft’s operating system business from its applications business. Some commentators at the time argued for various kinds of “horizontal” divestiture, which would have created competing operating system platforms. The appellate court set aside the order, emphasizing that an antitrust remedy must bear a close causal connection to proven anticompetitive conduct. Structural remedies are drastic, and a plaintiff must meet a heightened standard of proof of causation to justify any kind of divestiture in a monopolization case. On remand, DOJ abandoned its request for divestiture. The evidence that Microsoft maintained its market position by inhibiting the growth of middleware was sufficient to support liability, but not structural relief.
The court’s trepidation was well-founded. Divestiture makes sense when monopoly power results from acquisitions, because the mergers expose joints at which the firm might be separated without rending fully integrated operations. But imposing divestiture on a monopolist for engaging in single-firm exclusionary conduct threatens to destroy the integration that is the essence of any firm and is almost always disproportional to the offense. Even if conduct remedies can be more costly to enforce than structural relief, the additional cost is usually less than the cost to the economy of forgone efficiency.
The proposals to break up the Big Tech firms are ill-defined. Based on what has been reported, no structural relief could be justified as antitrust relief. Whatever conduct might have been unlawful was overwhelmingly unilateral. The few acquisitions that have occurred didn’t appreciably create or preserve monopoly power, and divestiture wouldn’t do much to correct the misbehavior critics see anyway. Big Tech firms could be restructured through new legislation, but that would be a mistake. High tech platform markets typically yield dominant firms, though heterogeneous demand often creates space for competitors. Markets are better at achieving efficient structures than are government planners. Legislative efforts at restructuring are likely to invite circumvention or lock in inefficiency.
Regulate “Big Tech” instead?
In truth, many critics are willing to put up with dominant tech platforms but want them regulated. If we learned any lesson from the era of pervasive economic regulation of public utilities, it is that regulation is costly and often yields minimal benefits. George Stigler and Claire Friedland demonstrated 57 years ago that electric utility regulation had little impact. The era of regulation was followed by an era of deregulation. Yet the desire to regulate remains strong, and as Stigler and Friedland observed, “if wishes were horses, one would buy stock in a harness factory.” And just how would Big Tech platform regulators regulate? Senator Warren offers a glimpse of the kind of regulation that critics might impose: “Platform utilities would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users.” This kind of standard has some meaning in the context of a standard-setting organization dealing with patent holders. What it would mean in the context of a social media platform, for example, is anyone’s guess. Would it prevent biasing of information for political purposes, and what government official should be entrusted with that determination? What is certain is that it would invite government intervention into markets that are working well, if not perfectly. It would invite public officials to trade off economic welfare for a host of values embedded in the concept of fairness. Federal agencies charged with promoting the “public interest” have a difficult enough time reaching conclusions where competition is one of several specific values to be considered. Regulation designed to address all the evils high tech platforms are thought to perpetrate would make traditional economic or public-interest regulation look like child’s play.
Big Tech firms have generated immense value. They may do real harm. From all that can now be gleaned, any harm has had little to do with antitrust, and it certainly doesn’t justify breaking them up. Nor should they be broken up as an exercise in central economic planning. If abuses can be identified, such as undesirable invasions of privacy, focused legislation may be in order, but even then only if the government action is predictably less costly than the abuses.
[This post is the second in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]
[This post is authored by Philip Marsden, Bank of England & College of Europe, IG/Twitter: @competition_flaneur]
Since the release of our Furman Report, I have been blessed with an uptick in #antitrusttourism. Everywhere I go, people are talking about what to do about Big Tech. Europe, the Middle East, LatAm, Asia, Down Under — and everyone has slightly different views. But the direction of travel is similar: something is going to be done, some action will be taken. The discussions I’ve been privileged to have with agency officials, advisors, tech in-house counsel and complainants have been balanced and fair. Disagreements tend to focus on the “how, now” rather than on re-hashing arguments about whether anything need be done at all. However, there is one jurisdiction which is the exception — and that is the US. There, pragmatism seems to have been defenestrated — it is all or nothing: we break tech up, or we praise tech from the rooftops. The thing is, neither is an appropriate response, and the longer the debate paralyses the US antitrust community, the more the rest of the world will say “maybe we should see other people” and break with the hard-earned precedent of evidence-based inquiries for which the US agencies are famous.
In the Land of the Free, there is so much broad-brush polarisation. Of course, there is the political main stage, and we have our share of that in the UK too. But in the theatre of American antitrust we have Chicken Littles running around shrieking that all tech platforms are run by creeps, there is an evil design behind every algo tweak or acqui-hire, and the only solution is to ditch antitrust, and move fast and break things, especially break up the G-MAFIA and the upcoming BAT from Asia, ASAP. The Chicken Littles run rings around another group, the ostriches with their heads in the sand saying “nothing to look at here”, the platforms are only forces for good, markets tip, tip and tip again, sit back and enjoy the “free” goodies, and leave any mopping up of the tears of whining complainants to fresh “studies” by antitrust enforcers.
There is also an endemic American debate which is pitched as a deep existential crisis, but seems more of a distraction: this says let’s change the consumer welfare standard and import broader social concerns — which is matched by a shocked response that price-based consumer welfare analysis is surely tried and true, and any alteration would send the heavens crashing down again. I view this as a distraction because from my experience as an enforcer and advisor, I only see an enlightened use of the consumer welfare standard as already considering harms to innovation, non-price effects, and lately privacy. So it may be interesting academic conference-fodder, but it largely misses the point that modern antitrust analysis is far broader, and more aware of non-price harms than it is portrayed.
The US though is the only jurisdiction I’ve been to lately that seems to generate the most heat in the debates, and the least light. It is also where demands for tech break-ups are loudest but where any suggestion of regulatory intervention is knee-jerk rejected with abject horror. So there is a lot of noise but not much signal. The US seems disconnected from the international consensus on the need for actual action — and is a lone singleton debating its split-brain into the ground. And when they travel to the rest of the world — many American enforcers say — commendably with honesty — “Hey it’s not me, it’s you.” “You’re the crazy ones with your Google fines, your Amazon own-sales bans, and your Facebook privacy abuse cases, we’ll just press ahead with our usual measured prosecutorial approach — oh and do a big study.”
The thing is: no one believes the US will be anti-NIKE and “just do nothing”. If that were true there wouldn’t have been a massive drop of tech stock value on the announcement of DOJ, FTC and particularly Senate inquiries. So some action will come stateside too… but what should that look like?
What I’d like to see is more engagement in the US with the international proposals. In our Furman Report, we supported a consumer welfare standard, but not laissez-faire. We supported a regulatory model developed through participative antitrust, but not common carrier regulation. And we did not favour breakups or presumptions against acquisitions by tech firms. We tried to do some good, while preventing greater evils. Now, I still think that the most anti-competitive activity I’ve ever seen comes from government not from the abuses of market power of firms, so we do need to tread very carefully in designing our solutions and remedies. But we must remain vigilant against competitive problems in the tech sector and try to get ahead of them, particularly where they are created through structural aspects of these multi-sided markets, consumer inertia, entrenchment and enveloping, even in a world of “free” “goods” and “services” (all in quotes since not everything online is free, or good, or even a service). So in Furman, we engaged with the debate but we avoided non-informative polarisation; not out of cowardice but to produce something hopefully relevant, informative, and which can actually be acted upon. It is an honour that our current Prime Minister and Chancellor have supported our work, and there are active moves to implement almost all of our proposals.
We grounded our work in maintaining a focus on a dynamic consumer welfare standard, but we still firmly agreed that more intervention was needed. We concluded this after laying out our findings of myriad structural reasons for regulatory intervention (with no antitrust cause of complaint), and improving antitrust enforcement to address bad conduct as well. We sought to #dialupantitrust — through speeding up enforcement, and modernising merger control analysis — as well as #unlockingdigitalcompetition by developing a pro-competitive code of conduct, and data mobility (not just portability) and open API and similar remedies. There’s been lots of talk about that, and similarly-directed reports from the EU Trio and the Stigler Centre. I think discussing this sort of approach is the most pragmatic, evidence-based way forward: namely a model of participative antitrust, where the tech companies, their customers, consumer groups and government work out how to ensure platforms with strategic market status take on firm conduct obligations to get ahead of problems ex ante, and clear out many of the most toxic exclusionary or exploitative practices.
Our approach would leave antitrust authorities to focus on the more nuanced behaviour, where #evidencematters and economic analysis and judgment really need to be brought to bear. This will primarily be in merger control — which we argue needs to be more forward-looking, more focussed on dynamic non-price impacts, and more able to address both the likelihood and magnitude of harms in a balanced way. This may also mean that authorities are less accepting of even heavily-sweated entry stories from merging parties. In ex post antitrust enforcement the main problem is speed, and we need to adjust the overall investigatory and appeal mechanism to ensure it is not captured, not so much by the defendants and their armies of lawyers and economists as by the mistaken focus on victory for our own team.
I’ve seen senior agency lawyers refuse to release a decision until it has been sweated by 10 litigators and 3 QCs and is “appeal-proof” — which no decision ever is — adding months or even years to the process. And woe betide a case team, inquiry chair or agency head who tries to cut through that — for the response is always “oh so you’re (much sucking of teeth and shaking of heads) content with Legal Risk???”. This is lazy — I’d much rather work with lawyers whose default is “What are we trying to achieve?” not “I’ll just say No and then head off home” — a flaw that pervades some in-house counsel too. Legal risk is inherent in antitrust enforcement, not something to be feared. Frankly, so many agencies have so many levels of internal scrutiny now that — when married to a system of full merits appeals — it is incredible that any enforcement ever happens at all. And don’t get me started on the gaming inherent in negotiating commitments that may not even be effective but don’t even get a chance to operate before going through years of review processes dominated by third party “market tests”. These flaws in enforcement systems contribute to the perception (and reality) of antitrust law’s weakness, slowness and inapplicability to reality — and hence fuel the calls for much stronger, much more intrusive and more chilling regulation, that could truly stifle a lot of genuine innovation.
So our Furman report tries to cut through this, by speeding up antitrust enforcement, making merger control more forward looking — without achieving mathematical certainty but still allowing judgement of what is harmful on balance — and proposes a pro-competitive code of conduct for tech firms to help develop and “walk the talk”. Developing that code will be a key challenge as we need to further refine what level of economic dependency on a platform customers and suppliers need to have, before that tech co is deemed to have strategic market status and must take on additional responsibilities to act fairly with respect to its customers, users, and suppliers. Fortunately, the British Government’s approval of our plans for a Digital Markets Unit means we can get started — so watch this space.
I’ve never said that this will be easy to do. We have a model in the Groceries Code Adjudicator — which was set up as a competition remedy — after a long market investigation of the offline retail platform market identified a range of harms that could occur, that might even be price-lowering to consumers but could harm innovation, choice and legitimate competition on the merits. A list of platforms was drawn up, a code was applied, and a range of toxic exploitative and exclusionary conduct was driven out of the market, and while not everything is perfect in retailing, far fewer complaints are landing on the CEO’s desk at the Competition & Markets Authority — so it can focus on other priorities. Our view is similar — while recognising that tech is a lot more complicated. Part of our model thus also draws on other CMA work with which I was honoured to be involved: a two-year investigation of the retail banking platforms, which revealed a degree of supply-side and demand-side inertia that I had never seen before, except maybe in energy. Here the solution was not — as politicians wanted — to break up the big banks. That would have done nothing good, and a lot of bad. Instead we found that the dynamic between supply and demand was so broken that remedies on both sides of the equation were needed. Here it was truly an example not of “it’s not you, it’s me” but “it’s both of us”: suppliers and consumers were contributing to the problem. We decided not to break up the platforms, though — but open them up — making data they were just sitting on (and which was a form of barrier to entry) available to fintech intermediaries, who would compete to access the data, train their new algos and thereby offer new choice tools to consumers.
Breakups would have added limping suppliers to the market, but much less competitive constraint. Opening up their data banks spurred the incumbents on to innovate faster than they might have, and customers to engage more with their banks. Our measure of success wasn’t switching — there is firm evidence that Britons switch their spouses more often than they switch their banks. So the remedy wasn’t breakup, and the KPI isn’t divorce, but is… engagement, on both sides of the relationship. And if it resulted in “maybe we should see other people” and multi-bank, then that is all to the overall good, for customer satisfaction, better engagement, and a more innovative retail banking ecosystem.
And that is where I think we should seek new remedies in the tech sphere. Breakups wouldn’t help us stimulate a more innovative creative ecosystem. But only opening up platforms after litigating on an essential facilities doctrine for 8 years wouldn’t get us there either. We need informed analysis, with tech experts and competition and consumer officials, to identify the drivers of business developments, to balance the myriad issues that we all have as citizens, and voters, and shoppers, and then to act surgically when we see that a competition law problem of abuse of market power, or structural economic dependency, is causing real harm.
I believe that the Furman report, and other international proposals from Australia, Canada, the EU, the UK’s Digital Markets Strategy, and enforcement action in the EU, Spain, Germany, Italy and elsewhere will help provide us with natural experiments and targeted solutions to specific problems. And in the process, will help fend off calls for short-term ‘fixes’ like breakups and other regulation that are retrograde and chill rather than go with the flow of — or better — stimulate innovation.
Finally, we must not lose sight of one of my current bugbears, the incredible dependency we have allowed our governments and private sector to have on a handful of cloud computing companies. This may well have developed through superior skill, foresight and industry, and may be subject to rigorous procurement procedures and testing, but frankly, this is a ‘market’ that is too important to ignore. Social media and advertising may be pervasive but cloud is huge — with defence departments and banks and key infrastructure dependent on what are essentially private sector resiliency programmes. Even more than Facebook’s proposed currency Libra becoming “instantly systemic”, I fear we are already there with cloud: huge benefits, amazing efficiencies, but with it some zombie-apocalypse-level systemic risks not of one bank falling over, but many. Here it may well be that the bigger they are the more resilient they are, and the more able they are to police and rectify problems… but we have heard that before in other sectors and I just hope we can apply our developing proposals for digital platforms, to new challenges as well. The way tech is developing, we can’t live without it — but to live with it, we need to accept more responsibilities as enforcers, consumers and providers of these crucial services. So let’s stay together and work harder to #makeantitrustgreatagain and #unlockdigitalcompetition.
[TOTM: The following is the seventh in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.]
[This post is authored by Gerard Llobet, Professor of Economics at CEMFI, and Jorge Padilla, Senior Managing Director at Compass Lexecon. Both have advised SEP holders, and to a lesser extent licensees, in royalty negotiations and antitrust disputes.]
Over the last few years competition authorities in the US and elsewhere have repeatedly warned about the risk of patent hold-up in the licensing of Standard Essential Patents (SEPs). Concerns about such risks were front and center in the recent FTC case against Qualcomm, where the Court ultimately concluded that Qualcomm had used a series of anticompetitive practices to extract unreasonable royalties from implementers. This post evaluates the evidence for such a risk, as well as the countervailing risk of patent hold-out.
In general, hold up may arise when firms negotiate trading terms after they have made costly, relation-specific investments. Since the costs of these investments are sunk when trading terms are negotiated, they are not factored into the agreed terms. As a result, depending on the relative bargaining power of the firms, the investments made by the weaker party may be undercompensated (Williamson, 1979).
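The mechanics can be made concrete with a toy calculation. The numbers below are purely hypothetical, chosen only to illustrate why bargaining that takes place after investments are sunk can undercompensate, and thus deter, efficient investment:

```python
# Toy Williamson-style hold-up example (hypothetical numbers).
# A supplier sinks a relation-specific investment BEFORE trading
# terms are negotiated; ex post bargaining then ignores that cost.

investment = 60.0      # sunk, relation-specific investment
joint_surplus = 100.0  # gross surplus from trade once the investment exists

# Ex post 50/50 bargaining splits only the gross surplus:
supplier_gross = 0.5 * joint_surplus        # 50.0
supplier_net = supplier_gross - investment  # -10.0: undercompensated

# The investment is socially efficient (gross surplus exceeds its cost),
# yet a supplier anticipating a negative net return will not make it.
assert joint_surplus - investment > 0 and supplier_net < 0
```

With a 50/50 ex post split, the supplier would need the gross surplus to exceed twice its sunk cost before investing; any surplus in between corresponds to an efficient investment that hold-up deters.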
In the context of SEPs, patent hold-up would arise if SEP owners were able to take advantage of the essentiality of their patents to charge excessive royalties to manufacturers of products reading on those patents that made irreversible investments in the standard (see Lemley and Shapiro (2007)). Similarly, in the recent FTC v. Qualcomm ruling, trial judge Lucy Koh concluded that firms may also use commercial strategies (in this case, Qualcomm’s “no license, no chips” policy, refusing to deal with certain parties and demanding exclusivity from others) to extract royalties that depart from the FRAND benchmark.
After years of heated debate, however, there is no consensus about whether patent hold-up actually exists. Some argue that there is no evidence of hold-up in practice. If patent hold-up were a significant problem, manufacturers would anticipate that their investments would be expropriated and would thus decide not to invest in the first place. But end-product manufacturers have invested considerable amounts in standardized technologies (Galetovic et al, 2015). Others claim that while investment is indeed observed, actual investment levels are “necessarily” below those that would be observed in the absence of hold-up. They allege that, since that counterfactual scenario is not observable, it is not surprising that more than fifteen years after the patent hold-up hypothesis was first proposed, empirical evidence of its existence is lacking.
Meanwhile, innovators are concerned about a risk in the opposite direction, the risk of patent hold-out. As Epstein and Noroozi (2018) explain,
By “patent holdout” we mean the converse problem, i.e., that an implementer refuses to negotiate in good faith with an innovator for a license to valid patent(s) that the implementer infringes, and instead forces the innovator to either undertake significant litigation costs and time delays to extract a licensing payment through court order, or else to simply drop the matter because the licensing game is no longer worth the candle.
Patent hold-out, also known as “efficient infringement,” is especially relevant in the standardization context for two reasons. First, SEP owners are oftentimes required to license their patents under Fair, Reasonable and Non-Discriminatory (FRAND) conditions. Particularly when, as occurs in some jurisdictions, innovators are not allowed to request an injunction, they have little or no leverage in trying to require licensees to accept a licensing deal. Second, SEP owners typically possess many complementary patents and, therefore, seek to license their portfolio of SEPs at once, since that minimizes transaction costs. Yet, some manufacturers de facto refuse to negotiate in this way and choose to challenge the validity of the SEP portfolio patent-by-patent and/or jurisdiction-by-jurisdiction. This strategy involves large litigation costs and is therefore inefficient. SEP holders claim that this practice is anticompetitive and that it also leads to royalties that are too low.
While the concerns of SEP holders seem to have attracted the attention of the leadership of the US DOJ (see, for example, here), some authors have dismissed them as theoretically groundless, empirically immaterial and irrelevant from an antitrust perspective (see here).
Evidence of patent hold-out from litigation
In ongoing work (Llobet and Padilla, forthcoming), we analyze the effects of the sequential litigation strategy adopted by some manufacturers and compare its consequences with the simultaneous litigation of the whole portfolio. We show that sequential litigation results in lower royalty payments than simultaneous litigation and may result in under-compensation of innovation and the dissipation of social surplus when litigation costs are high.
The model relies on two basic and realistic assumptions. First, in sequential lawsuits, the result of a trial affects the probability that each party wins the following one. That is, if the manufacturer wins the first trial, it has a higher probability of winning the second, as a first victory may uncover information about the validity of other patents that relate to the same type of innovation, which will be less likely to be upheld in court. Second, the impact of a validity challenge on royalty payments is asymmetric: they are reduced to zero if the patent is found to be invalid but are not increased if it is found valid (and infringed).
Our results indicate that these features of the legal system can be strategically used by the manufacturer. The intuition is as follows. Suppose that the innovator sets a royalty rate for each patent for which, in the simultaneous trial case, the manufacturer would be indifferent between settling and litigating. Under sequential litigation, however, the manufacturer might be willing to challenge a patent because of the gain in a future trial. This is due to the asymmetric effects that winning or losing the second trial has on the royalty rate that this firm will have to pay. In particular, if the manufacturer wins the first trial, so that the first patent is invalidated, its probability of winning the second one increases, which means that the innovator is likely to settle for a lower royalty rate for the second patent or see both patents invalidated in court. In the opposite case, if the innovator wins the first trial, so that the second is also likely to be unfavorable to the manufacturer, the latter always has the option to pay up the original royalty rate and avoid the second trial. In other words, the possibility for the manufacturer to negotiate the royalty rate downwards after a victory, without the risk of it being increased in case of a defeat, fosters sequential litigation and results in lower royalties than the simultaneous litigation of all patents would produce.
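A stylized two-patent calculation makes the asymmetry visible. The numbers are hypothetical and serve only to illustrate the mechanism; the formal model is in Llobet and Padilla (forthcoming):

```python
# Stylized sequential vs. simultaneous litigation over two related
# patents (hypothetical numbers). Losing a challenge leaves the royalty
# unchanged; winning one pushes it down — the asymmetry described above.

r = 10.0             # per-patent royalty offered by the innovator
c = 1.0              # manufacturer's cost of one trial
p_win1 = 0.5         # prob. the first patent is invalidated at trial
p_win2_after = 0.8   # prob. the second falls too, once the first is invalid

# Benchmark: settle the whole portfolio at once and pay both royalties.
pay_simultaneous = 2 * r  # 20.0

# Sequential strategy: litigate patent 1 first.
#  - Win:  patent 1 pays nothing; patent 2 settles at its reduced
#          expected value (1 - p_win2_after) * r.
#  - Lose: the royalty is NOT increased; simply pay r on both patents.
settle_patent2 = (1 - p_win2_after) * r
expected_sequential = c + p_win1 * settle_patent2 + (1 - p_win1) * (2 * r)

assert expected_sequential < pay_simultaneous  # roughly 12 vs. 20 here
```

Because a defeat leaves the manufacturer no worse off than settling would have, the first challenge is close to a free option on the trial's outcome, which is why expected royalties end up below the simultaneous-litigation level.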
This mechanism, while applicable to any portfolio containing patents whose validity is related, becomes more significant in the context of SEPs for two reasons. The first is the difficulty innovators face in adjusting their royalties upwards after the first successful trial, as doing so might be considered a breach of their FRAND commitments. The second is that, following recent competition law litigation in the EU and other jurisdictions, SEP owners are restricted in their ability to seek (preliminary) injunctions even in the case of willful infringement. Our analysis demonstrates that the threat of injunction mitigates, though it is unlikely to eliminate completely, the incentive to litigate sequentially and, therefore, excessively (i.e. even when such litigation reduces social welfare).
We also find a second motivation for excessive litigation: business stealing. Manufacturers litigate excessively in order to avoid payment and thus achieve a valuable cost advantage over their competitors. They prefer to litigate even when litigation costs are so large that it would be preferable for society to avoid litigation, because their royalty burden is reduced both in absolute terms and relative to that of their rivals (while it does not go up if the patents are found valid). This business stealing incentive will result in the under-compensation of innovators, as above, but importantly it may also result in the anticompetitive foreclosure of more efficient competitors.
Consider, for example, a scenario in which a large firm with the ability to fund protracted litigation efforts competes in a downstream market with a competitive fringe, comprising small firms for which litigation is not an option. In this scenario, the large manufacturer may choose to litigate to force the innovator to settle on a low royalty. The large manufacturer exploits the asymmetry with its defenseless small rivals to reduce its IP costs. In some jurisdictions it may also exploit yet another asymmetry in the legal system to achieve an even larger cost advantage. If both the large manufacturer and the innovator choose to litigate and the former wins, the patent is invalidated, and the large manufacturer avoids paying royalties altogether. Whether this confers a comparative advantage on the large manufacturer depends on whether the invalidation results in the immediate termination of all other existing licenses or not.
Our work thus shows that patent hold-out concerns are both theoretically cogent and have non-trivial antitrust implications. Whether such concerns merit intervention is an empirical matter. While reviewing that evidence is outside the scope of our work, our own litigation experience suggests that patent hold-out should be taken seriously.
[TOTM: The following is the sixth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.
This post is authored by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California Gould School of Law.]
There is little doubt that the decision in May 2019 by the Northern District of California in FTC v. Qualcomm is of historical importance. Unless reversed or modified on appeal, the decision would require that the lead innovator behind 3G and 4G smartphone technology renegotiate hundreds of existing licenses with device producers and offer new licenses to any interested chipmakers.
The court’s sweeping order caps off a global campaign by implementers to re-engineer the property-rights infrastructure of the wireless markets. Those efforts have deployed the instruments of antitrust and patent law to override existing licensing arrangements and thereby reduce the input costs borne by device producers in the downstream market. This has occurred both directly, through arguments those firms have made in antitrust and patent litigation and in amicus briefs, and indirectly, by advocating that regulators bring antitrust actions against IP licensors.
Whether FTC v. Qualcomm is correctly decided largely depends on whether downstream firms’ interest in minimizing the costs of obtaining technology inputs from upstream R&D specialists aligns with the public interest in preserving dynamically efficient innovation markets. As I discuss below, there are three reasons to believe those interests are not aligned in this case. If so, the court’s order would simply engineer a wealth transfer from firms that have led innovation in wireless markets to producers that have borne few of the costs and risks involved in doing so. Members of the former group each exhibit R&D intensities (R&D expenditures as a percentage of sales) in the high teens to low twenties; members of the latter, approximately five percent. Of greater concern, the court’s upending of long-established licensing arrangements endangers business models that monetize R&D by licensing technology to a large pool of device producers (see Qualcomm), rather than earning returns through self-contained hardware and software ecosystems (see Apple). There is no apparent antitrust rationale for picking and choosing among these business models in innovation markets.
Reason #1: FRAND is a Two-Sided Deal
To fully appreciate the recent litigations involving the FTC and Apple on the one hand, and Qualcomm on the other hand, it is necessary to return to the origins of modern wireless markets.
Starting in the late 1980s, various firms were engaged in the launch of the GSM wireless network in Western Europe. At that time, each European telecom market typically consisted of a national monopoly carrier and a favored group of local equipment suppliers. The GSM project, which envisioned a trans-national wireless communications market, challenged this model. In particular, the national carrier and equipment monopolies were threatened by the fact that the GSM standard relied in part on patented technology held by an outside innovator—namely, Motorola. As I describe in a forthcoming publication, the “FRAND” (fair, reasonable and nondiscriminatory) principles that today govern the licensing of standard-essential patents in wireless markets emerged from a negotiation between, on the one hand, carriers and producers who sought a royalty cap and, on the other hand, a technology innovator that sought to preserve its licensing freedom going forward.
This negotiation history is important. Any informed discussion of the meaning of FRAND must recognize that this principle was adopted as something akin to a “good faith” contractual term designed to promote two objectives:
Protect downstream adopters from holdup tactics by upstream innovators; and
enable upstream innovators to enjoy an appreciable portion of the value generated by sales in the consumer market.
Any interpretation of FRAND that does not meet these conditions will induce upstream firms to reduce R&D investment, limit participation in standard-setting activities, or vertically integrate forward to capture directly a return on R&D dollars.
Reason #2: No Evidence of Actual Harm
In the December 2018 appellate court proceedings in which the Department of Justice unsuccessfully challenged the AT&T/Time-Warner merger, Judge David Sentelle of the D.C. Circuit said to the government’s legal counsel:
If you’re going to rely on an economic model, you have to rely on it with quantification. The bare theorem . . . doesn’t prove anything in a particular case.
The government could not credibly reply to that query in the AT&T case and, if appropriately challenged, could not do so in this case.
Far from being a market that calls out for federal antitrust intervention, the smartphone market offers what appears to be an almost textbook case of dynamic efficiency. For over a decade, implementers, along with sympathetic regulators and commentators, have argued that the market suffers (or, in a variation, will imminently suffer) from inflated prices, reduced output and delayed innovation as a result of “patent hold-up” and “royalty stacking” by opportunistic patent owners. In the several decades that have passed since the launch of the GSM network, none of these predictions has materialized. To the contrary. The market has exhibited expanding output, declining prices (adjusted for increased functionality), constant innovation, and regular entry into the production market. Multiple empirical studies (e.g. this, this and this) have found that device producers bear on average an aggregate royalty burden in the single to mid-digits.
This hardly seems like a market in which producers and consumers are being “victimized” by what the Northern District of California calls “unreasonably high” licensing fees (compared to an unspecified, and inherently unspecifiable, dynamically efficient benchmark). Rather, it seems more likely that device producers—many of whom provided the testimony which the court referenced in concluding that royalty rates were “unreasonably high”—would simply prefer to pay an even lower fee to R&D input suppliers (with no assurance that any of the cost-savings would flow to consumers).
Reason #3: The “License as Tax” Fallacy
The rhetorical centerpiece of the FTC’s brief relied on an analogy between the patent license fees earned by Qualcomm in the downstream device market and the tax that everyone pays to the IRS. The court’s opinion wholeheartedly adopted this narrative, determining that Qualcomm imposes a tax (or, as Judge Koh terms it, a “surcharge”) on the smartphone market by demanding a fee from OEMs for use of its patent portfolio whether or not the OEM purchases chipsets from Qualcomm or another firm. The tax analogy is fundamentally incomplete, both in general and in this case in particular.
It is true that much of the economic literature applies monopoly taxation models to assess the deadweight losses attributed to patents. While this analogy facilitates analytical tractability, a “zero-sum” approach to patent licensing overlooks the value-creating “multiplier” effect that licensing generates in real-world markets. Specifically, broad-based downstream licensing by upstream patent owners—something to which SEP owners commit under FRAND principles—ensures that device makers can obtain the necessary technology inputs and, in doing so, facilitates entry by producers that do not have robust R&D capacities. All of that ultimately generates gains for consumers.
This “positive-sum” multiplier effect appears to be at work in the smartphone market. Far from acting as a tax, Qualcomm’s licensing policies appear to have promoted entry into the smartphone market, which has experienced fairly robust turnover in market leadership. While Apple and Samsung may currently dominate the U.S. market, they face intense competition globally from Chinese firms such as Huawei, Xiaomi and Oppo. That competitive threat is real. As of 2007, Nokia and Blackberry were the overwhelming market leaders and appeared to be indomitable. Yet neither can be found in the market today. That intense “gale of competition”, sustained by the fact that any downstream producer can access the required technology inputs upon payment of licensing fees to upstream innovators, challenges the view that Qualcomm’s licensing practices have somehow restrained market growth.
Concluding Thoughts: Antitrust Flashback
When competitive harms are so unclear (and competitive gains so evident), modern antitrust law sensibly prescribes forbearance. A famous “bad case” from antitrust history shows why.
In 1953, the Department of Justice won an antitrust suit against United Shoe Machinery Corporation, which had led innovation in shoe manufacturing equipment and subsequently dominated that market. United Shoe’s purportedly anti-competitive practices included a lease-only policy that incorporated training and repair services at no incremental charge. The court found this to be a coercive tie that preserved United Shoe’s dominant position, despite the absence of any evidence of competitive harm. Scholars have subsequently shown (e.g. this and this; see also this) that the court did not adequately consider (at least) two efficiency explanations: (1) lease-only policies were widespread in the market because they facilitated access by smaller, capital-constrained manufacturers, and (2) tying support services to equipment enabled United Shoe to avoid free-riding on its training services by other equipment suppliers. In retrospect, the court ultimately relied on a mere possibility theorem to order the break-up of a technological pioneer, with potentially adverse consequences for manufacturers that relied on its R&D efforts.
The court’s decision in FTC v. Qualcomm is a flashback to cases like United Shoe in which courts found liability and imposed dramatic remedies with little economic inquiry into competitive harm. It has become fashionable to assert that current antitrust law is too cautious in finding liability. Yet there is a sound reason why, outside price-fixing, courts generally insist that theories of antitrust liability include compelling evidence of competitive harm. Antitrust remedies are strong medicine and should be administered with caution. If courts and regulators do not zealously scrutinize the factual support for antitrust claims, then they are vulnerable to capture by private entities whose business objectives may depart from the public interest in competitive markets. While no antitrust fact-pattern is free from doubt, over two decades of market performance strongly favor the view that long-standing licensing arrangements in the smartphone market have resulted in substantial net welfare gains for consumers. If so, the prudent course of action is simply to leave the market alone.
[This post is the first in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]
[This post is authored by Randal C. Picker, James Parker Hall Distinguished Service Professor of Law at The University of Chicago Law School]
The European Commission just announced that it is investigating Amazon. The Commission’s concern is that Amazon is simultaneously acting as a ref and player: Amazon sells goods directly as a first party but also operates a platform on which it hosts goods sold by third parties (resellers) and those goods sometimes compete. And, next step, Amazon is said to choose which markets to enter as a private-label seller at least in part by utilizing information it gleans from the third-party sales it hosts.
Assuming there is a problem …
Were Amazon’s activities thought to be a problem, the natural remedies, whether through antitrust or through more direct, sector-specific regulation, might be to bar Amazon from being both a direct seller and a platform. India has already passed a statute that effectuates some of those results, though it seems targeted at non-domestic companies.
A broad regulation that barred Amazon from being simultaneously a seller of first-party inventory and of third-party inventory presumably would lead to a dissolution of the company into separate companies in each of those businesses. A different remedy—a classic that goes back at least as far in the United States as the 1887 Commerce Act—would be to impose some sort of nondiscrimination obligation on Amazon and perhaps to couple that with some sort of business-line restriction—a quarantine—that would bar Amazon from entering markets through private labels.
But is there a problem?
Private labels have been around a long time and large retailers have faced buy-vs.-build decisions along the way. Large, sophisticated retailers like A&P in a different era and Walmart and Costco today, just to choose two examples, are constantly rebalancing their inventory between that which they buy from third parties and that which they produce for themselves. As I discuss below, being a platform matters for the buy-vs.-build decision, but it is far from clear that being both a store and a platform simultaneously matters importantly for how we should look at these issues.
Of course, when Amazon opened for business in July 1995 it didn’t quite face these issues immediately. Amazon sold books—it billed itself as “Earth’s Biggest Bookstore”—but there is no private label possibility for books, no effort to substitute into just selling say “The Wit and Wisdom of Jeff Bezos.” You could of course build an ebooks platform—call that a Kindle—but that would be a decade or so down the road. But as Amazon expanded into more pedestrian goods, it would, like other retailers, naturally make decisions about which inventory to source internally and which to buy from third parties.
In September 1999, Amazon opened up what was being described as an online mall. Amazon called it zShops and the idea was clear: many customers came to Amazon to buy things that Amazon wasn’t offering and Amazon would bring that audience and a variety of transaction services to third parties. Third parties would in turn pay Amazon a monthly fee and a variety of transaction fees. Amazon CEO Jeff Bezos noted (as reported in The Wall Street Journal) that those prices had been set in a way to make Amazon generally “neutral” in choosing whether to enter a market through first-party inventory or through third-party inventory.
Note that a traditional retailer and the original Amazon faced the same natural question: which goods to carry in inventory? When Amazon opened its platform, it powerfully changed that question. Even a Walmart Supercenter has limited physical shelf space and has to take something off of the shelves to stock a new product. By becoming a platform, Amazon largely outsourced the product-selection and shelf-space-allocation questions to third parties. The new Amazon resellers would get access to Amazon’s substantial customer base—its audience—and to a variety of transactional services that Amazon would provide them.
An online retailer has some real informational advantages over physical stores, as the online retailer sees every product that customers search for. It is much harder, though not impossible, for a physical store to capture that information. Once Amazon became a platform, it would no longer observe only search queries for goods; it would also see actual sales by the resellers. And a physical store isn’t a platform in the way that Amazon is, because the physical store is constrained by limited shelf space. The real question here is the marginal information Amazon gets from third-party sales relative to what it would see anyway from product searches at Amazon, from its own first-party sales, and from clicks on the growing amount of advertising it sells on its website.
All of that might matter for running product and inventory experiments and the corresponding pace of learning what goods customers want at what price. A physical store has to remove some item from its shelves to experiment with a new item and has to buy the item to stock it, though how much of a risk it is taking there will depend on whether the retailer can return unsold goods to the inventory supplier. A platform retailer like Amazon doesn’t have to make those tradeoffs, and an online mall can offer an almost infinite inventory of items: a store or product ready for every possible search.
A possible strategy
All of this suggests a possible business strategy for a platform: let third parties run inventory experiments where the platform gets to see the results. Products that don’t sell are failed experiments and the platform doesn’t enter those markets. But when a third party sells a product in real numbers, start selling that product as first-party inventory. Amazon would then face its own buy-vs.-build decision for that product, which should make clear that the private-brands question is distinct from the question of whether Amazon can leverage third-party reseller information to resellers’ detriment. Amazon can certainly do just that by buying competing goods from a wholesaler and stocking that item as first-party Amazon inventory.
If Amazon is playing this strategy, it seems to be playing it slowly and poorly. Amazon CEO Jeff Bezos includes a letter each year to open Amazon’s annual report to shareholders. In the 2018 letter, Bezos opened by noting that “[s]omething strange and remarkable has happened over the last 20 years.” What was that? In 1999, the relevant number was 3%; five years later, in 2004, it was 25%, then 31% in 2009, 49% in 2014 and 58% in 2018. Those figures are the percentage of physical gross merchandise sales made by third-party sellers through Amazon. In 1999, 97% of Amazon’s sales were of its own first-party inventory, but the percentage of third-party sales had steadily risen over 20 years, and over the last four years of that period, third-party inventory sales exceeded Amazon’s own internal sales. As Bezos noted, Amazon’s first-party sales had grown dramatically—a 25% annual compound growth rate over that period—but in 2018, total third-party sales revenues were $160 billion while Amazon’s own first-party sales were at $117 billion. Bezos had a perspective on all of that—“Third-party sellers are kicking our first party butt. Badly.”—but if you believed the original vision behind creating the Amazon platform, Amazon should be indifferent between first-party sales and third-party sales, as long as all of that happens at Amazon.
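The arithmetic behind those headline figures is easy to check. A minimal sketch, using the 2018 revenue figures quoted above and assuming (per the shareholder letter) a 1999 first-party baseline of roughly $1.6 billion:

```python
# Figures from Bezos's 2018 letter to shareholders, in billions of USD.
first_party_2018 = 117.0   # Amazon's own first-party sales
third_party_2018 = 160.0   # physical gross merchandise sales by resellers

# Third-party share of total physical gross merchandise sales.
share = third_party_2018 / (first_party_2018 + third_party_2018)
print(f"third-party share: {share:.0%}")  # rounds to the 58% Bezos cites

# Sanity-check the quoted ~25% compound annual growth rate for
# first-party sales. The 1999 baseline of $1.6 billion is an assumption
# taken from the letter; 1999-2018 spans 19 years of growth.
first_party_1999 = 1.6
cagr = (first_party_2018 / first_party_1999) ** (1 / 19) - 1
print(f"first-party CAGR: {cagr:.1%}")
```

Both computed values line up with the letter's figures, which is all the check is meant to show: the 58% share and the 25% growth rate describe the same underlying revenue series.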
This isn’t new
Given all of that, it isn’t crystal clear to me why Amazon gets as much attention as it does. The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws.
As suggested above, I think it is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to building or buying first-party inventory. We have entire bodies of law—copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.
There is a great deal more to say about a company as complex as Amazon, but two thoughts in closing. One story here is that Amazon has built a superior business model in combining first-party and third-party inventory sales, and that is exactly the kind of business-model innovation that we should applaud. Amazon has enjoyed remarkable growth, but Walmart is still vastly larger than Amazon (ballpark numbers for 2018 are roughly $510 billion in net sales for Walmart vs. roughly $233 billion for Amazon, including all third-party sales as well as Amazon Web Services). The second story is the remarkable growth of sales by resellers at Amazon.
If Amazon is creating private-label goods based on information it sees on its platform, nothing suggests that it is doing so particularly rapidly. And even if it is entering those markets, it might well still do so were we to break up Amazon and separate the platform piece (call it Amazon Platform) from the original first-party version (say Amazon Classic): traditional retailers have, for a very long time, been making buy-vs.-build decisions on their first-party inventory and using their internal information to make those decisions.