Conspiracies and collusion often (always?) get a bad rap. Adam Smith famously derided “people of the same trade” for their inclination to conspire against the public or contrive to raise prices. Today, such conspiracies and contrivances are per se illegal and felonies punishable under the Sherman Act.
It is well known and widely accepted that collusion to suppress competition is associated with an increase in price, a transfer of consumer surplus to producers, and a deadweight loss. It seems that nothing good comes from anticompetitive collusion.
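The textbook mechanics behind that claim can be illustrated with a toy linear-demand market. All parameters below are invented for illustration; they are not drawn from any study cited here.

```python
# Toy market: inverse demand P = a - b*Q, constant marginal cost c.
# Illustrative parameters only.
a, b, c = 100.0, 1.0, 20.0

# Competitive benchmark: price driven down to marginal cost.
q_comp = (a - c) / b           # quantity under competition
p_comp = c                     # competitive price

# Cartel at the joint-monopoly outcome: set MR = MC, so Q = (a - c) / (2b).
q_cartel = (a - c) / (2 * b)   # restricted output
p_cartel = a - b * q_cartel    # elevated price

# Consumer surplus transferred to producers on the units still sold.
transfer = (p_cartel - p_comp) * q_cartel

# Deadweight loss: surplus destroyed on the units no longer traded
# (area of the triangle between demand and marginal cost).
deadweight_loss = 0.5 * (p_cartel - p_comp) * (q_comp - q_cartel)

print(p_cartel, transfer, deadweight_loss)
```

With these made-up numbers, the cartel halves output, raises price from 20 to 60, transfers 1,600 in surplus from consumers to producers, and destroys 800 in deadweight loss. The point is only to make the standard welfare accounting concrete.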
But what if there was some good from a conspiracy in restraint of trade?
Using data from the formation and breakup of illegal cartels, Hyo Kang finds higher levels of innovation—measured by patents and R&D spending—during the cartel period than in the period before the formation of the cartel or the period after the breakup of the cartel.
By Kang’s measures, during the cartel period, colluding firms increased their annual number of patent applications by about 50% or more and their R&D expenditures by more than 20% relative to the pre-cartel period. After the breakup of the cartel, patent applications and R&D spending returned to approximately pre-cartel levels.
These findings are consistent with ICLE’s review of research on four-to-three mergers in the telecom industry. Every study in that review that examined the effect of such mergers on investment found that capital expenditures, a proxy for investment, increased post-merger.
If Kang’s conclusions are correct, they contradict John Hicks’ quip that “the best of all monopoly profits is a quiet life.” Instead of silently collecting the profits of price fixing and other forms of collusion, cartel conspirators seem to be aggressively innovating. So what gives?
Kang’s paper points to Joseph Schumpeter, who argued that some degree of market power can promote innovation by providing firms with the financial resources and predictability required for innovative activities:
Thus it is true that there is or may be an element of genuine monopoly gain in those entrepreneurial profits which are the prizes offered by capitalist society to the successful innovator. But the quantitative importance of that element, its volatile nature and its function in the process in which it emerges put it in a class by itself. The main value to a concern of a single seller position that is secured by patent or monopolistic strategy does not consist so much in the opportunity to behave temporarily according to the monopolist schema, as in the protection it affords against temporary disorganization of the market and the space it secures for long-range planning.
Along this line, Kang argues that the reduced competition afforded by the cartel provides both an incentive to innovate and an ability to innovate. Incentives include the potential for higher returns from innovation and the reduction of duplicative R&D investment. Increased profits from collusion provide increased resources available for R&D, thereby improving a firm’s ability to innovate. In some ways, it can be argued that the cartel arrangement reduces price competition, while increasing competition along other dimensions.
A seemingly unrelated working paper by R. Andrew Butters and Thomas N. Hubbard comes to a similar conclusion. They note that, over time, hotels have increased competition along nonprice dimensions, trading improved room size and in-room amenities for reduced out-of-room amenities such as full-service restaurants, swimming pools, and meeting spaces.
Butters & Hubbard note that many out-of-room amenities are typified by fixed costs that do not vary (much) with hotel size, while room-size and in-room amenities are largely variable costs with respect to hotel size. With the shift from out-of-room amenities to in-room amenities, the market has shifted from one of larger hotels with many rooms, to smaller hotels with fewer rooms. Thus with the shift in the dimensions of competition, the structure of the industry has shifted along with it.
The research of Kang and of Butters & Hubbard raises important issues for competition policy. A single-minded focus on price ignores the many other dimensions along which firms compete. While a cartel’s consumers may face higher prices, they may also benefit from increased innovation. Similarly, while hotel guests may experience reduced price competition among hotels, they are also experiencing a better in-room experience. Although increased concentration and outright collusion may harm consumers along the price dimension, they may also benefit along other dimensions that are not so easily quantified or quantifiable.
John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”
This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.
Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.
Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.”
Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s.
Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.
In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.
First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.
The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.
In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.
Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that they can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line they can raise prices after they have run all retail competition out.
Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”:
“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”
This analysis suggests that a case-by-case review is necessary when antitrust plaintiffs can show evidence that harm to consumers is likely to result from a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, would come with its own costs. In other words, more economics is needed to understand this area, not less.
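The elasticity condition the study’s authors describe (the ex-ante innovation gain must be weighed against the ex-post efficiency loss) can be sketched as a back-of-the-envelope calculation. All figures here are invented for illustration; this is not the authors’ model, just a simple rendering of the tradeoff.

```python
# Hypothetical welfare accounting for an acquisition channel.
# net effect = (innovation response) * (ex-ante gain from spurred entry)
#              - (ex-post loss from reduced competition)
def net_welfare_effect(ex_ante_gain, ex_post_loss, innovation_elasticity):
    """The ex-ante gain scales with how responsive entrepreneurial
    innovation is to the prospect of acquisition; the ex-post loss
    from reduced competition is taken as given."""
    return innovation_elasticity * ex_ante_gain - ex_post_loss

# If entrepreneurial innovation responds strongly to the exit option,
# the acquisition channel can be net positive...
print(net_welfare_effect(100.0, 30.0, 0.5))  # 20.0

# ...but net negative if innovation barely responds.
print(net_welfare_effect(100.0, 30.0, 0.1))  # negative
```

The sign flip is the whole point: which effect dominates is an empirical question about the elasticity of the innovation response, which is why blanket presumptions in either direction are hard to justify.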
Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.
In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger…
One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.
In other words, neither part of Appelbaum’s proposition (that Europe has cheaper fares, and that concentration has led to worse outcomes for consumers in the United States) appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.
Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, this obscures how prices are distributed across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, U.S. vs. European Broadband Deployment: What Do the Data Say?, found:
U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.
Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of price and speed need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows a move in position from 23rd to 14th for the United States compared to 28 (mostly European) other countries once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further to 6th out of the 29 countries studied if data usage is included (Model 3), and to 7th if quality (i.e., websites available in the local language) is taken into consideration (Model 4).
Model 1: Unadjusted for demographics and content quality
Model 2: Adjusted for demographics but not content quality
Model 3: Adjusted for demographics and data usage
Model 4: Adjusted for demographics and content quality
Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:
The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing.
In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE.
Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition.
In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.
At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway. For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors.
So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably relies on economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There’s no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”
For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.
On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech in violation of the First Amendment.
When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.
Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny only requires a regulation to further an important or substantial governmental interest unrelated to the suppression of free expression, and the incidental restriction on speech must be no greater than is essential to the furtherance of that interest.
But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.
And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.
On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm’s licensing agreements, indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable, and non-discriminatory”) commitments in the context of standards setting bodies and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.
At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.
The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018–about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC’s major claims against Qualcomm were as follows:
Qualcomm has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices to be paid by end consumers for cellphones and tablets.
Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license to its chipset-maker rivals; and its exclusive deals with Apple.
The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
Given that Qualcomm has made a commitment to standard setting bodies to license these patents on FRAND terms, such behaviour qualifies as a breach of FRAND.
The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:
[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.
Qualcomm filed a motion to dismiss on April 3, 2017. This was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm’s conduct violates §§ 1 and 2 of the Sherman Act and that it had entered into exclusive dealing arrangements with Apple. Thus, the court held, the FTC had adequately stated a claim under § 5 of the FTC Act.
It is important to note that the core of the FTC’s arguments regarding Qualcomm’s abuse of dominant position rests on the claim that its “no license, no chips” policy breaches its FRAND obligations. The FTC falls short, however, of proving that the royalties Qualcomm charges OEMs actually exceed FRAND rates (and thus amount to a breach), or that they qualify as what the FTC defines as a “tax” under the price-squeeze theory that it puts forth.
(The Court did not go into whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though their scope remains quite amorphous.)
On August 30, the FTC filed a motion for partial summary judgment on claims relating to the applicability of California contract law. This would leave the antitrust issues to be decided in the subsequent hearing, which is set for January next year.
In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S. based standards development organisations (SDOs):
The Telecommunications Industry Association (TIA) and
The Alliance for Telecommunications Industry Solutions (ATIS).
These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.
The FTC asserts that this obligation should be interpreted to extend to Qualcomm’s rival modem chip manufacturers and sellers. It requests that the Court therefore grant summary judgment, since there are no disputed facts on such obligation. It submits that this should “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments” on the requirement to license to competitors, as well as its commitments to ETSI, a third SDO. A review of a heavily redacted filing by the FTC and a subsequent response by Qualcomm indicates that questions of fact and law continue to remain as regards Qualcomm’s licensing commitments and their scope. Thus, contrary to the FTC’s assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.
Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.
The IP licensing policies of the two SDOs provide for licensing of relevant patents to all applicants who implement these standards on FRAND terms. However, the key issues are whether components such as modem chips can be said to implement standards and whether component-level licensing falls within this ambit. The resolution of these key issues remains unclear.
Qualcomm explains that its commitments to ATIS and TIA do not require licenses to be made available for modem chips, because modem chips do not implement or practice cellular standards and the standards do not define the operation of modem chips.
In contrast, the FTC’s complaint raises the question of whether FRAND commitments extend to licensing at all levels. Different components needed for a device come together to facilitate the adoption and implementation of a standard. However, it does not logically follow that each individual component of the device separately practices or implements that standard even though it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.
These distinctions are significant from the point of interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of the standard. Understanding the meaning of a “user” becomes critical here and Qualcomm’s submission draws attention to this.
An important factor in the determination of a “user” of a particular standard is the extent to which the standard is practiced or implemented therein. Some standards development organisations (SDOs) have addressed this in their policies by clarifying that FRAND obligations extend to those “wholly compliant” or “fully conforming” to the specific standards. Clause 6.1 of the ETSI IPR Policy clarifies that a patent holder’s obligation to make licenses available is limited to “methods” and “equipments.” It defines an equipment as “a system or device fully conforming to a standard,” and methods as “any method or operation fully conforming to a standard.”
It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel has said in a decision that there is no agreement on the definition of the phrase “wholly compliant implementation.”
Device-level licensing is the prevailing industry-wide practice among companies like Ericsson, InterDigital, Nokia, and others. In November 2017, the European Commission issued guidelines on licensing of SEPs and took a balanced approach on this issue by not prescribing component-level licensing in its guidelines.
The former director general of ETSI, Karl Rosenbrock, adopts a contrary view, explaining that ETSI’s policy “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”
Dr. Bertram Huber, a legal expert who personally participated in the drafting of ETSI's IPR Policy, wrote a response to Rosenbrock in which he explains that ETSI's licensing obligations extend only to systems "fully conforming" to the standard:
[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

Huber highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice of manufacturers of complete end-devices concluding licenses to the standard-essential patents practiced in those end-devices.
Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs, in the 3rd Generation Partnership Project, a collaboration that develops cellular technologies. TIA and ATIS are both accredited by ANSI, so the policies each of these SDOs adopts are likely to influence the others. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.
The non-discrimination obligation, which according to the FTC requires Qualcomm to license the competitors who manufacture and sell chips, is limited by the scope of the IPR policies and contractual agreements that bind Qualcomm, and depends upon the specific SDO's policy. As discussed, the policies of ATIS and TIA are unclear on this.
In conclusion, the FTC's filing does not obviate the need to hear extrinsic evidence on what Qualcomm's commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA on whether they include component-level licensing, or whether modem chips in their entirety can be said to practice the standard, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.
The difficulty presented by the merger was, in some ways, its lack of difficulty: Even critics, while hearkening back to the Brandeisian fear of large firms, had little by way of legal objection to offer against the merger. Despite the acknowledged lack of an obvious legal basis for challenging the merger, most critics nevertheless expressed a somewhat inchoate and generalized concern that the merger would hasten the death of brick-and-mortar retail and imperil competition in the grocery industry. Critics further pointed to particular, related issues largely outside the scope of modern antitrust law — issues relating to the presumed effects of the merger on “localism” (i.e., small, local competitors), retail workers, startups with ancillary businesses (e.g., delivery services), data collection and use, and the like.
Steven Horwitz opened the symposium with an insightful and highly recommended post detailing the development of the grocery industry from its inception. Tracing through that history, Horwitz was optimistic that
Viewed from the long history of the evolution of the grocery store, the Amazon-Whole Foods merger made sense as the start of the next stage of that historical process. The combination of increased wealth that is driving the demand for upscale grocery stores, and the corresponding increase in the value of people’s time that is driving the demand for one-stop shopping and various forms of pick-up and delivery, makes clear the potential benefits of this merger.
Others in the symposium similarly acknowledged the potential transformation of the industry brought on by the merger, but challenged the critics’ despairing characterization of that transformation (Auer, Manne & Stout, Rinehart, Fruits, Atkinson).
At the most basic level, it was noted that, in the immediate aftermath of the merger, Whole Foods dropped prices across a number of categories as it sought to shore up its competitive position (Auer). Further, under relevant antitrust metrics — e.g., market share, ease of competitive entry, potential for exclusionary conduct — the merger was completely unobjectionable under existing doctrine (Fruits).
To critics’ claims that Amazon in general, and the merger in particular, was decimating the retail industry, several posts discussed the updated evidence suggesting that retail is not actually on the decline (although some individual retailers are certainly struggling to compete) (Auer, Manne & Stout). Moreover, and following from Horwitz’s account of the evolution of the grocery industry, it appears that the actual trajectory of the industry is not an either/or between online and offline, but instead a movement toward integrating both models into a single retail experience (Manne & Stout). Further, the post-merger flurry of business model innovation, venture capital investment, and new startup activity demonstrates that, confronted with entrepreneurial competitors like Walmart, Kroger, Aldi, and Instacart, Amazon’s impressive position online has not translated into an automatic domination of the traditional grocery industry (Manne & Stout).
Symposium participants more circumspect about the merger suggested that Amazon’s behavior may be laying the groundwork for an eventual monopsony case (Sagers). Further, it was suggested, a future Section 2 case, difficult under prevailing antitrust orthodoxy, could be brought with a creative approach to market definition in light of Amazon’s conduct with its marketplace participants, its aggressive ebook contracting practices, and its development and roll-out of its own private label brands (Sagers).
Skeptics also picked up on early critics’ concerns about the aggregation of large amounts of consumer data, and worried that the merger could be part of a pattern representing a real, long-term threat to consumers that antitrust does not take seriously enough (Bona & Levitsky). Sounding a further alarm, Hal Singer noted that Amazon’s interest in pushing into new markets with data generated by, for example, devices like its Echo line could bolster its ability to exclude competitors.
More fundamentally, these contributors echoed the merger critics’ concerns that antitrust does not adequately take account of other values such as “promoting local, community-based, organic food production or ‘small firms’ in general.” (Bona & Levitsky; Singer).
Rob Atkinson, however, pointed out that these values are idiosyncratic and not likely shared by the vast majority of the population — and that antitrust law shouldn’t have anything to do with them:
In short, most of the opposition to the Amazon/Whole Foods merger had little or nothing to do with economics and consumer welfare. It had everything to do with a competing vision for the kind of society we want to live in. The neo-Brandeisian opponents, who Lind and I term "progressive localists", seek an alternative economy predominantly made up of small firms, supported by big government and protected from global competition.
And Dirk Auer noted that early critics’ prophecies of foreclosure of competition through “data leveraging” and below-cost pricing hadn’t remotely come to pass, thus far.
Meanwhile, other contributors noted the paucity of evidence supporting many of these assertions, and pointed out the manifest value the merger seemed to be creating by pressuring competitors to adapt and better respond to consumers' preferences (Horwitz, Rinehart, Auer, Fruits, Manne & Stout) — in the process shoring up, rather than killing, even smaller retailers that are willing and able to evolve with changing technology and shifting consumer preferences. "For all the talk of retail dying, the stores that are actually dying are the ones that fail to cater to their customers, not the ones that happen to be offline" (Manne & Stout).
At the same time, not all merger skeptics were moved by the neo-Brandeisian assertions. Chris Sagers, for example, found much of the populist antitrust objection to be more public relations than substance. He suggested perhaps not taking these ideas and their promoters so seriously, and instead focusing on antitrust advocates with "real ideas" (like Sagers himself, of course).
Coming from a different angle, Will Rinehart also suggested not taking the criticisms too seriously, pointing to the evolving and complicated effects of the merger as Exhibit A for the need for regulatory humility:
Finally, this deal reiterates the need for regulatory humility. Almost immediately after the Amazon-Whole Foods merger was closed, prices at the store dropped and competitors struck a flurry of deals. Investments continue and many in the grocery retail space are bracing for a wave of enhancement to take hold. Even some of the most fierce critics of the deal will have to admit there is a lot of uncertainty. It is unclear what business model will make the most sense in the long run, how these technologies will ultimately become embedded into production processes, and how consumers will benefit. Combined, these features underscore the difficulty, but the necessity, in implementing dynamic insights into antitrust institutions.
Offering generous praise for this symposium (thanks, Will!) and echoing the points made by other participants regarding the dynamic and unknowable course of competition (Auer, Horwitz, Manne & Stout, Fruits), Rinehart concludes:
Retrospectives like this symposium offer a chance to understand what the discussion missed at the time and what is needed to better understand innovation and competition in markets. While it might be too soon to close the book on this case, the impact can already be felt in the positions others are taking in response. In the end, the deal probably won’t be remembered for extending Amazon’s dominance into another market because that is a phantom concern. Rather, it will probably be best remembered as the spark that drove traditional retail outlets to modernize their logistics and fulfillment efforts.
For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners, and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.
What actually happened in the year following the merger is nearly the opposite: Competition among grocery stores has been more fierce than ever. “Offline” retailers are expanding — and innovating — to meet Amazon’s challenge, and many of them are booming. Disruption is never neat and tidy, but, in addition to saving Whole Foods from potential oblivion, the merger seems to have lit a fire under the rest of the industry.
This result should not be surprising to anyone who understands the nature of the competitive process. But it does highlight an important lesson: competition often comes from unexpected quarters and evolves in unpredictable ways, emerging precisely out of the kinds of adversity opponents of the merger bemoaned.
So why this deal, in this symposium, and why now? The best substantive reason I could think of is admittedly one that I personally find important. As I said, I think we should take it much more seriously as a general matter, especially in highly dynamic contexts like Silicon Valley. There has been a history of arguably pre-emptive, market-occupying vertical and conglomerate acquisitions, by big firms of smaller ones that are technologically or otherwise disruptive. The idea is that the big firms sit back and wait as some new market develops in some adjacent sector. When that new market ripens to the point of real promise, the big firm buys some significant incumbent player. The aim is not just to facilitate its own benevolent, wholesome entry, but to set up hopefully prohibitive challenges to other de novo entrants. Love it or leave it, that theory plausibly characterizes lots and lots of acquisitions in recent decades that secured easy antitrust approval, precisely because they weren’t obviously, presently horizontal. Many people think that is true of some of Amazon’s many acquisitions, like its notoriously aggressive, near-hostile takeover of Diapers.com.
Amazon offers Prime discounts to Whole Food customers and offers free delivery for Prime members. Those are certainly consumer benefits. But with those comes a cost, which may or may not be significant. By bundling its products with collective discounts, Amazon makes it more attractive for shoppers to shift their buying practices from local stores to the internet giant. Will this eventually mean that local stores will become more inefficient, based on lower volume, and will eventually close? Do most Americans care about the potential loss of local supermarkets and specialty grocers? No one, including antitrust enforcers, seems to have asked them.
The gist of these arguments is simple. The Amazon / Whole Foods merger would lead to the exclusion of competitors, with Amazon leveraging its swaths of data and pricing below costs. All of this raises a simple question: have these prophecies come to pass?
The problem with antitrust populism is not just that it leads to unfounded predictions regarding the negative effects of a given business practice. It also ignores the significant gains that consumers may reap from these practices. The Amazon / Whole Foods merger offers a case in point.
Even with these caveats, it’s still worth looking at the recent trends. Whole Foods’s sales since 2015 have been flat, with only low single-digit growth, according to data from Second Measure. This suggests Whole Foods is not yet getting a lift from the relationship. However, the percentage of Whole Foods’ new customers who are Prime Members increased post-merger, from 34 percent in June 2017 to 41 percent in June 2018. This suggests that Amazon’s platform is delivering customers to Whole Foods.
The negativity that surrounded the deal at its announcement made Whole Foods seem like an innocent player, but it is important to recall that the company was hemorrhaging and looking to exit. Throughout the 2010s, the company lost its market-leading edge as others began to offer the same kinds of services and products. Still, the company was able to sell near the top of its value to Amazon because it was able to court so many suitors. Given all of these features, Whole Foods could have been using the exit as a mechanism to appropriate another firm's rent.
Brandeis is back, with today's neo-Brandeisians reflexively opposing virtually all mergers involving large firms. For them, industry concentration has grown to crisis proportions and breaking up big companies should be the animating goal not just of antitrust policy but of U.S. economic policy generally. The key to understanding the neo-Brandeisian opposition to the Whole Foods/Amazon merger is that it has nothing to do with consumer welfare, and everything to do with a large-firm animus. Sabeel Rahman, a Roosevelt Institute scholar, concedes that big firms give us higher productivity, and hence lower prices, but he dismisses the value of that. He writes, "If consumer prices are our only concern, it is hard to see how Amazon, Comcast, and companies such as Uber need regulation." And this gets to the key point regarding most of the opposition to the merger: it had nothing to do with concerns about monopolistic effects on economic efficiency or consumer prices. It had everything to do with opposition to big firms for the sole reason that they are big.