
The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like the integration of Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet in the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results on the Search page increase the competition that Amazon faces, because they present consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face.

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisitions, is extremely important. Between 2008 and 2019, 90 percent of U.S. start-up exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired. It would therefore reduce innovation. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes substantially—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that a user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system. 

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

Sponsored by Rep. Joe Neguse (D-Colo.), this bill mirrors language in the Endless Frontier Act recently passed by the U.S. Senate and would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the new schedule would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
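To make the proposed schedule easier to follow, the sketch below encodes it as a simple lookup. It is purely illustrative—the function name is my own and tier boundaries are handled only approximately—not an official fee calculator.

```python
# Illustrative lookup of the proposed merger filing fee schedule summarized above.
# Tier boundaries are handled approximately; this is a sketch, not an official
# fee calculator.

def proposed_filing_fee(deal_value):
    """Return the proposed filing fee (in USD) for a transaction of the given value (in USD)."""
    tiers = [
        (5_000_000_000, 2_250_000),  # more than $5 billion
        (2_000_000_000, 800_000),    # $2 billion to $5 billion
        (1_000_000_000, 400_000),    # $1 billion to $2 billion
        (500_000_000, 250_000),      # $500 million to $1 billion
        (161_500_000, 100_000),      # $161.5 million to $500 million
    ]
    for threshold, fee in tiers:
        if deal_value > threshold:
            return fee
    return 30_000                    # below $161.5 million

print(proposed_filing_fee(7_500_000_000))  # 2250000
print(proposed_filing_fee(300_000_000))    # 100000
```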

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually a good thing depends on how the agencies end up spending it.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to enforce whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

John Carreyrou’s marvelous book Bad Blood chronicles the rise and fall of Theranos, the one-time Silicon Valley darling that was revealed to be a house of cards.[1] Theranos’s Svengali-like founder, Elizabeth Holmes, convinced scores of savvy business people (mainly older men) that her company was developing a machine that could detect all manner of maladies from a small quantity of a patient’s blood. Turns out it was a fraud. 

I had a couple of recurring thoughts as I read Bad Blood. First, I kept thinking about how Holmes’s fraud might impair future medical innovation. Something like Theranos’s machine would eventually be developed, I figured, but Holmes’s fraud would likely set things back by making investors leery of blood-based, multi-disease diagnostics.

I also had a thought about the causes of Theranos’s spectacular failure. A key problem, it seemed, was that the company tried to do too many things at once: develop diagnostic technologies, design an elegant machine (Holmes was obsessed with Steve Jobs and insisted that Theranos’s machine resemble a sleek Apple device), market the product, obtain regulatory approval, scale the operation by getting Theranos machines in retail chains like Safeway and Walgreens, and secure third-party payment from insurers.

A thought that didn’t occur to me while reading Bad Blood was that a multi-disease blood diagnostic system would soon be developed but would be delayed, or possibly even precluded from getting to market, by an antitrust enforcement action based on things the developers did to avoid the very problems that doomed Theranos. 

Sadly, that’s where we are with the Federal Trade Commission’s misguided challenge to the merger of Illumina and Grail.

Founded in 1998, San Diego-based Illumina is a leading provider of products used in genetic sequencing and genomic analysis. Illumina produces “next generation sequencing” (NGS) platforms that are used for a wide array of applications (genetic tests, etc.) developed by itself and other companies.

In 2015, Illumina founded Grail for the purpose of developing a blood test that could detect cancer in asymptomatic individuals—the “holy grail” of cancer diagnosis. Given the superior efficacy and lower cost of treatments for early- versus late-stage cancers, success by Grail could save millions of lives and billions of dollars.

Illumina created Grail as a separate entity in which it initially held a controlling interest (having provided the bulk of Grail’s $100 million Series A funding). Legally separating Grail in this fashion, rather than running it as an Illumina division, offered a number of benefits. It limited Illumina’s liability for Grail’s activities, enabling Grail to take greater risks. It mitigated the Theranos problem of managers’ being distracted by too many tasks: Grail managers could concentrate exclusively on developing a viable cancer-screening test, while Illumina’s management continued focusing on that company’s core business. It made it easier for Grail to attract talented managers, who would rather come in as corporate officers than as division heads. (Indeed, Grail landed Jeff Huber, a high-profile Google executive, as its initial CEO.) Structuring Grail as a majority-owned subsidiary also allowed Illumina to attract outside capital, with the prospect of raising more money in the future by selling new Grail stock to investors.

In 2017, Grail did exactly that, issuing new shares to investors in exchange for $1 billion. While this capital infusion enabled the company to move forward with its promising technologies, the creation of new shares meant that Illumina no longer held a controlling interest in the firm. Its ownership interest dipped below 20 percent and now stands at about 14.5 percent of Grail’s voting shares.  

Setting up Grail so as to facilitate outside capital formation and attract top managers who could focus single-mindedly on product development has paid off. Grail has now developed a blood test that, when processed on Illumina’s NGS platform, can accurately detect a number of cancers in asymptomatic individuals. Grail predicts that this “liquid biopsy,” called Galleri, will eventually be able to detect up to 50 cancers before physical symptoms manifest. Grail is also developing other blood-based cancer tests, including one that confirms cancer diagnoses in patients suspected to have cancer and another designed to detect cancer recurrence in patients who have undergone treatment.

Grail now faces a host of new challenges. In addition to continuing to develop its tests, Grail needs to:  

  • Engage in widespread testing of its cancer-detection products on up to 50 different cancers;
  • Process and present the information from its extensive testing in formats that will be acceptable to regulators;
  • Navigate the pre-market regulatory approval process in different countries across the globe;
  • Secure commitments from third-party payors (governments and private insurers) to provide coverage for its tests;
  • Develop means of manufacturing its products at scale;
  • Create and implement measures to ensure compliance with FDA’s Quality System Regulation (QSR), which governs virtually all aspects of medical device production (design, testing, production, process controls, quality assurance, labeling, packaging, handling, storage, distribution, installation, servicing, and shipping); and
  • Market its tests to hospitals and health-care professionals.

These steps are all required to secure widespread use of Grail’s tests. And, importantly, such widespread use will actually improve the quality of the tests. Grail’s tests analyze the DNA in a patient’s blood to look for methylation patterns that are known to be associated with cancer. In essence, the tests work by comparing the methylation patterns in a test subject’s DNA against a database of genomic data collected from large clinical studies. With enough comparison data, the tests can indicate not only the presence of cancer but also where in the body the cancer signal is coming from. And because Grail’s tests use machine learning to hone their algorithms in response to new data collected from test usage, the greater the use of Grail’s tests, the more accurate, sensitive, and comprehensive they become.     

To assist with the various tasks needed to achieve speedy and widespread use of its tests, Grail decided to reunite with Illumina. In September 2020, the companies entered a merger agreement under which Illumina would acquire the 85.5 percent of Grail voting shares it does not already own for cash and stock worth $7.1 billion and additional contingent payments of $1.2 billion to Grail’s non-Illumina shareholders.

Recombining with Illumina will allow Grail—which has appropriately focused heretofore solely on product development—to accomplish the tasks now required to get its tests to market. Illumina has substantial laboratory capacity that Grail can access to complete the testing needed to refine its products and establish their effectiveness. As the leading global producer of NGS platforms, Illumina has unparalleled experience in navigating the regulatory process for NGS-related products, producing and marketing those products at scale, and maintaining compliance with complex regulations like FDA’s QSR. With nearly 3,000 international employees located in 26 countries, it has obtained regulatory authorizations for NGS-based tests in more than 50 jurisdictions around the world.  It also has long-standing relationships with third-party payors, health systems, and laboratory customers. Grail, by contrast, has never obtained FDA approval for any products, has never manufactured NGS-based tests at scale, has only a fledgling regulatory affairs team, and has far less extensive contacts with potential payors and customers. By remaining focused on its key objective (unlike Theranos), Grail has achieved product-development success. Recombining with Illumina will now enable it, expeditiously and efficiently, to deploy its products across the globe, generating user data that will help improve the products going forward.

In addition to these benefits, the combination of Illumina and Grail will eliminate a problem that occurs when producers of complementary products each operate in markets that are not fully competitive: double marginalization. When sellers of products that are used together each possess some market power due to a lack of competition, their uncoordinated pricing decisions may result in less surplus for each of them and for consumers of their products. Combining so that they can coordinate pricing will leave them and their customers better off.

Unlike a producer participating in a competitive market, a producer that faces little competition can enhance its profits by raising its price above its incremental cost.[2] But there are limits on its ability to do so. As the well-known monopoly pricing model shows, even a monopolist has a “profit-maximizing price” beyond which any incremental price increase would lose money.[3] Raising price above that level would hurt both consumers and the monopolist.

When consumers are deciding whether to purchase products that must be used together, they assess the final price of the overall bundle. This means that when two sellers of complementary products both have market power, there is an above-cost, profit-maximizing combined price for their products. If the complement sellers individually raise their prices so that the combined price exceeds that level, they will reduce their own aggregate welfare and that of their customers.

This unfortunate situation is likely to occur when market power-possessing complement producers are separate companies that cannot coordinate their pricing. In setting its individual price, each separate firm will attempt to capture as much surplus for itself as possible. This will cause the combined price to rise above the profit-maximizing level. If they could unite, the complement sellers would coordinate their prices so that the combined price was lower and the sellers’ aggregate profits higher.

Here, Grail and Illumina provide complementary products (cancer-detection tests and the NGS platforms on which they are processed), and each faces little competition. If they price separately, their aggregate prices are likely to exceed the profit-maximizing combined price for the cancer test and NGS platform access. If they combine into a single firm, that firm would maximize its profits by lowering prices so that the aggregate test/platform price is the profit-maximizing combined price.  This would obviously benefit consumers.
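To make the double-marginalization intuition concrete, here is a small numerical sketch assuming linear demand for the test-plus-platform bundle; all parameter values are hypothetical and are not estimates for Illumina or Grail.

```python
# Illustrative double-marginalization example with linear demand for a bundle of
# two complements, Q = a - (p1 + p2). All parameter values are hypothetical.

a = 100.0            # demand intercept for the combined bundle
c1, c2 = 10.0, 20.0  # marginal costs of the two complement producers
c = c1 + c2

# Integrated firm: chooses the combined price P to maximize (P - c) * (a - P),
# which gives the familiar monopoly price P = (a + c) / 2.
P_integrated = (a + c) / 2
profit_integrated = (P_integrated - c) * (a - P_integrated)

# Separate firms: each sets its own price taking the other's as given. Solving
# the first-order conditions p_i = (a - p_j + c_i) / 2 jointly gives a combined
# price P = (2a + c) / 3, which exceeds the integrated firm's price.
P_separate = (2 * a + c) / 3
profit_separate = (P_separate - c) * (a - P_separate)  # firms' combined profit

print(f"Integrated: combined price {P_integrated:.1f}, joint profit {profit_integrated:.1f}")
print(f"Separate:   combined price {P_separate:.1f}, joint profit {profit_separate:.1f}")
# The integrated firm charges less (65.0 vs. 76.7) and earns more (1225.0 vs.
# 1088.9), so both the firms and their customers are better off, as described above.
```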

In light of the social benefits the Grail/Illumina merger offers—speeding up and lowering the cost of getting Grail’s test approved and deployed at scale, enabling improvement of the test with more extensive user data, eliminating double marginalization—one might expect policymakers to cheer the companies’ recombination. The FTC, however, is trying to block it.  In late March, the commission brought an action claiming that the merger would violate Section 7 of the Clayton Act by substantially reducing competition in a line of commerce.

The FTC’s theory is that recombining Illumina and Grail will impair competition in the market for “multi-cancer early detection” (MCED) tests. The commission asserts that the combined company would have both the opportunity and the motivation to injure rival producers of MCED tests.

The opportunity to do so would stem from the fact that MCED tests must be processed on NGS platforms, which are produced exclusively by Illumina. Illumina could charge Grail’s rivals or their customers higher prices for access to its NGS platforms (or perhaps deny access altogether) and could withhold the technical assistance rivals would need to secure both regulatory approval of their tests and coverage by third-party payors.

But why would Illumina take this tack, given that it would be giving up profits on transactions with producers and users of other MCED tests? The commission asserts that the losses a combined Illumina/Grail would suffer in the NGS platform market would be more than offset by gains stemming from reduced competition in the MCED test market. Thus, the combined company would have a motive, as well as an opportunity, to cause anticompetitive harm.

There are multiple problems with the FTC’s theory. As an initial matter, the market the commission claims will be impaired doesn’t exist. There is no MCED test market for the simple reason that there are no commercializable MCED tests. If allowed to proceed, the Illumina/Grail merger may create such a market by facilitating the approval and deployment of the first MCED test. At present, however, there is no such market, and the chances of one ever emerging will be diminished if the FTC succeeds in blocking the recombination of Illumina and Grail.

Because there is no existing market for MCED tests, the FTC’s claim that a combined Illumina/Grail would have a motivation to injure MCED rivals—potential consumers of Illumina’s NGS platforms—is rank speculation. The commission has no idea what profits Illumina would earn from NGS platform sales related to MCED tests, what profits Grail would earn on its own MCED tests, and how the total profits of the combined company would be affected by impairing opportunities for rival MCED test producers.

In the only relevant market that does exist—the cancer-detection market—there can be no question about the competitive effect of an Illumina/Grail merger: It would enhance competition by speeding the creation of a far superior offering that promises to save lives and substantially reduce health-care costs. 

There is yet another problem with the FTC’s theory of anticompetitive harm. The commission’s concern that a recombined Illumina/Grail would foreclose Grail’s rivals from essential NGS platforms and needed technical assistance is obviated by Illumina’s commitments. Specifically, Illumina has irrevocably offered current and prospective oncology customers 12-year contract terms that would guarantee them the same access to Illumina’s sequencing products that they now enjoy, with no price increase. Indeed, the offered terms obligate Illumina not only to refrain from raising prices but also to lower them by at least 43% by 2025 and to provide regulatory and technical assistance requested by Grail’s potential rivals. Illumina’s continued compliance with its firm offer will be subject to regular audits by an independent auditor.

In the end, then, the FTC’s challenge to the Illumina/Grail merger is unjustified. The initial separation of Grail from Illumina encouraged the managerial focus and capital accumulation needed for successful test development. Recombining the two firms will now expedite and lower the costs of the regulatory approval and commercialization processes, permitting Grail’s tests to be widely used, which will enhance their quality. Bringing Grail’s tests and Illumina’s NGS platforms within a single company will also benefit consumers by eliminating double marginalization. Any foreclosure concerns are entirely speculative and are obviated by Illumina’s contractual commitments.

In light of all these considerations, one wonders why the FTC challenged this merger (and on a 4-0 vote) in the first place. Perhaps it was the populist forces from left and right that are pressuring the commission to generally be more aggressive in policing mergers. Some members of the commission may also worry, legitimately, that if they don’t act aggressively on a vertical merger, Congress will amend the antitrust laws in a deleterious fashion. But the commission has picked a poor target. This particular merger promises tremendous benefit and threatens little harm. The FTC should drop its challenge and encourage its European counterparts to do the same. 


[1] If you don’t have time for Carreyrou’s book (and you should make time if you can), HBO’s Theranos documentary is pretty solid.

[2] This ability is market power.  In a perfectly competitive market, any firm that charges an above-cost price will lose sales to rivals, who will vie for business by lowering their prices down to the level of their cost.

[3] Under the model, this is the price that emerges at the output level where the producer’s marginal revenue equals its marginal cost.

The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).

Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.

This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.

There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.

What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for a firm to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D becomes a stock of R&D that it can use in the production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.

This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate a firm’s stock of R&D in any particular year. This perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.

All this method requires to obtain a firm’s stock of R&D for a given year is the firm’s R&D stock in the previous year and its investment in R&D (i.e., its R&D expenditures) in the current year. This year’s R&D stock is then the sum of this year’s R&D expenditures and the undepreciated portion of last year’s stock that is carried forward into this year.
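In symbols, and assuming (as is standard in implementations of Hall’s approach) that a year’s spending enters the stock in that same year, the recursion can be written as

$$K_t = R_t + (1 - \delta)\,K_{t-1},$$

where $K_t$ is the firm’s stock of R&D in year $t$, $R_t$ is its R&D expenditure in that year, and $\delta$ is the assumed annual depreciation rate of the R&D stock.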

As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would only use a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of knowledge through the increased expertise of its employees, it seems reasonable to include these expenditures in a firm’s stock of R&D.

As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation rate of 15% per year (compared to roughly 7% per year for physical capital), and estimates presented by Hall, along with Wendy Li (2018), suggest that in some industries the figure can be as high as 50%, with a wide range across industries.

The other assumption required for this method is an estimate of the firm’s initial stock of R&D. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990-2016. This means that you can calculate the firm’s stock of R&D for each year once you have its R&D stock in the previous year via the formula above.

When calculating the firm’s R&D stock for 2016, you need to know what its R&D stock was in 2015, while to calculate its R&D stock for 2015 you need to know its R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.

However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.

There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
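As a concrete illustration, the minimal Python sketch below implements this calculation using Hall’s suggested 15% depreciation rate and 8% average growth rate; the expenditure series and every other figure are hypothetical placeholders rather than data for any real firm.

```python
# Minimal sketch of the perpetual inventory method for a firm's R&D stock,
# following the approach described above. The expenditure figures are
# hypothetical placeholders, not data for any real firm.

DEPRECIATION = 0.15  # Hall's suggested annual depreciation rate for R&D stock
GROWTH = 0.08        # Hall's assumed average growth rate of R&D spending

def rd_stock_series(expenditures):
    """Return the estimated R&D stock for each year of an annual expenditure series.

    The first year's stock is approximated, per Hall, as that year's expenditure
    divided by (depreciation + growth); each later year's stock is that year's
    expenditure plus the undepreciated stock carried forward from the prior year.
    """
    stocks = [expenditures[0] / (DEPRECIATION + GROWTH)]
    for spend in expenditures[1:]:
        stocks.append(spend + (1 - DEPRECIATION) * stocks[-1])
    return stocks

# Hypothetical annual R&D spending, 1990 onward (in $ millions)
spending = [50, 55, 60, 66, 73, 80, 88]
for year, stock in zip(range(1990, 1990 + len(spending)), rd_stock_series(spending)):
    print(year, round(stock, 1))

# Note: at a 15% depreciation rate, (1 - 0.15) ** 5 is roughly 0.44, so more than
# half of any error in the assumed starting value washes out within five years.
```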

Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above does. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value it. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.

The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.

Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.

In an age of antitrust populism on both ends of the political spectrum, federal and state regulators face considerable pressure to deploy the antitrust laws against firms that have dominant market shares. Yet federal case law makes clear that merely winning the race for a market is an insufficient basis for antitrust liability. Rather, any plaintiff must show that the winner either secured or is maintaining its dominant position through practices that go beyond vigorous competition. Any other principle would inhibit the competitive process that the antitrust laws are designed to promote. Federal judges who enjoy life tenure are far more insulated from outside pressures and therefore more likely to demand evidence of anticompetitive practices as a predicate condition for any determination of antitrust liability.

This separation of powers between the executive branch, which prosecutes alleged infractions of the law, and the judicial branch, which polices the prosecutor, is the simple genius behind the divided system of government generally attributed to the eighteenth-century French thinker, Montesquieu. The practical wisdom of this fundamental principle of political design, which runs throughout the U.S. Constitution, can be observed in full force in the current antitrust landscape, in which the federal courts have acted as a bulwark against several contestable enforcement actions by antitrust regulators.

In three headline cases brought by the Department of Justice or the Federal Trade Commission since 2017, the prosecutorial bench has struck out in court. Under the exacting scrutiny of the judiciary, government litigators failed to present sufficient evidence that a dominant firm had engaged in practices that caused, or were likely to cause, significant anticompetitive effects. In each case, these enforcement actions, applauded by policymakers and commentators who tend to follow “big is bad” intuitions, foundered when assessed in light of judicial precedent, the factual record, and the economic principles embedded in modern antitrust law. An ongoing suit, filed by the FTC this year more than 18 months after the closing of the targeted acquisition, exhibits similar factual and legal infirmities.

Strike 1: The AT&T/Time-Warner Transaction

In response to the announcement of AT&T’s $85.4 billion acquisition of Time Warner, the DOJ filed suit in 2017 to prevent the formation of a dominant provider in home-video distribution that would purportedly deny competitors access to “must-have” content. As I have observed previously, this theory of the case suffered from two fundamental difficulties. 

First, content is an abundant and renewable resource so it is hard to see how AT&T+TW could meaningfully foreclose competitors’ access to this necessary input. Even in the hypothetical case of potentially “must-have” content, it was unclear whether it would be economically rational for post-acquisition AT&T regularly to deny access to other distributors, given that doing so would imply an immediate and significant loss in licensing revenues without any clearly offsetting future gain in revenues from new subscribers.

Second, home-video distribution is a market lapsing rapidly into obsolescence as content monetization shifts from home-based viewing to a streaming environment in which consumers expect “anywhere, everywhere” access. The blockbuster acquisition was probably best understood as a necessary effort to adapt to this new environment (already populated by several major streaming platforms), rather than an otherwise puzzling strategy to spend billions to capture a market on the verge of commercial irrelevance. 

Strike 2: The Sabre/Farelogix Acquisition

In 2019, the DOJ filed suit to block the $360 million acquisition of Farelogix by Sabre, one of three leading airline booking platforms, on the ground that it would substantially lessen competition. The factual basis for this legal diagnosis was unclear. In 2018, Sabre earned approximately $3.9 billion in worldwide revenues, compared to $40 million for Farelogix. Given this drastic difference in scale, and the almost trivial market share attributable to Farelogix, it is difficult to fathom how the DOJ could credibly assert that the acquisition “would extinguish a crucial constraint on Sabre’s market power.”

To use a now much-discussed theory of antitrust liability, it might nonetheless be argued that Farelogix posed a “nascent” competitive threat to the Sabre platform. That is: while Farelogix is small today, it may become big enough tomorrow to pose a threat to Sabre’s market leadership. 

But that theory runs straight into a highly inconvenient fact. Farelogix was founded in 1998 and, during the ensuing two decades, had neither achieved broad adoption of its customized booking technology nor succeeded in offering airlines a viable pathway to bypass the three major intermediary platforms. The proposed acquisition therefore seems best understood as a mutually beneficial transaction in which a smaller (and not very nascent) firm elects to monetize its technology by embedding it in a leading platform that seeks to innovate by acquisition. Robust technology ecosystems do this all the time, efficiently exploiting the natural complementarities between a smaller firm’s “out of the box” innovation with the capital-intensive infrastructure of an incumbent. (Postscript: While the DOJ lost this case in federal court, Sabre elected in May 2020 not to close following similarly puzzling opposition by British competition regulators.) 

Strike 3: FTC v. Qualcomm

The divergence of theories of anticompetitive risk from market realities is vividly illustrated by the landmark suit filed by the FTC in 2017 against Qualcomm. 

The litigation pursued nothing less than a wholesale reengineering of the IP licensing relationships between innovators and implementers that underlie the global smartphone market. Those relationships principally consist of device-level licenses between IP innovators such as Qualcomm and device manufacturers and distributors such as Apple. This structure efficiently collects remuneration from the downstream segment of the supply chain for upstream firms that invest in pushing forward the technology frontier. The FTC thought otherwise and pursued a remedy that would have required Qualcomm to offer licenses to its direct competitors in the chip market and to rewrite its existing licenses with device producers and other intermediate users on a component, rather than device, level. 

Remarkably, these drastic forms of intervention into private-ordering arrangements rested on nothing more than what former FTC Commissioner Maureen Ohlhausen once appropriately called a “possibility theorem.” The FTC deployed a mostly theoretical argument that Qualcomm had extracted an “unreasonably high” royalty that had potentially discouraged innovation, impeded entry into the chip market, and inflated retail prices for consumers. Yet these claims run contrary to all available empirical evidence, which indicates that the mobile wireless device market has exhibited since its inception declining quality-adjusted prices, increasing output, robust entry into the production market, and continuous innovation. The mismatch between the government’s theory of market failure and the actual record of market success over more than two decades challenges the policy wisdom of disrupting hundreds of existing contractual arrangements between IP licensors and licensees in a thriving market. 

The FTC nonetheless secured from the district court a sweeping order that would have had precisely this disruptive effect, including imposing a “duty to deal” that would have required Qualcomm to license directly its competitors in the chip market. The Ninth Circuit stayed the order and, on August 11, 2020, issued an unqualified reversal, stating that the lower court had erroneously conflated “hypercompetitive” (good) with anticompetitive (bad) conduct and observing that “[t]hroughout its analysis, the district court conflated the desire to maximize profits with an intent to ‘destroy competition itself.’” In unusually direct language, the appellate court also observed (as even the FTC had acknowledged on appeal) that the district court’s ruling was incompatible with the Supreme Court’s ruling in Aspen Skiing Co. v. Aspen Highlands Skiing Corp., which strictly limits the circumstances in which a duty to deal can be imposed. In some cases, it appears that additional levels of judicial review are necessary to protect antitrust law against not only administrative but judicial overreach.

Axon v. FTC

For the most explicit illustration of the interface between Montesquieu’s principle of divided government and the risk posed to antitrust law by cases of prosecutorial excess, we can turn to an unusual and ongoing litigation, Axon v. FTC.

The HSR Act and Post-Consummation Merger Challenges

The HSR Act provides regulators with the opportunity to preemptively challenge acquisitions and related transactions on antitrust grounds prior to those transactions having been consummated. Since its enactment in 1976, this statutory innovation has laudably increased dealmakers’ ability to close transactions with a high level of certainty that regulators would not belatedly seek to “unscramble the egg.” While the HSR Act does not foreclose this contingency since regulatory failure to challenge a transaction only indicates current enforcement intentions, it is probably fair to say that M&A dealmakers generally assume that regulators would reverse course only in exceptional circumstances. In turn, the low prospect of after-the-fact regulatory intervention encourages the efficient use of M&A transactions for the purpose of shifting corporate assets to users that value those assets most highly.

The FTC’s Belated Attack on the Axon/Vievu Acquisition

Dealmakers may be revisiting that understanding in the wake of the FTC’s decision in January 2020 to challenge the acquisition of Vievu by Axon, each being a manufacturer of body-worn camera equipment and related data-management software for law enforcement agencies. The acquisition had closed in May 2018 but had not been reported through HSR since it fell well below the reportable deal threshold. Given a total transaction value of $7 million, the passage of more than 18 months since closing, and the insolvency or near-insolvency of the target company, it is far from obvious that the Axon acquisition posed a material competitive risk that merits unsettling expectations that regulators will typically not challenge a consummated transaction, especially in the case of what is a micro-sized nebula in the M&A universe. 

These concerns are heightened by the fact that the FTC suit relies on a debatably narrow definition of the relevant market (body-camera equipment and related “cloud-based” data management software for police departments in large metropolitan areas, rather than a market that encompassed more generally defined categories of body-worn camera equipment, law enforcement agencies, and data management services). Even within this circumscribed market, there are apparently several companies that offer related technologies and an even larger group that could plausibly enter in response to perceived profit opportunities. Despite this contestable legal position, Axon’s court filing states that the FTC offered to settle the suit on stiff terms: Axon must agree to divest itself of the Vievu assets and to license all of Axon’s pre-transaction intellectual property to the buyer of the Vievu assets. This effectively amounts to an opportunistic use of the antitrust merger laws to engage in post-transaction market reengineering, rather than merely blocking an acquisition to maintain the pre-transaction status quo.

Does the FTC Violate the Separation of Powers?

In a provocative strategy, Axon has gone on the offensive and filed suit in federal district court to challenge on constitutional grounds the long-standing internal administrative proceeding through which the FTC’s antitrust claims are initially adjudicated. Unlike the DOJ, the FTC’s first stop in the litigation process (absent settlement) is not a federal district court but an internal proceeding before an administrative law judge (“ALJ”), whose ruling can then be appealed to the Commission. Axon is effectively arguing that this administrative internalization of the judicial function violates the separation of powers principle as implemented in the U.S. Constitution. 

Were a court writing on a clean slate, Axon’s claim would be eminently reasonable. The fact that FTC-paid personnel sit on both sides of the internal adjudicative process as prosecutor (the FTC litigation team) and judge (the ALJ and the Commissioners) locates the executive and judicial functions in the hands of a single administrative entity. (To be clear, the Commission’s rulings are appealable to federal court, albeit at significant cost and delay.) In any event, a court presented with Axon’s claim—as of this writing, the Ninth Circuit (taking the case on appeal by Axon)—is not writing on a clean slate and is most likely reluctant to accept a claim that would trigger challenges to the legality of other similarly structured adjudicative processes at other agencies. Nonetheless, Axon’s argument does raise important concerns as to whether certain elements of the FTC’s adjudicative mechanism (as distinguished from the very existence of that mechanism) could be refined to mitigate the conflicts of interest that arise in its current form.

Conclusion

Antitrust vigilance certainly has its place, but it also has its limits. Given the aspirational language of the antitrust statutes and the largely unlimited structural remedies to which an antitrust litigation can lead, there is an inevitable risk of prosecutorial overreach that can betray the fundamental objective to protect consumer welfare. Applied to the antitrust context, the separation of powers principle mitigates this risk by subjecting enforcement actions to judicial examination, which is in turn disciplined by the constraints of appellate review and stare decisis. A rich body of federal case law implements this review function by anchoring antitrust in a decisionmaking framework that promotes the public’s interest in deterring business practices that endanger the competitive process behind a market-based economy. As illustrated by the recent string of failed antitrust suits, and the ongoing FTC litigation against Axon, that same decisionmaking framework can also protect the competitive process against regulatory practices that pose this same type of risk.

Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Nuechterlein on the lack of guidance in the draft vertical merger guidelines

Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Relatedly, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without greater specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Nuechterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.

Nuechterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he argues that linking the two separate 20 percent thresholds (relevant market and related product) leaves too small a set of situations in which firms can qualify for the safe harbor. On his view, the linked thresholds do more to preserve the agencies’ discretion than to provide clarity to firms and consumers.
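
As a purely illustrative sketch of that point (the figures and the “disjunctive” alternative below are hypothetical, not anything proposed in the guidelines), the conjunctive structure of the screen can be seen in a few lines of code: requiring both figures to come in under 20% admits far fewer transactions than a screen keyed to either figure alone.

```python
# Hypothetical illustration of the draft guidelines' conjunctive 20%/20% screen.
# "share" is the parties' share of the relevant market; "related_use" is the
# share of the relevant market in which the related product is used.
# All figures below are made up for illustration.

THRESHOLD = 0.20

def conjunctive_safe_harbor(share: float, related_use: float) -> bool:
    """Safe harbor only if BOTH figures fall below the threshold (the draft's approach)."""
    return share < THRESHOLD and related_use < THRESHOLD

def disjunctive_screen(share: float, related_use: float) -> bool:
    """A looser, hypothetical alternative: either figure below the threshold suffices."""
    return share < THRESHOLD or related_use < THRESHOLD

hypothetical_mergers = [
    (0.15, 0.10),  # both figures low
    (0.15, 0.35),  # low market share, high related-product use
    (0.30, 0.10),  # high market share, low related-product use
    (0.30, 0.35),  # both figures high
]

for share, related_use in hypothetical_mergers:
    print(f"share={share:.0%}, related use={related_use:.0%}: "
          f"conjunctive={conjunctive_safe_harbor(share, related_use)}, "
          f"disjunctive={disjunctive_screen(share, related_use)}")

# Only the first merger clears the conjunctive screen, while three of the
# four would clear the looser disjunctive alternative.
```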

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity and that, in some respects, the 1984 guidelines were better aligned with the EU approach than the new draft is. Losing that specificity in the draft guidelines is a step backward. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but argues that the 20 percent thresholds don’t reveal enough information. It would be helpful, she writes, “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”

Slade also takes issue with the way the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases in which it should not be.

For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. Vertical simulations are much more complex than horizontal ones, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how they view the critical differences between horizontal and vertical mergers. 

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan’s post focuses on what he refers to as “pure” vertical mergers, which do not raise concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of the speculative theories of harm that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers, 

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak’s post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm — for instance, the threat of regulatory evasion. He then points out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines’ attempt to clarify how the Agencies think about mergers and competition actually demonstrates how complex markets, related products, and dynamic competition are.

Fruits goes on to describe how the assumptions necessary to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even “obvious” markets and products a fraught exercise that devolves into a battle of experts. 

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is “arbitrarily low” given the generally procompetitive nature of vertical combinations. 

Instead, DOJ and the FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent threshold for challenges along with a post-merger Herfindahl-Hirschman Index (HHI) below 2000.
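
For readers unfamiliar with the measure, the HHI is computed by summing the squared market shares of every firm in the market; the figures below are hypothetical and serve only to illustrate the arithmetic:

$$\mathrm{HHI} = \sum_{i=1}^{N} s_i^{2},$$

where $s_i$ is firm $i$’s market share in percentage points. A market with four firms at 30, 30, 20, and 20 percent, for example, has an HHI of $30^2 + 30^2 + 20^2 + 20^2 = 2600$, above the 2000 figure cited above, while five equal firms at 20 percent each yield exactly $5 \times 20^2 = 2000$.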

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical merger guidelines miss a valuable opportunity to clarify speculative theories of harm based on “potential competition.” 

In particular, the draft guidelines should address the literature demonstrating that vertical acquisitions of small tech firms by large tech firms are largely complementary and procompetitive. Large tech firms tend to be good at process innovation while smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition. 

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. In particular, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold “safe harbor” will likely create a problematic “sufficient condition” for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines insofar as they lack an adequate market definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method for defining markets. 

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need to either repudiate this theory, or else more fully explain the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency and non-efficiency related goals are involved in a choice of integration methods. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while the Agencies have the opportunity to enforce in the case of either merger or contract, defendants can frequently realize efficiencies only in the case of merger. Therefore, calling for a contract/merger equivalency amounts to a preference for more enforcement per se, and is less solicitous of concerns about the loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for weighting enforcement so heavily against vertical mergers. 

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines’ focus on market share thresholds leads to an overly narrow view of competition. Instead of relying on static market analyses, the Agencies should consider a richer set of observations, including those that involve “organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.”

Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if the merger is erroneously prohibited before it even happens, that conduct never materializes, and there is no second bite at the apple to recover it.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2 of the Sherman Act, rather than addressing potential harms at their incipiency under Section 7 of the Clayton Act:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is also a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertical firms are by contract): Our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to efficiently contract with vertical buyers and sellers. But in cases where the valuable “know how” is less easily defended as IP — e.g. business process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible. 

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the very mention of innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.  

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.   

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics ); and Kristian Stout (Associate Director, ICLE).]

As many in the symposium have noted — and as was repeatedly noted during the FTC’s Hearings on Competition and Consumer Protection in the 21st Century — there is widespread dissatisfaction with the 1984 Non-Horizontal Merger Guidelines.

Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm.

We can do little better in expressing our reservations about whether new guidelines are needed than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled Revisions to the Merger Guidelines: Above All, Do No Harm, Simons writes:

My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.

Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.

What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or, at least, more clearly explain the complexity surrounding it): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.

Vertical mergers and their discontents

The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:

And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.

And later, in the discussion following his talk:

If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration. 

Salop thus argues that because the existence of a “contract solution” to firm problems can often generate the same sorts of efficiencies as when firms opt to merge, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:

Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger

In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)

(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).

In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.

The logic of vertical integration is not commutative 

It is true that, where contracts are observed, they are likely at least as efficient as (and often more efficient than) merger. But, by the same token, it is also true that where mergers are observed they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision. 

For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many efficiencies cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation may not be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!

There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different risk and cost allocation; accounting treatment; effect on employees, customers, and investors; tax consequence, etc. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.

Meanwhile, what if the reason for failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But the adoption of a presumption of equivalence between contract and merger would — ironically — entail their incorporation into antitrust law just the same — by virtue of their effective prohibition under antitrust law.

In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.

More fundamentally, however, while there are surely some situations in which contractual restraints might be able to achieve organizational and efficiency gains similar to those of a merger, the practical realities of achieving not just greater efficiency, but a whole host of non-efficiency-related, yet nonetheless valid, goals, are rarely equivalent between the two.

It may be that the parties don’t know what they don’t know to such an extent that a contract would be too costly because it would be too incomplete, for example. But incomplete contracts and ambiguous control and ownership rights aren’t (as much of) an issue on an ongoing basis after a merger. 

As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.

At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.

At the FTC hearings last year, Francine Lafontaine highlighted this exact concern:

I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.

Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.

More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is that incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why, to the other firm, a contract would be ineffective.

But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.

Conclusion

Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.

Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division).]

The DOJ/FTC Draft Vertical Merger Guidelines establish a “safe harbor” of a 20% market share for each of the merging parties. But the issue of defining the relevant “market” to which the 20% would apply is not well addressed.

Although reference is made to the market definition paradigm that is offered by the DOJ’s and FTC’s Horizontal Merger Guidelines (“HMGs”), what is neglected is the following: Under the “unilateral effects” theory of competitive harm of the HMGs, the horizontal merger of two firms that sell differentiated products that are imperfect substitutes could lead to significant price increases if the second-choice product for a significant fraction of each of the merging firms’ customers is sold by the partner firm. Such unilateral-effects instances are revealed by examining detailed sales and substitution data with respect to the customers of only the two merging firms.

In such instances, the true “relevant market” is simply the products that are sold by the two firms, and the merger is effectively a “2-to-1” merger. Under these circumstances, any apparently broader market (perhaps based on physical or functional similarities of products) is misleading, and the “market” shares of the merging parties that are based on that broader market are under-representations of the potential for their post-merger exercise of market power.

With a vertical merger, the potential for similar unilateral effects* would have to be captured by examining the detailed sales and substitution patterns of each of the merging firms with all of their significant horizontal competitors. This will require a substantial, data-intensive effort. And, of course, if this effort is not undertaken and an erroneously broader market is designated, the 20% “market” share threshold will understate the potential for competitive harm from a proposed vertical merger.

* With a vertical merger, such “unilateral effects” could arise post-merger in two ways: (a) The downstream partner could maintain a higher price, since some of the lost profits from some of the lost sales could be recaptured by the upstream partner’s profits on the sales of components to the downstream rivals (which gain some of the lost sales); and (b) the upstream partner could maintain a higher price to the downstream rivals, since some of the latter firms’ customers (and the concomitant profits) would be captured by the downstream partner.
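
To put mechanism (a) in stylized terms (the notation and numbers below are hypothetical and are not drawn from the guidelines): let $M_D$ be the downstream partner’s margin per unit, let $d$ be the fraction of sales lost after a downstream price increase that is diverted to rivals who buy the related input from the upstream partner, and let $M_U$ be the upstream margin earned per unit of those rivals’ output. The merged firm’s net loss per lost downstream sale is then roughly

$$M_D - d \, M_U$$

rather than $M_D$. With, say, $M_D = 10$, $d = 0.6$, and $M_U = 5$, the net loss per unit falls from 10 to 7, which weakens the downstream partner’s incentive to hold its price down.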

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jan Rybnicek (Counsel at Freshfields Bruckhaus Deringer US LLP in Washington, D.C. and Senior Fellow and Adjunct Professor at the Global Antitrust Institute at the Antonin Scalia Law School at George Mason University).]

In an area where it may seem that agreement is rare, there is near universal agreement on the benefits of withdrawing the DOJ’s 1984 Non-Horizontal Merger Guidelines. The 1984 Guidelines do not reflect current agency thinking on vertical mergers and are not relied upon by businesses or practitioners to anticipate how the agencies may review a vertical transaction. The more difficult question is whether the agencies should now replace the 1984 Guidelines and, if so, what the modern guidelines should say.

There are several important reasons that counsel against issuing new vertical merger guidelines (VMGs). Most significantly, we likely are better off without new VMGs because they invariably will (1) send the wrong message to agency staff about the relative importance of vertical merger enforcement compared to other agency priorities, (2) create new sufficient conditions that tend to trigger wasteful investigations and erroneous enforcement actions, and (3) add very little, if anything, to our understanding of when the agencies will or will not pursue an in-depth investigation or enforcement action of a vertical merger.

Unfortunately, these problems are magnified rather than mitigated by the draft VMGs. But it is unlikely at this point that the agencies will hit the brakes and not issue new VMGs. The agencies therefore should make several key changes that would help prevent the final VMGs from causing more harm than good.

What is the Purpose of Agency Guidelines? 

Before we can have a meaningful conversation about whether the draft VMGs are good or bad for the world, or how they can be improved to ensure they contribute positively to antitrust law, it is important to identify, and have a shared understanding about, the purpose of guidelines and their potential benefits.

In general, I am supportive of guidelines. In fact, I helped urge the FTC to issue its 2015 Policy Statement articulating the agency’s enforcement principles under its Section 5 Unfair Methods of Competition authority. As I have written before, guidelines can be useful if they accomplish two important goals: (1) provide insight and transparency to businesses and practitioners about the agencies’ analytical approach to an issue and (2) offer agency staff direction as to agency priorities while cabining the agencies’ broad discretion by tethering investigational or enforcement decisions to those guidelines. An additional benefit may be that the guidelines also could prove useful to courts interpreting or applying the antitrust laws.

Transparency is important for the obvious reason that it allows the business community and practitioners to know how the agencies will apply the antitrust laws and thereby allows them to evaluate if a specific merger or business arrangement is likely to receive scrutiny. But guidelines are not only consumed by the public. They also are used by agency staff. As a result, guidelines invariably influence how staff approaches a matter, including whether to open an investigation, how in-depth that investigation is, and whether to recommend an enforcement action. Lastly, for guidelines to be meaningful, they also must accurately reflect agency practice, which requires the agencies’ analysis to be tethered to an analytical framework.

As discussed below, there are many reasons to doubt that the draft VMGs can deliver on these goals.

Draft VMGs Will Lead to Bad Enforcement Policy While Providing Little Benefit

 A chief concern with VMGs is that they will inadvertently usher in a new enforcement regime that treats horizontal and vertical mergers as co-equal enforcement priorities despite the mountain of evidence, not to mention simple logic, that mergers among competitors are a significantly greater threat to competition than are vertical mergers. The draft VMGs exacerbate rather than mitigate this risk by creating a false equivalence between vertical and horizontal merger enforcement and by establishing new minimum conditions that are likely to lead the agencies to pursue wasteful investigations of vertical transactions. And the draft VMGs do all this without meaningfully advancing our understanding of the conditions under which the agencies are likely to pursue investigations and enforcement against vertical mergers.

1. No Recognition of the Differences Between Horizontal and Vertical Mergers

One striking feature of the draft VMGs is that they fail to contextualize vertical mergers in the broader antitrust landscape. As a result, it is easy to walk away from the draft VMGs with the impression that vertical mergers are as likely to lead to anticompetitive harm as are horizontal mergers. That is a position not supported by the economic evidence or logic. It is of course true that vertical mergers can result in competitive harm; that is not a seriously contested point. But it is important to acknowledge and provide background for why that harm is significantly less likely than in horizontal cases. That difference should inform agency enforcement priorities. Perhaps due to this lack of framing, the draft VMGs tend to speak more about when the agencies may identify competitive harm than about when they will not.

The draft VMGs would benefit greatly from a more comprehensive approach to understanding vertical merger transactions. The agencies should add language explaining that, whereas a consensus exists that eliminating a direct competitor always tends to increase the risk of unilateral effects (although often trivially), there is no such consensus that harm will result from the combination of complementary assets. In fact, the current evidence shows such vertical transactions tend to be procompetitive. Absent such language, the VMGs will over time misguidedly devote more agency resources to investigating vertical mergers where harm is unlikely (with inevitably more enforcement errors) and less time to more important priorities, such as pursuing enforcement against anticompetitive horizontal transactions.

2. The 20% Safe Harbor Provides No Harbor and Will Become a Sufficient Condition

The draft VMGs attempt to provide businesses with guidance about the types of transactions the agencies will not investigate by articulating a market share safe harbor. But that safe harbor (1) does not appear to be grounded in any evidence, (2) is surprisingly low in comparison to the EU vertical merger guidelines, and (3) is likely to become a sufficient condition for triggering an in-depth investigation or enforcement action. 

The draft VMGs state:

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20%, and the related product is used in less than 20% of the relevant market.

But in the very next sentence the draft VMGs render the safe harbor virtually meaningless, stating:

In some circumstances, mergers with shares below the threshold can give rise to competitive concerns.

This caveat comes despite the fact that the 20% threshold is low compared to other jurisdictions. Indeed, the EU’s guidelines create a 30% safe harbor. Nor is it clear what the basis is for the 20% threshold, either in economics or law. While it is important for the agencies to remain flexible, too much flexibility will render the draft VMGs meaningless. The draft VMGs should be less equivocal about the types of mergers that will not receive significant scrutiny and are unlikely to be the subject of enforcement action.

What may be most troubling about the market share safe harbor is the likelihood that it will establish general enforcement norms that did not previously exist. Despite language stating otherwise, agency staff are likely to come to treat the 20% market share threshold as a de facto trigger, with transactions above it presumptively warranting an in-depth investigation and, potentially, an enforcement action. We have seen other tools introduced by guidelines take on a similar life of their own in agency analysis (see, e.g., GUPPIs). This risk is only exacerbated where the safe harbor is not a true safe harbor that provides businesses with clarity on enforcement priorities.
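For readers unfamiliar with the term, the gross upward pricing pressure index used in horizontal unilateral-effects screening is conventionally written as follows (this is the standard formulation from the economics literature, not anything drawn from the draft VMGs):

\[
\mathrm{GUPPI}_1 = D_{12} \times \frac{P_2 - C_2}{P_1},
\]

where \(D_{12}\) is the diversion ratio from product 1 to product 2 and \(P_2 - C_2\) is product 2’s margin. The relevant lesson is not the formula itself but that, once a quantitative benchmark of this kind appears in guidance, agency practice tends to organize itself around it.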

3. Requirements for Proving EDM and Efficiencies Fail to Recognize the Vertical Merger Context

The draft VMGs minimize the significant role of the elimination of double marginalization (EDM) and efficiencies in vertical mergers. The agencies frequently take a skeptical approach to efficiencies in the context of horizontal mergers, and it is well known that the burden of substantiating efficiencies is difficult, if not impossible, to meet. The draft VMGs oddly continue this skeptical approach by specifically referencing the efficiencies standards of the horizontal merger guidelines when discussing EDM and vertical merger efficiencies. The draft VMGs do not recognize that a combination of complementary products is inherently more likely to generate efficiencies than is a horizontal merger between competitors. The draft VMGs also oddly discuss EDM and efficiencies in separate sections and devote only trivial attention to what is the core motivating feature of vertical mergers. Even the discussion of EDM is as much about possible exceptions to EDM as it is about making clear the uncontroversial view that EDM is frequent in vertical transactions. Without acknowledging how inherent EDM and efficiencies are to vertical transactions, the final VMGs will send the wrong message that vertical merger enforcement should be on par with horizontal merger enforcement.
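To see why EDM is treated as the default expectation for vertical transactions, consider a minimal numerical sketch. The linear demand curve and cost figures below are purely illustrative and are not drawn from the draft VMGs; suppose downstream demand is \(Q = 100 - p\) and the upstream marginal cost is \(c = 20\):

\[
\begin{aligned}
\text{Pre-merger: } & w = 60 \text{ (the upstream firm’s optimal wholesale price)}, \quad p = \tfrac{100 + w}{2} = 80, \quad Q = 20,\\
& \text{combined profit} = (60 - 20)\cdot 20 + (80 - 60)\cdot 20 = 1{,}200;\\
\text{Post-merger: } & \text{the integrated firm prices against } c = 20, \quad p = \tfrac{100 + 20}{2} = 60, \quad Q = 40,\\
& \text{profit} = (60 - 20)\cdot 40 = 1{,}600.
\end{aligned}
\]

Removing the second markup lowers the downstream price, expands output, and raises the merged firm’s profit, which is why EDM is ordinarily expected, rather than exceptional, when complements are combined.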

4. No New Insights into How Agencies Will Assess Vertical Mergers

Some might argue that the costs associated with the draft VMGs are nevertheless tolerable because the guidelines offer benefits that far outweigh those costs. But that is not the case here. The draft VMGs provide no new information about how the agencies will review vertical merger transactions or about the circumstances under which they are likely to seek enforcement actions. That is because identifying any such general guiding principles is a difficult, if not impossible, task. Indeed, unlike in horizontal transactions, where an increase in market power informs our thinking about likely competitive effects, greater market power in a vertical transaction that combines complements creates downward pricing pressure that often will dominate any potential competitive harm.

The draft VMGs do what they can, though, which is to describe in general terms several theories of harm. But the benefits from that exercise are modest and do not outweigh the significant risks discussed above. The theories described are neither novel nor unknown to the public today. Nor do the draft VMGs explain any significant new thinking on vertical mergers, likely because there has been none that could provide insight into general enforcement principles. The draft VMGs also do not clarify changes to statutory text (because it has not changed), nor do they clarify judicial rulings or past enforcement actions. As a result, the draft VMGs do not offer benefits sufficient to outweigh their substantial costs.

Conclusion

Despite these concerns, it is worth acknowledging the work the FTC and DOJ have put into preparing the draft VMGs. It is no small task to articulate a unified position between the two agencies on an issue such as vertical merger enforcement where so many have such strong views. To the agencies’ credit, the VMGs are restrained in not including novel or more adventurous theories of harm. I anticipate the DOJ and FTC will engage with commentators and take the feedback seriously as they work to improve the final VMGs.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Scott Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati).]

On January 10, 2020, the United States Department of Justice (“DOJ”) and the Federal Trade Commission (“FTC”) (collectively, “the Agencies”) released their joint draft guidelines outlining their “principal analytical techniques, practices and enforcement policy” with respect to vertical mergers (“Draft Guidelines”). While the Draft Guidelines describe and formalize the Agencies’ existing approaches when investigating vertical mergers, they leave several policy questions unanswered. In particular, the Draft Guidelines do not address how the Agencies might approach the issue of acquisition of potential or nascent competitors through vertical mergers. As many technology mergers are motivated by the desire to enter new industries or add new tools or features to an existing platform (i.e., the Buy-Versus-Build dilemma), the omission leaves a significant hole in the Agencies’ enforcement policy agenda, and leaves the tech industry, in particular, without adequate guidance as to how the Agencies may address these issues.

This is notable, given that the Horizontal Merger Guidelines explicitly address potential competition theories of harm (e.g., at § 1 (referencing mergers and acquisitions “involving actual or potential competitors”) and § 2 (“The Agencies consider whether the merging firms have been, or likely will become absent the merger, substantial head-to-head competitors.”)). Indeed, the Agencies have recently challenged several proposed horizontal mergers based on nascent competition theories of harm.

Further, there has been much debate regarding whether increased antitrust scrutiny of vertical acquisitions of nascent competitors, particularly in technology markets, is warranted (See, e.g., Open Markets Institute, The Urgent Need for Strong Vertical Merger Guidelines (“Enforcers should be vigilant toward dominant platforms’ acquisitions of seemingly small or marginal firms and be ready to block acquisitions that may be part of a monopoly protection strategy. Dominant firms should not be permitted to expand through vertical acquisitions and cut off budding threats before they have a chance to bloom.”); Caroline Holland, Taking on Big Tech Through Merger Enforcement (“Vertical mergers that create market power capable of stifling competition could be particularly pernicious when it comes to digital platforms.”)). 

Thus, further policy guidance from the Agencies on this issue is needed. As the Agencies formulate guidance, they should take note that vertical mergers involving technology start-ups generally promote efficiency and innovation, and that any potential competitive harm almost always can be addressed with easy-to-implement behavioral remedies.

The agencies’ draft vertical merger guidelines

The Draft Guidelines outline the following principles that the Agencies will apply when analyzing vertical mergers:

  • Market definition. The Agencies will identify a relevant market and one or more “related products.” (§ 2) This is a product that is supplied by the merged firm, is vertically related to the product in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market. (§ 2)
  • Safe harbor. Unlike horizontal merger cases, the Agencies cannot rely on changes in concentration in the relevant market as a screen for competitive effects. Instead, the Agencies consider measures of the competitive significance of the related product. (§ 3) The Draft Guidelines propose a safe harbor, stating that the Agencies are unlikely to challenge a vertical merger “where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” (§ 3) However, shares exceeding the thresholds, taken alone, do not support an inference that the vertical merger is anticompetitive. (§ 3)
  • Theories of unilateral harm. Vertical mergers can result in unilateral competitive effects, including raising rivals’ costs (charging rivals in the relevant market a higher price for the related product) or foreclosure (refusing to supply rivals with the related product altogether). (§ 5.a) Another potential unilateral effect is access to competitively sensitive information: The combined firm may, through the acquisition, gain access to sensitive business information about its upstream or downstream rivals that was unavailable to it before the merger (for example, a downstream rival of the merged firm may have been a premerger customer of the upstream merging party). (§ 5.b)
  • Theories of coordinated harm. Vertical mergers can also increase the likelihood of post-merger coordinated interaction. For example, a vertical merger might eliminate or hobble a maverick firm that would otherwise play an important role in limiting anticompetitive coordination. (§ 7)
  • Procompetitive effects. Vertical mergers can have procompetitive effects, such as the elimination of double marginalization (“EDM”). A merger of vertically related firms can create an incentive for the combined entity to lower prices on the downstream product, because it will capture the additional margins from increased sales on the upstream product. (§ 6) EDM thus may benefit both the merged firm and buyers of the downstream product. (§ 6)
  • Efficiencies. Vertical mergers have the potential to create cognizable efficiencies; the Agencies will evaluate such efficiencies using the standards set out in the Horizontal Merger Guidelines. (§ 8)

Implications for vertical mergers involving nascent start-ups

At present, the Draft Guidelines do not address theories of nascent or potential competition. To the extent the Agencies provide further guidance regarding the treatment of vertical mergers involving nascent start-ups, they should take note of the following facts:

First, empirical evidence from strategy literature indicates that technology-related vertical mergers are likely to be efficiency-enhancing. In a survey of the strategy literature on vertical integration, Professor D. Daniel Sokol observed that vertical acquisitions involving technology start-ups are “largely complementary, combining the strengths of the acquiring firm in process innovation with the product innovation of the target firms.” (p. 1372) The literature shows that larger firms tend to be relatively poor at developing new and improved products outside of their core expertise, but are relatively strong at process innovation (developing new and improved methods of production, distribution, support, and the like). (Sokol, p. 1373) Larger firms need acquisitions to help with innovation; acquisition is more efficient than attempting to innovate through internal efforts. (Sokol, p. 1373)

Second, vertical merger policy towards nascent competitor acquisitions has important implications for the rate of start-up formation, and the innovation that results. Entrepreneurship in technology markets is motivated by the opportunity for commercialization and exit. (Sokol, p. 1362 (“[T]he purpose of such investment [in start-ups] is to reap the rewards of scaling a venture to exit.”))

In recent years, as IPO activity has declined, vertical mergers have become the default method of entrepreneurial exit. (Sokol, p. 1376) Increased vertical merger enforcement against start-up acquisitions thus closes off the primary exit strategy for entrepreneurs. As Prof. Sokol concluded in his study of vertical mergers:

When antitrust agencies, judges, and legislators limit the possibility of vertical mergers as an exit strategy for start-up firms, it creates risk for innovation and entrepreneurship…. it threatens entrepreneurial exits, particularly for tech companies whose very business model is premised upon vertical mergers for purposes of a liquidity event. (p. 1377)

Third, to the extent that the vertical acquisition of a start-up raises competitive concerns, a behavioral remedy is usually preferable to a structural one. As explained above, vertical acquisitions typically result in substantial efficiencies, and these efficiencies are likely to overwhelm any potential competitive harm. Further, a structural remedy is likely infeasible in the case of a start-up acquisition. Thus, behavioral relief is the only way of preserving the deal’s efficiencies while remedying the potential competitive harm. (The Agencies have recognized as much; see DOJ Antitrust Division, Policy Guide to Merger Remedies, p. 20 (“Stand-alone conduct relief is only appropriate when a full-stop prohibition of the merger would sacrifice significant efficiencies and a structural remedy would similarly eliminate such efficiencies or is simply infeasible.”)) Appropriate behavioral remedies for vertical acquisitions of start-ups would include firewalls (restricting the flow of competitively sensitive information between the upstream and downstream units of the combined firm) or a fair dealing or non-discrimination remedy (requiring the merging firm to supply an input or grant customer access to competitors in a non-discriminatory way), with clear benchmarks to ensure compliance. (See Policy Guide to Merger Remedies, pp. 22-24.)

To be sure, some vertical mergers may cause harm to competition, and there should be enforcement when the facts justify it. But vertical mergers involving technology start-ups generally enhance efficiency and promote innovation. Antitrust’s goals of promoting competition and innovation are thus best served by taking a measured approach towards vertical mergers involving technology start-ups. (Sokol, pp. 1362–63) (“Thus, a general inference that makes vertical acquisitions, particularly in tech, more difficult to approve leads to direct contravention of antitrust’s role in promoting competition and innovation.”)

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Sharis Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division); with Timothy Cornell (Partner, Clifford Chance); Brian Concklin (Counsel, Clifford Chance); and Michael Van Arsdall (Counsel, Clifford Chance).]

The draft Vertical Merger Guidelines (“Guidelines”) miss a real opportunity to provide businesses with consistent guidance across jurisdictions and to harmonize the international approach to vertical merger review.

As drafted, the Guidelines indicate the agencies will evaluate market shares and concentration — measured using the same methodology described in the long-standing Horizontal Merger Guidelines — but not use these metrics as a “rigid screen.” On that basis the Guidelines establish a “soft” 20 percent threshold, where the U.S. Agencies are “unlikely to challenge a vertical merger” if the merging parties have less than a 20 percent share of the relevant market and the related product is used in less than 20 percent of the relevant market.

We suggest, instead, that the Guidelines be aligned with those of other jurisdictions, namely the EU non-horizontal merger guidelines [for an extended discussion of which, see Bill Kolasky’s symposium post here —ed.]. The European Commission’s guidelines state that the Commission is “unlikely to find concern” with a vertical merger where the merged entity’s post-merger share in each relevant market is below 30 percent and the post-merger HHI is below 2,000. Japan and Chile, among others, set similarly higher bars than the draft Guidelines. A discrepancy between the U.S. guidelines and those of other jurisdictions creates unnecessary uncertainty for the business and legal communities and could lead to inconsistent enforcement outcomes.

In any event, beyond the dangers created by a lack of international harmonization, setting the threshold at 20 percent seems arbitrarily low given the inherently procompetitive nature of most vertical combinations, and it could result in false positives, undue cost, and delay.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics and Professor of Economics, Portland State University).]

Vertical mergers are messy. They’re messy for the merging firms and they’re especially messy for regulators charged with advancing competition without advantaging competitors. Firms rarely undertake a vertical merger with an eye toward monopolizing a market. Nevertheless, competitors and competition authorities excel at conjuring up complex models that reveal potentially harmful consequences stemming from vertical mergers. In their post, Gregory J. Werden and Luke M. Froeb highlight the challenges in evaluating vertical mergers:

[V]ertical mergers produce anticompetitive effects only through indirect mechanisms with many moving parts, which makes the prediction of competitive effects from vertical mergers more complex and less certain.  

There’s a recurring theme throughout this symposium: The current Vertical Merger Guidelines should be updated; the draft Guidelines are a good start, but they raise more questions than they answer. Other symposium posts have hit on the key ups and downs of the draft Guidelines. 

In this post, I use the draft Guidelines’ examples to highlight how messy vertical mergers can be. The draft Guidelines’ examples are meant to clarify the government’s thinking on markets and mergers. In the end, however, they demonstrate the complexity in identifying relevant markets, related products, and the dynamic interaction of competition. I will focus on two examples provided in the draft Guidelines. Warning: you’re going to read a lot about oranges.

In the following example from the draft Guidelines, the relevant market is the wholesale supply of orange juice in region X, and Company B’s supply of oranges is the related product:

Example 2: Company A is a wholesale supplier of orange juice. It seeks to acquire Company B, an owner of orange orchards. The Agencies may consider whether the merger would lessen competition in the wholesale supply of orange juice in region X (the relevant market). The Agencies may identify Company B’s supply of oranges as the related product. Company B’s oranges are used in fifteen percent of the sales in the relevant market for wholesale supply of orange juice. The Agencies may consider the share of fifteen percent as one indicator of the competitive significance of the related product to participants in the relevant market.

Consider one hypothetical structure. Company B supplies an equal amount of oranges to Company A and two other wholesalers, C and D, together accounting for 15 percent of orange juice sales in region X; orchards owned by others account for the remaining 85 percent. For the sake of argument, assume there is also a fourth wholesaler, E, that buys nothing from B, and that all four wholesalers are the same size. In that case, Company B’s orchard would supply 20 percent of the oranges used by each of wholesalers A, C, and D.
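The shares can be checked with a minimal sketch in code; the specific point allocations below are illustrative assumptions implied by the percentages in the example, not figures from the draft Guidelines.

```python
# Hypothetical structure: four equally sized wholesalers, measured in
# percentage points of orange juice sales in region X.
wholesalers = ["A", "C", "D", "E"]
wholesaler_share = 25                 # each wholesaler sells 25% of the juice

# Orchard B supplies an equal volume to A, C, and D (5 points each, 15 total);
# all other orchards supply the remaining 85 points.
from_b = {"A": 5, "C": 5, "D": 5, "E": 0}
from_others = {w: wholesaler_share - from_b[w] for w in wholesalers}

assert sum(from_b.values()) == 15     # B's oranges back 15% of juice sales
assert sum(from_others.values()) == 85

for w in wholesalers:
    print(f"{w}: {100 * from_b[w] / wholesaler_share:.0f}% of its oranges come from B")
# A, C, D -> 20%; E -> 0%
```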

Orange juice sold in a particular region is just one of many uses for oranges. The juice can be sold as fresh liquid, liquid from concentrate, or frozen concentrate. The fruit can be sold as fresh produce or it can be canned, frozen, or processed into marmalade. Many of these products can be sold outside of a particular region and can be sold outside of the United States. This is important in considering the next example from the draft Guidelines.

Example 3: In Example 2, the merged firm may be able to profitably stop supplying oranges (the related product) to rival orange juice suppliers (in the relevant market). The merged firm will lose the margin on the foregone sales of oranges but may benefit from increased sales of orange juice if foreclosed rivals would lose sales, and some of those sales were diverted to the merged firm. If the benefits outweighed the costs, the merged firm would find it profitable to foreclose. If the likely effect of the foreclosure were to substantially lessen competition in the orange juice market, the merger potentially raises significant competitive concerns and may warrant scrutiny.

This is the classic example of raising rivals’ costs. Under the standard formulation, the merged firm will produce oranges at the orchard’s marginal cost — in theory, the price it pays for oranges would be the same both pre- and post-merger. If orchard B does not sell its oranges to the non-integrated wholesalers C, D, and E, the other orchards will be able to charge a price greater than their marginal cost of production and greater than the pre-merger market price for oranges. The higher price of oranges used by non-integrated wholesalers will then be reflected in higher prices for orange juice sold by the wholesalers. 

The merged firm’s juice prices will be higher post-merger because its unintegrated rivals’ juice prices will be higher, thus increasing the merged firm’s profits. The merged firm and unintegrated orchards would be the “winners;” unintegrated wholesalers and consumers would be the “losers.” Under a consumer welfare standard the result could be deemed anticompetitive. Under a total welfare standard, anything goes.
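One stylized way to write the tradeoff described in Example 3 (the notation here is ours, not the Guidelines’) is that foreclosure pays only if

\[
\underbrace{\rho \,\Delta Q_{\text{rivals}}\, m_{\text{juice}}}_{\text{margin gained on diverted juice sales}} \;>\; \underbrace{Q_{\text{oranges}}\, m_{\text{oranges}}}_{\text{margin lost on foregone orange sales}},
\]

where \(\rho\) is the share of rivals’ lost juice sales diverted to the merged firm, \(\Delta Q_{\text{rivals}}\) is the juice output rivals lose, \(Q_{\text{oranges}}\) is the volume of foregone orange sales, and the \(m\) terms are the relevant per-unit margins. Note that if, as in the standard formulation above, oranges were priced at roughly marginal cost before the merger, the right-hand side is close to zero and foreclosure looks profitable almost by construction.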

But the classic example of raising rivals’ costs is based on some strong assumptions. It assumes that, pre-merger, all upstream firms price at marginal cost, which means there is no double marginalization. It assumes all the upstream firms’ products are perfectly identical. It assumes unintegrated firms don’t respond by integrating themselves. If one or more of these assumptions is not correct, more complex models — with additional (potentially unprovable) assumptions — must be employed. What begins as a seemingly straightforward theoretical example becomes a battle over which expert’s models best fit the facts and best predict the likely outcome.

In the draft Guidelines’ raising rivals’ costs example, it’s assumed the merged firm would refuse to sell oranges to rival downstream wholesalers. However, if rival orchards charge a sufficiently high price, the merged firm would profit from undercutting its rivals’ orange prices while still charging a price greater than marginal cost. Thus, it’s not obvious that the merged firm has an incentive to cut off supply to downstream competitors. The extent of the pressure on the merged firm to deviate from its own foreclosure strategy is an empirical matter that depends on how upstream and downstream firms react, or might react.

For example, using the structure above, if the merged firm stopped supplying oranges to rival wholesalers, then the merged firm’s orchard would supply 60 percent of the oranges used in the firm’s juice. Although wholesalers C and D would not get oranges from B’s orchards, they could obtain oranges from other orchards that are no longer supplying wholesaler A. In this case, the merged firm’s attempt at foreclosure would have no effect and there would be no harm to competition.
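Continuing the hypothetical structure sketched above, the reallocation described here can be verified directly (again, the point allocations are illustrative assumptions):

```python
# Post-foreclosure sourcing for the merged firm (A), in percentage points of
# region X juice sales.
a_share = 25                          # A still sells 25% of the region's juice
a_from_b = 15                         # A now takes all of B's oranges
a_from_others = a_share - a_from_b    # only 10 points needed from other orchards

print(f"B supplies {100 * a_from_b / a_share:.0f}% of the merged firm's oranges")  # 60%

# Pre-merger, A bought 20 points from other orchards; post-merger it needs 10,
# freeing 10 points that exactly replace the 5 points each that C and D lose.
freed_from_a = 20 - a_from_others
assert freed_from_a == 5 + 5
```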

It’s possible the merged firm would divert some or all of its oranges to a “secondary” market, removing those oranges from the juice market. Rather than juicing oranges, the merged firm may decide to sell them as fresh produce; fresh citrus fruits account for 7 percent of Florida’s crop and 75 percent of California’s. This diversion would lead to a decline in the supply of oranges for juice and the price of this key input would rise.

But, as noted in the Guidelines’ example, this strategy would raise the merged firm’s costs along with its rivals’. Moreover, rival orchards can respond to this strategy by diverting their own oranges from “secondary” markets to the juice market, in which case there may be no significant effect on the price of juice oranges. What begins as a seemingly straightforward theoretical example is now a complicated empirical matter. Or worse, it may just be a battle over which expert is the most convincing fortune teller.

Moreover, the merged firm may have legitimate business reasons for the merger and legitimate business reasons for reducing the supply of oranges to juice wholesalers. For example, “citrus greening,” an incurable bacterial disease, has caused severe damage to Florida’s citrus industry, significantly reducing crop yields. A vertical merger could be one way to reduce supply risks. On the demand side, an increase in the demand for fresh oranges would guide firms to shift from juice and processed markets to the fresh market. What some would see as anticompetitive conduct, others would see as a natural and expected response to price signals.

Because of the many alternative uses for oranges, it’s overly simplistic to declare that the supply of orange juice in a specific region is “the” relevant market. Orchards face a myriad of options in selling their products. Misshapen fruit can be juiced fresh or as frozen concentrate; smaller fruit can be canned or jellied. “Perfect” fruit can be sold as fresh produce, juice, canned, or jellied. Vertical integration with a juice wholesaler adds just one factor to the myriad factors affecting how and where an upstream supplier sells its products. Just as there is no single relevant market, in many cases there is no single related product — a fact that is especially relevant in vertical relationships. Unfortunately, the draft Guidelines provide little guidance in these important areas.