Much has already been said about the twin antitrust suits filed by Epic Games against Apple and Google. For those who are not familiar with the cases, the game developer – most famous for its hit title Fortnite and the “Unreal Engine” that underpins much of the game (and movie) industry – is complaining that Apple and Google are thwarting competition from rival app stores and in-app payment processors.
Supporters have been quick to see in these suits a long-overdue challenge against the 30% commissions that Apple and Google charge. Some have even portrayed Epic as a modern-day Robin Hood, leading the fight against Big Tech to the benefit of small app developers and consumers alike. Epic itself has been keen to stoke this image, comparing its litigation to a fight for basic freedoms in the face of Big Brother.
However, upon closer inspection, cracks rapidly appear in this rosy picture. What is left is a company partaking in blatant rent-seeking that threatens to harm the sprawling ecosystems that have emerged around both Apple and Google’s app stores.
Two issues are particularly salient. First, Epic is trying to protect its own interests at the expense of the broader industry. If successful, its suit would merely lead to alternative revenue schemes that – although more beneficial to itself – would leave smaller developers to shoulder higher fees. Second, the fees that Epic portrays as extortionate were in fact key to the emergence of mobile gaming.
Epic’s utopia is not an equilibrium
Central to Epic’s claims is the idea that Apple and Google both (i) thwart competition from rival app stores, implementing a series of measures that prevent developers from reaching gamers through alternative means (such as pre-installing apps or, in the case of Apple’s platforms, sideloading them); and (ii) tie their proprietary payment processing services to their app stores. According to Epic, this ultimately enables both Apple and Google to extract “extortionate” commissions (30%) from app developers.
But Epic’s whole case rests on the unrealistic assumption that Apple and Google will sit idly by while rival app stores and payment systems free-ride on the vast investments they have ploughed into their respective smartphone platforms. In other words, removing Apple and Google’s ability to charge commissions on in-app purchases would not prevent them from monetizing their platforms elsewhere.
Indeed, economic and strategic management theory tells us that so long as Apple and Google single-handedly control one of the necessary points of access to their respective ecosystems, they should be able to extract a sizable share of the revenue generated on their platforms. One can only speculate, but it is easy to imagine Apple and Google charging rival app stores for access to their respective platforms, or charging developers for access to critical APIs.
Epic itself seems to concede this point. In a recent Verge article, it argued that Apple was threatening to cut off its access to iOS and Mac developer tools, which Apple currently offers at little to no cost:
Apple will terminate Epic’s inclusion in the Apple Developer Program, a membership that’s necessary to distribute apps on iOS devices or use Apple developer tools, if the company does not “cure your breaches” to the agreement within two weeks, according to a letter from Apple that was shared by Epic. Epic won’t be able to notarize Mac apps either, a process that could make installing Epic’s software more difficult or block it altogether. Apple requires that all apps are notarized before they can be run on newer versions of macOS, even if they’re distributed outside the App Store.
There is little to prevent Apple from more heavily monetizing these tools – should Epic’s antitrust case successfully prevent it from charging commissions via its app store.
All of this raises the question: why is Epic bringing a suit that, if successful, would merely result in the emergence of alternative fee schedules (as opposed to a significant reduction in the overall fees paid by developers)?
One potential answer is that the current system is highly favorable to small developers whose apps earn little to no revenue from in-app purchases and who benefit most from the trust created by Apple and Google’s curation of their stores. It is, however, much less favorable to developers like Epic, which no longer require any curation to garner the necessary trust from consumers and which earn a large share of their revenue from in-app purchases.
In more technical terms, the fact that all in-game payments are made through Apple and Google’s payment processing enables both platforms to more easily price-discriminate. Unlike fixed fees (but just like royalties), percentage commissions are necessarily state-contingent (i.e. the same commission will lead to vastly different revenue depending on an underlying app’s success). The most successful apps thus contribute far more to a platform’s fixed costs. For instance, it is estimated that mobile games account for 72% of all app store spend. Likewise, more than 80% of the apps on Apple’s store pay no commission at all.
This likely expands app store output by getting lower value developers on board. In that sense, it is akin to Ramsey pricing (where a firm/utility expands social welfare by allocating a higher share of fixed costs to the most inelastic consumers). Unfortunately, this would be much harder to accomplish if high value developers could easily bypass Apple or Google’s payment systems.
The bottom line is that Epic appears to be fighting to change Apple and Google’s app store business models in order to obtain fee schedules that are better aligned with its own interests. This is all the more important for Epic Games, given that mobile gaming is becoming increasingly popular relative to other gaming mediums (also here).
The emergence of new gaming platforms
Up to this point, I have mostly presented a zero-sum view of Epic’s lawsuit — i.e. developers and platforms fighting over the distribution of app store profits (though some smaller developers may lose out). But this ignores what is likely the chief virtue of Apple and Google’s “closed” distribution model: it has greatly expanded the market for mobile gaming (and other mobile software), and will likely continue to do so in the future.
Much has already been said about the significant security and trust benefits that Apple and Google’s curation of their app stores (including their control of in-app payments) provide to users. Benedict Evans and Ben Thompson have both written excellent pieces on this very topic.
In a nutshell, the closed model allows previously unknown developers to expand rapidly because (i) users need not fear that their apps contain some form of malware, and (ii) it greatly reduces payment frictions, most notably security-related ones. But while these are indeed tremendous benefits, another important upside seems to have gone relatively unnoticed.
The “closed” business model also gives Apple and Google (as well as other platforms) significant incentives to develop new distribution mediums (smart TVs spring to mind) and improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.
The economics of two-sided markets are enlightening in this respect. Apple and Google’s stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks”. That is, they compete aggressively (amongst themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users (note, however, that in the case at hand the incidence of those platform fees is unclear).
This gives platforms significant incentives to continuously attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video and games was one of the driving forces behind the launch of the iPad.
This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms (as Epic Games is seeking to do).
In response, some commentators have countered that platforms may use their strong market positions to squeeze developers, thereby undermining software investments. But such a course of action may ultimately be self-defeating. For instance, writing about retail platforms imitating third-party sellers, Andrei Hagiu, Tat-How Teh and Julian Wright have argued that:
[T]he platform has an incentive to commit itself not to imitate highly innovative third-party products in order to preserve their incentives to innovate.
Seen in this light, Apple and Google’s 30% commissions function as a soft commitment not to expropriate developers, leaving them with a sizable share of the revenue generated on each platform. This may explain why the 30% commission has become a standard in the games industry (and beyond).
Furthermore, from an evolutionary perspective, it is hard to argue that the 30% commission is somehow extortionate. If game developers were systematically expropriated, the gaming industry — in particular its mobile segment — would not have grown so dramatically in recent years.
All of this likely explains why a recent survey found that 81% of app developers believed regulatory intervention would be misguided:
81% of developers and publishers believe that the relationship between them and platforms is best handled within the industry, rather than through government intervention. Competition and choice mean that developers will use platforms that they work with best.
The upshot is that the “closed” model employed by Apple and Google has served the gaming industry well. There is little compelling reason to overhaul that model today.
When all is said and done, there is no escaping the fact that Epic Games is currently playing a high-stakes rent-seeking game. As Apple noted in its opposition to Epic’s motion for a temporary restraining order:
Epic did not, and has not, contested that it is in breach of the App Store Guidelines and the License Agreement. Epic’s plan was to violate the agreements intentionally in order to manufacture an emergency. The moment Fortnite was removed from the App Store, Epic launched an extensive PR smear campaign against Apple and a litigation plan was orchestrated to the minute; within hours, Epic had filed a 56-page complaint, and within a few days, filed nearly 200 pages with this Court in a pre-packaged “emergency” motion. And just yesterday, it even sought to leverage its request to this Court for a sales promotion, announcing a “#FreeFortniteCup” to take place on August 23, inviting players for one last “Battle Royale” across “all platforms” this Sunday, with prizes targeting Apple.
Epic is ultimately seeking to introduce its own app store on both Apple and Google’s platforms, or at least bypass their payment processing services (as Spotify is seeking to do in the EU).
Unfortunately, as this post has argued, condoning this type of free-riding could prove highly detrimental to the entire mobile software industry. Smaller companies would almost inevitably be left to foot a larger share of the bill, existing platforms would become less secure, and the development of new ones could be hindered. At the end of the day, 30% might actually be a small price to pay.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Dirk Auer (Senior Fellow of Law & Economics, ICLE)]
Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).
Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.
The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:
And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.
That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.
* * *
Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.
The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient.
Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies:
Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.
Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).
Unsurprisingly, politicians were also quick to jump on the bandwagon. David Cicilline, the powerful chairman of the House Antitrust Subcommittee, opined that:
The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.
These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?
Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.
What is a “killer acquisition”…?
Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.
For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:
“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.
Moreover, the authors add that:
Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur.
Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:
If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.
…And what isn’t a killer acquisition?
What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater returns and productivity from the target’s assets. In such a case, this can mean shutting down a negative-ROI project and redeploying resources to other projects or other uses — including those with no direct relation to the discontinued project.
Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.
In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.
As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.
The realities of the ventilator market and their implications for the “killer acquisition” story
1. The mechanical ventilator market is highly competitive
As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive.
Indeed, market research reports describe competition in the medical ventilator market as intense.
The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position.
Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, the authorities (particularly the European Commission) could nevertheless have analyzed that market had Covidien’s presumptive market share been particularly high. The fact that they declined to do so suggests that the ventilator market was relatively unconcentrated.
2. The value of the merger was too small
A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the acquisition’s value of $103 million.
Indeed, if it had been clear that Newport was about to revolutionize the ventilator market, Covidien would likely have had to pay significantly more than $103 million to acquire it.
As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
As one scholarly discussion of startup acquisitions puts it:

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.
The strategy only works, however, if the target firm’s shareholders agree that share value properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Relatively low acquisition prices relative to market size, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.
We can apply this reasoning to Covidien’s acquisition of Newport:
Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out).
For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”
If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market).
The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — would have known that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price.
Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.
“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”
If those plans were achievable, Newport stood to earn a substantial share of the profits in a multi-billion-dollar industry.
Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, Covidien would surely have had to offer significantly more than $103 million to induce Newport’s shareholders to part with their shares.
Given the low valuation, however, as well as the fact that Newport produced other ventilators (and continues to do so to this day), there is no escaping the conclusion that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success.
Crucially, this same reasoning explains why it should not surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.
3. Lessons from Covidien’s ventilator product decisions
The killer acquisition claims are further weakened by at least four other important pieces of information:

• Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators;
• there was little overlap between Covidien’s and Newport’s ventilators — or, at the very least, they were highly differentiated;
• Covidien appears to have discontinued production of its own portable ventilator in 2014; and
• the Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., not portable) device portfolio.
Covidien continued to develop and sell Newport’s ventilators
For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.
However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.
It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted).
Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.
Covidien continued to develop and sell Newport’s other ventilators
Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.
If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them?
At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.
There was little overlap between Covidien’s and Newport’s ventilators
Third — and this is perhaps the biggest flaw in the killer acquisition story — there appears to have been very little overlap between Covidien’s and Newport’s ventilators.
This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:
Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.
A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans for which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).
In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much more portable ventilators suitable for home use (notably the Aura, HT50 and HT70 lines).
Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:
[D]esigned to provide support to patients who do not require complex critical care ventilators.
A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.
This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.
The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:
This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.
Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.
In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.
Covidien appears to have discontinued production of its own portable ventilator in 2014
Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.
The product is listed in the company’s 2011, 2012 and 2013 annual reports:
Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….
Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.
(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).
Putting the Newport deal in context
Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices.
That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating-room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one.
By the time Covidien was purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the companies’ products, with Covidien focusing predominantly on in-hospital “diagnostic, surgical, and critical care” products and Medtronic on post-acute care.
Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces
So why was the Aura ventilator discontinued?
Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems.
The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where
mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.
The Aura ventilator would thus have sat somewhere between Newport’s two other ventilators: the e360, which could be used in pediatric care (for newborns smaller than 5 kg) but was not intended for home care use (or for the extreme scenarios envisioned by the US government); and the more portable HT70, which could be used in home care environments, but not for newborns.
Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:
The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).
In other words, the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.
And the US Government RFP confirms that this was indeed an important requirement:
The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features:
• Flexibility to accommodate a wide patient population range from neonate to adult.
Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:
Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver — both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.
As Jason Crawford, an engineer and tech industry commentator, put it:
Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.
The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:
Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could have profitably sold the Aura at such a low price, there would have been little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here).
Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly.
In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition.
Ending the Aura project might have been an efficient outcome
As suggested above, moreover, it is entirely possible that Covidien was better able than Newport to recognize the Aura project’s poor prospects, and better organized to make the requisite decision to abandon it.
Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion — stood to realize from fulfilling a substantial US government contract could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.
While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage the target’s assets more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965):
Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.
Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.
Indeed, as one report of remarks by Florian Ederer (a co-author of the killer acquisition paper cited above) put it:

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.
In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.
Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that the mechanical ventilator industry is populated by numerous competitors.
And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.
“Blackboard economics” because, as Ronald Coase put it in criticizing economic theorizing detached from real-world markets:

What is studied is a system which lives in the minds of economists but not on earth.
Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations.
The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence.
Finally, what the New York Times piece does offer is a chilling tale of government failure.
The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US.
The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit.
And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”
On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm’s licensing agreements — indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable and non-discriminatory”) commitments in the context of standard-setting bodies, and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.
At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.
The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018 — about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC’s major claims against Qualcomm were as follows:
1. Qualcomm has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentive to innovate, and raises the prices paid by end consumers for cellphones and tablets.
2. Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license its chipset-maker rivals; and its exclusive deals with Apple.
3. These practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
4. Given that Qualcomm has committed to standard-setting bodies to license these patents on FRAND terms, such behaviour qualifies as a breach of FRAND.
The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:
[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.
Qualcomm filed a motion to dismiss on April 3, 2017, which was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm’s conduct violates §§ 1 and 2 of the Sherman Act, and that Qualcomm had entered into exclusive dealing arrangements with Apple. Thus, the court held, the FTC had adequately stated a claim under § 5 of the FTCA.
It is important to note that the core of the FTC’s argument regarding Qualcomm’s abuse of its dominant position rests on the “no license, no chips” policy and the resulting alleged breach of Qualcomm’s FRAND obligations. The FTC falls short, however, of showing that the royalties Qualcomm charges OEMs actually exceed FRAND rates (and thus amount to a breach), or that they qualify as the “tax” posited by the FTC’s price-squeeze theory.
(The Court did not address whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though their scope remains quite amorphous.)
On August 30, the FTC filed a motion for partial summary judgment on claims concerning the applicability of California contract law, leaving the antitrust issues to be decided at trial, which is set for January next year.
In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S.-based standards development organisations (SDOs):
The Telecommunications Industry Association (TIA) and
The Alliance for Telecommunications Industry Solutions (ATIS).
These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.
The FTC asserts that this obligation should be interpreted to extend to Qualcomm’s rival modem chip manufacturers and sellers. It therefore requests that the Court grant summary judgment, since there are no disputed facts regarding the obligation. It submits that this would “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments”, including its commitments to ETSI, a third SDO. A review of the FTC’s heavily redacted filing and Qualcomm’s subsequent response, however, indicates that questions of fact and law remain as regards Qualcomm’s licensing commitments and their scope. Thus, contrary to the FTC’s assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.
Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.
The IP licensing policies of the two SDOs provide for the licensing of relevant patents, on FRAND terms, to all applicants who implement the standards. The key issues, however, are whether components such as modem chips can be said to implement those standards, and whether component-level licensing falls within this ambit. The resolution of these key issues remains unclear.
Qualcomm explains that its commitments to ATIS and TIA do not require licenses to be made available for modem chips, because modem chips do not implement or practice cellular standards and because the standards do not define the operation of modem chips.
In contrast, the FTC’s complaint raises the question of whether FRAND commitments extend to licensing at all levels of the supply chain. Different components needed for a device come together to facilitate the adoption and implementation of a standard. However, it does not logically follow that each individual component separately practices or implements that standard merely because it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.
These distinctions are significant for interpreting the scope of the FRAND promise, which is commonly understood to extend to the licensing of technologies incorporated in a standard to potential users of that standard. Understanding the meaning of a “user” becomes critical here, and Qualcomm’s submission draws attention to this.
An important factor in determining who is a “user” of a particular standard is the extent to which the standard is practiced or implemented. Some SDOs have addressed this in their policies by clarifying that FRAND obligations extend to those “wholly compliant” or “fully conforming” with the specific standards. Clause 6.1 of the ETSI IPR Policy clarifies that a patent holder’s obligation to make licenses available is limited to “methods” and “equipments”: it defines an equipment as “a system or device fully conforming to a standard,” and methods as “any method or operation fully conforming to a standard.”
It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel has said in a decision that there is no agreement on the definition of the phrase “wholly compliant implementation.”
Device-level licensing is the prevailing industry-wide practice, followed by companies like Ericsson, InterDigital, Nokia and others. In November 2017, the European Commission issued guidelines on the licensing of SEPs and took a balanced approach on this issue, declining to prescribe component-level licensing.
The former Director-General of ETSI, Karl Rosenbrock, takes the contrary view, explaining that ETSI’s policy “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”
Dr. Bertram Huber, a legal expert who personally participated in the drafting of ETSI’s IPR policy, wrote a response to Rosenbrock in which he explains that ETSI’s licensing obligations apply to systems “fully conforming” to the standard:

[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

He highlights how, in adopting its IPR policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice whereby manufacturers of complete end-devices conclude licenses to the standard essential patents practiced in those devices.
Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs, in a collaboration called the 3rd Generation Partnership Project, which works on the development of cellular technologies. TIA and ATIS are both accredited by ANSI, so the policies each adopts are likely to influence the others. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.
The non-discrimination obligation, which per the FTC requires Qualcomm to license competitors that manufacture and sell chips, is limited by the scope of the IPR policies and contractual agreements that bind Qualcomm, and depends on the specific SDO’s policy. As discussed, the policies of ATIS and TIA are unclear on this.
In conclusion, the FTC’s filing does not obviate the need to hear extrinsic evidence on what Qualcomm’s commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA (on whether they include component-level licensing, and on whether modem chips can be said to practice the standard in their entirety), it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.
What to make of Wednesday’s decision by the European Commission alleging that Google has engaged in anticompetitive behavior? In this post, I contrast the European Commission’s (EC) approach to competition policy with US antitrust, briefly explore the history of smartphones and then discuss the ruling.
Asked about the EC’s decision the day it was announced, FTC Chairman Joseph Simons noted that, while the market is concentrated, Apple and Google “compete pretty heavily against each other” with their mobile operating systems, in stark contrast to the way the EC defined the market. Simons also stressed that for the FTC what matters is not the structure of the market per se but whether or not there is harm to the consumer. This again contrasts with the European Commission’s approach, which does not require harm to consumers. As Simons put it:
Once they [the European Commission] find that a company is dominant… that imposes upon the company kind of like a fairness obligation irrespective of what the effect is on the consumer. Our regulatory… our antitrust regime requires that there be a harm to consumer welfare — so the consumer has to be injured — so the two tests are a little bit different.
Indeed, and as the history below shows, the popularity of Apple’s iOS and Google’s Android operating systems arose because they were superior products — not because of anticompetitive conduct on the part of either Apple or Google. On the face of it, the conduct of both Apple and Google has led to consumer benefits, not harms. So, from the perspective of U.S. antitrust authorities, there is no reason to take action.
Moreover, there is a danger that by taking action as the EU has done, competition and innovation will be undermined — which would be a perverse outcome indeed. These concerns were reflected in a statement by Senator Mike Lee (R-UT):
Today’s decision by the European Commission to fine Google over $5 billion and require significant changes to its business model to satisfy EC bureaucrats has the potential to undermine competition and innovation in the United States,” Sen. Lee said. “Moreover, the decision further demonstrates the different approaches to competition policy between U.S. and EC antitrust enforcers. As discussed at the hearing held last December before the Senate’s Subcommittee on Antitrust, Competition Policy & Consumer Rights, U.S. antitrust agencies analyze business practices based on the consumer welfare standard. This analytical framework seeks to protect consumers rather than competitors. A competitive marketplace requires strong antitrust enforcement. However, appropriate competition policy should serve the interests of consumers and not be used as a vehicle by competitors to punish their successful rivals.
Ironically, the fundamental basis for the Commission’s decision is an analytical framework developed by economists at Harvard in the 1950s, which presumes that the structure of a market determines the conduct of the participants, which in turn presumptively affects outcomes for consumers. This “structure-conduct-performance” paradigm has been challenged both theoretically and empirically (and by “challenged,” I mean “demolished”).
Maintaining, as EC Commissioner Vestager has, that “What would serve competition is to have more players,” is to adopt a presumption regarding competition rooted in the structure of the market, without sufficient attention to the facts on the ground. As French economist Jean Tirole noted in his Nobel Prize lecture:
Economists accordingly have advocated a case-by-case or “rule of reason” approach to antitrust, away from rigid “per se” rules (which mechanically either allow or prohibit certain behaviors, ranging from price-fixing agreements to resale price maintenance). The economists’ pragmatic message however comes with a double social responsibility. First, economists must offer a rigorous analysis of how markets work, taking into account both the specificities of particular industries and what regulators do and do not know….
Second, economists must participate in the policy debate…. But of course, the responsibility here goes both ways. Policymakers and the media must also be willing to listen to economists.
In good Tirolean fashion, we begin with an analysis of how the market for smartphones developed. What quickly emerges is that the structure of the market is a function of intense competition, not its absence. And, by extension, mandating a different structure will likely impede competition, or, at the very least, will not likely contribute to it.
A brief history of smartphone competition
In 2006, Nokia’s N70 became the first smartphone to sell more than a million units. It was a beautiful device, with a simple touch screen interface and real push buttons for numbers. The following year, Apple released its first iPhone. It sold 7 million units — about the same as Nokia’s N95 and slightly less than LG’s Shine. Not bad, but paltry compared to Nokia’s 1200 series phones, which had combined sales of over 250 million that year — about twice the total of all smartphone sales in 2007.
By 2017, smartphones had come to dominate the market, with total sales of over 1.5 billion. At the same time, the structure of the market had changed dramatically. In the first quarter of 2018, Apple’s iPhone X and iPhone 8 were the two best-selling smartphones in the world. In total, Apple shipped just over 52 million phones, accounting for 14.5% of the global market. Samsung, which has a wider range of devices, sold even more: 78 million phones, or 21.7% of the market. In third and fourth place were Huawei (11%) and Xiaomi (7.5%). Nokia and LG didn’t even make it into the top 10, with market shares of only 3% and 1%, respectively.
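(As a quick sanity check, the unit shipments and market shares quoted above can be used to back out the implied size of the overall market. The minimal Python sketch below uses only the Q1 2018 numbers from the preceding paragraph; the roughly 360 million figure it produces is an inference from those numbers, not a figure from the cited sources.)

```python
# Back out the implied size of the global smartphone market in Q1 2018
# from the unit shipments and market shares quoted above.

vendors = {
    # vendor: (units shipped, in millions; reported market share)
    "Apple":   (52, 0.145),
    "Samsung": (78, 0.217),
}

for name, (units, share) in vendors.items():
    implied_total = units / share
    print(f"{name}: {units}M units at {share:.1%} share "
          f"-> implied market of ~{implied_total:.0f}M phones")

# Both vendors imply a total market of roughly 360 million units for the
# quarter, so the quoted figures are internally consistent.
```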
Several factors have driven this highly dynamic market. Dramatic improvements in cellular data networks have played a role. But arguably of greater importance has been the development of software that offers consumers an intuitive and rewarding experience.
Apple’s iOS and Google’s Android operating systems have proven to be enormously popular among both users and app developers. This has generated synergies — or what economists call network externalities — as more apps are developed, more people are attracted to the ecosystem, and vice versa, a virtuous circle that benefits both users and app developers.
By contrast, Nokia’s early smartphones, including the N70 and N95, ran Symbian, the operating system developed for Psion’s handheld devices, which had a clunkier user interface and was more difficult to code for — so it was less attractive to both users and developers. In addition, Symbian lacked an effective means of solving the problem of fragmentation of the operating system across different devices, which made it difficult for developers to create apps that ran across the ecosystem — something both Apple (through its closed system) and Google (through agreements with carriers) were able to address. Meanwhile, the Java MIDP platform used in LG’s Shine, and its successor J2ME, imposed restrictions on developers (such as prohibiting access to files, hardware, and network connections) that seem to have made it less attractive than Android.
The relative superiority of their operating systems enabled Apple and the manufacturers of Android-based phones to steal a march on the early leaders in the smartphone revolution.
The fact that Google allows smartphone manufacturers to install Android for free, distributes Google Play and other apps in a free bundle, and pays such manufacturers for preferential treatment for Google Search, has also kept the cost of Android-based smartphones down. As a result, Android phones are the cheapest on the market, providing a powerful experience for as little as $50. It is reasonable to conclude from this that innovation, driven by fierce competition, has led to devices, operating systems, and apps that provide enormous benefits to consumers.
The Commission decision would harm device manufacturers, app developers and consumers
The EC’s decision seems to disregard the history of smartphone innovation and competition and their ongoing consequences. As Dirk Auer explains, the Open Handset Alliance (OHA) was created specifically to offer an effective alternative to Apple’s iPhone — and it worked. Indeed, it worked so spectacularly that Android is installed on about 80% of all new phones. This success was the result of several factors that the Commission now seeks to undermine:
First, in order to maintain order within the Android universe, and thereby ensure that apps developed for Android would function on the vast majority of Android devices, Google and the OHA sought to limit the extent to which Android “forks” could be created. (Apple didn’t face this problem because its source code is proprietary, so it cannot be modified by third-party developers.) One way Google does this is by imposing restrictions on the licensing of its proprietary apps, such as the Google Play store (a repository of apps, similar to Apple’s App Store).
Device manufacturers that don’t conform to these restrictions may still build devices with their forked version of Android — but without those Google apps. Indeed, Amazon developed a non-conforming version of Android and built its own app repository for its Fire devices (though it is still possible to add the Google Play Store). That strategy seems to be working for Amazon in the tablet market; in 2017 it rose past Samsung to become the second biggest manufacturer of tablets worldwide, after Apple.
Second, in order to be able to offer Android for free to smartphone manufacturers, Google sought to develop unique revenue streams (because, although the software is offered for free, it turns out that software developers generally don’t work for free). The main way Google did this was by requiring manufacturers that choose to install Google Play also to install its browser (Chrome) and search tools, which generate revenue from advertising. At the same time, Google kept its platform open by permitting preloads of rivals’ apps and creating a marketplace where rivals can also reach scale. Mozilla’s Firefox browser, for example, has been downloaded over 100 million times on Android.
The importance of these factors to the success of Android is acknowledged by the EC. But instead of treating them as legitimate business practices that enabled the development of high-quality, low-cost smartphones and a universe of apps that benefits billions of people, the Commission simply asserts that they are harmful, anticompetitive practices.
For example, the Commission asserts that
In order to be able to pre-install on their devices Google’s proprietary apps, including the Play Store and Google Search, manufacturers had to commit not to develop or sell even a single device running on an Android fork. The Commission found that this conduct was abusive as of 2011, which is the date Google became dominant in the market for app stores for the Android mobile operating system.
This is simply absurd, to say nothing of ahistorical. As noted, the restrictions on Android forks play an important role in maintaining the coherency of the Android ecosystem. If device manufacturers were able to freely install Google apps (and other apps via the Play Store) on devices running problematic Android forks that were unable to run the apps properly, consumers — and app developers — would be frustrated, Google’s brand would suffer, and the value of the ecosystem would be diminished. Extending this restriction to all devices produced by a specific manufacturer, regardless of whether they come with Google apps preinstalled, reinforces the importance of the prohibition to maintaining the coherency of the ecosystem.
It is ridiculous to say that something (efforts to rein in Android forking) that made perfect sense until 2011 and that was central to the eventual success of Android suddenly becomes “abusive” precisely because of that success — particularly when the pre-2011 efforts were often viewed as insufficient and unsuccessful (a January 2012 Guardian Technology Blog post, “How Google has lost control of Android,” sums it up nicely).
Meanwhile, if Google is unable to tie pre-installation of its search and browser apps to the installation of its app store, then it will have less financial incentive to continue to maintain the Android ecosystem. Or, more likely, it will have to find other ways to generate revenue from the sale of devices in the EU — such as charging device manufacturers for Android or Google Play. The result is that consumers will be harmed, either because the ecosystem will be degraded, or because smartphones will become more expensive.
The troubling absence of Apple from the Commission’s decision
In addition, the EC’s decision is troubling in other ways. First, consider its definition of the market. The ruling asserts that “Through its control over Android, Google is dominant in the worldwide market (excluding China) for licensable smart mobile operating systems, with a market share of more than 95%.” But “licensable smart mobile operating systems” is a very narrow definition, as it necessarily excludes operating systems that are not licensable — such as Apple’s iOS and RIM’s Blackberry OS. Since Apple has nearly 25% of the smartphone market in Europe, the European Commission has — through its definition of the market — presumed away the primary source of effective competition. As Pinar Akman has noted:
How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system only on devices that Apple itself manufactures?
The EU then invents a series of claims regarding the lack of competition with Apple:
end user purchasing decisions are influenced by a variety of factors (such as hardware features or device brand), which are independent from the mobile operating system;
It is not obvious that this is evidence of a lack of competition. A better explanation is that the EU’s narrow definition of the market is defective. In fact, one could easily draw the opposite conclusion from the one drawn by the Commission: the fact that purchasing decisions are driven by various factors suggests that there is substantial competition, with phone manufacturers seeking to design phones that offer a range of features, on a number of dimensions, to best capture diverse consumer preferences. They are able to do this in large part precisely because consumers can rely upon a generally similar operating system and continued access to the apps that they have downloaded. As Tim Cook likes to remind his investors, Apple is quite successful at persuading “Android switchers” to move to iOS.
Apple devices are typically priced higher than Android devices and may therefore not be accessible to a large part of the Android device user base;
And yet, in the first quarter of 2018, Apple phones accounted for five of the top ten selling smartphones worldwide. Meanwhile, several competing phones, including the fifth and sixth best-sellers, Samsung’s Galaxy S9 and S9+, sell for similar prices to the most expensive iPhones. And a refurbished iPhone 6 can be had for less than $150.
Android device users face switching costs when switching to Apple devices, such as losing their apps, data and contacts, and having to learn how to use a new operating system;
This is, of course, true for any system switch. And yet the growing market share of Apple phones suggests that some users are willing to part with those sunk costs. Moreover, the increasing predominance of cloud-based and cross-platform apps, as well as Apple’s own “Move to iOS” Android app (which facilitates the transfer of users’ data from Android to iOS), means that the costs of switching border on trivial. As mentioned above, Tim Cook certainly believes in “Android switchers.”
even if end users were to switch from Android to Apple devices, this would have limited impact on Google’s core business. That’s because Google Search is set as the default search engine on Apple devices and Apple users are therefore likely to continue using Google Search for their queries.
This is perhaps the most bizarre objection of them all. The fact that Apple chooses to install Google search as the default demonstrates that consumers prefer that system over others. Indeed, this highlights a fundamental problem with the Commission’s own rationale. As Akman notes:
It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) cases of tying in the EU to date concerned tying where the dominant undertaking leveraged its dominance in one market to distort or eliminate competition in an otherwise competitive market.
As the foregoing demonstrates, the EC’s decision is based on a fundamental misunderstanding of the nature and evolution of the market for smartphones and associated applications. The statement by Commissioner Vestager quoted above — that “What would serve competition is to have more players” — belies this misunderstanding and highlights the erroneous assumptions underpinning the Commission’s analysis, which is wedded to a theory of market competition that was long ago thrown out by economists.
And, thankfully, it appears that the FTC Chairman is aware of at least some of the flaws in the EC’s conclusions.
Google will undoubtedly appeal the Commission’s decision. For the sake of the millions of European consumers who rely on Android-based phones and the millions of software developers who provide Android apps, let’s hope that it succeeds.
What does it mean to “own” something? A simple question (with a complicated answer, of course) that, astonishingly, goes unasked in a recent article in the Pennsylvania Law Review entitled, What We Buy When We “Buy Now,” by Aaron Perzanowski and Chris Hoofnagle (hereafter “P&H”). But how can we reasonably answer the question they pose without first trying to understand the nature of property interests?
P&H set forth a simplistic thesis for their piece: when an e-commerce site uses the term “buy” to indicate the purchase of digital media (instead of the term “license”), it deceives consumers. This is so, the authors assert, because the common usage of the term “buy” indicates that there will be some conveyance of property that necessarily includes absolute rights such as alienability, descendibility, and excludability, and digital content doesn’t generally come with these attributes. The authors seek to establish this deception through a poorly constructed survey regarding consumers’ understanding of the parameters of their property interests in digitally acquired copies. (The survey’s considerable limitations are a topic for another day….)
The survey is offered in support of a call to convene interested parties

to discuss how best to communicate to consumers regarding license terms and restrictions in connection with online transactions involving copyrighted works… [as a precursor to] the creation of a multistakeholder process to establish best practices to improve consumers’ understanding of license terms and restrictions in connection with online transactions involving creative works.
Whatever the results of that process, it should not begin, or end, with P&H’s problematic approach.
Getting to their conclusion that platforms are engaged in deceptive practices requires two leaps of faith: First, that property interests are absolute and that any restraint on the use of “property” is inconsistent with the notion of ownership; and second, that consumers’ stated expectations (even assuming that they were measured correctly) alone determine the appropriate contours of legal (and economic) property interests. Both leaps are meritless.
Property and ownership are not absolute concepts
P&H are in such a rush to condemn downstream restrictions on the alienability of digital copies that they fail to recognize that “property” and “ownership” are not absolute terms, and are capable of being properly understood only contextually. Our very notions of what objects may be capable of ownership change over time, along with the scope of authority over owned objects. For P&H, the fact that there are restrictions on the use of an object means that it is not properly “owned.” But that overlooks our everyday understanding of the nature of property.
Ownership is far more complex than P&H allow, and ownership limited by certain constraints is still ownership. As Armen Alchian and Harold Demsetz note in The Property Right Paradigm (1973):
In common speech, we frequently speak of someone owning this land, that house, or these bonds. This conversational style undoubtedly is economical from the viewpoint of quick communication, but it masks the variety and complexity of the ownership relationship. What is owned are rights to use resources, including one’s body and mind, and these rights are always circumscribed, often by the prohibition of certain actions. To “own land” usually means to have the right to till (or not to till) the soil, to mine the soil, to offer those rights for sale, etc., but not to have the right to throw soil at a passerby, to use it to change the course of a stream, or to force someone to buy it. What are owned are socially recognized rights of action. (Emphasis added).
Literally, everything we own comes with a range of limitations on our use rights. Literally. Everything. So starting from a position that limitations on use mean something is not, in fact, owned, is absurd.
Moreover, in defining what we buy when we buy digital goods by reference to analog goods, P&H are comparing apples and oranges, without acknowledging that both apples and oranges are bought.
There has been a fair amount of discussion about the nature of digital content transactions (including by the USPTO and NTIA), and whether they are analogous to traditional sales of objects or more properly characterized as licenses. But this is largely a distinction without a difference, and the nature of the transaction is unnecessary in understanding that P&H’s assertion of deception is unwarranted.
Quite simply, we are accustomed to buying licenses as well as products. Whenever we buy a ticket — e.g., an airline ticket or a ticket to the movies — we are buying the right to use something or gain some temporary privilege. These transactions are governed by the terms of the license. But we certainly buy tickets, no? Alchian and Demsetz again:
The domain of demarcated uses of a resource can be partitioned among several people. More than one party can claim some ownership interest in the same resource. One party may own the right to till the land, while another, perhaps the state, may own an easement to traverse or otherwise use the land for specific purposes. It is not the resource itself which is owned; it is a bundle, or a portion, of rights to use a resource that is owned. In its original meaning, property referred solely to a right, title, or interest, and resources could not be identified as property any more than they could be identified as right, title, or interest. (Emphasis added).
P&H essentially assert that restrictions on the use of property are so inconsistent with the notion of property that it would be deceptive to describe the acquisition transaction as a purchase. But such a claim completely overlooks the fact that there are restrictions on any use of property in general, and on ownership of copies of copyright-protected materials in particular.
Take analog copies of copyright-protected works. While the lawful owner of a copy is able to lend that copy to a friend, sell it, or even use it as a hammer or paperweight, he or she cannot offer it for rental (for certain kinds of works), cannot reproduce it, may not publicly perform or broadcast it, and may not use it to bludgeon a neighbor. In short, there are all kinds of restrictions on the use of said object — yet P&H have little problem with defining the relationship of person to object as “ownership.”
Consumers’ understanding of all the terms of exchange is a poor metric for determining the nature of property interests
When we buy digital goods, we probably care a great deal about a few terms. For a digital music file, for example, we care first and foremost about whether it will play on our device(s). Other terms are of diminishing importance. Users certainly care whether they can play a song when offline, for example, but whether their children will be able to play it after they die? Not so much. That eventuality may, in fact, be specified in the license, but the nature of this particular ownership relationship includes a degree of rational ignorance on the users’ part: The typical consumer simply doesn’t care. In other words, she is, in Nobel-winning economist Herbert Simon’s term, “boundedly rational.” That isn’t deception; it’s a feature of life without which we would be overwhelmed by “information overload” and unable to operate. We have every incentive and ability to know the terms we care most about, and to ignore the ones about which we care little.
Relatedly, P&H also fail to understand the relationship between price and ownership. A digital song that is purchased from Amazon for $.99 comes with a set of potentially valuable attributes. For example:
It may be purchased on its own, without the other contents of an album;
It never degrades in quality, and it’s extremely difficult to misplace;
It may be purchased from one’s living room and be instantaneously available;
It can be easily copied or transferred onto multiple devices; and
It can be stored in Amazon’s cloud without taking up any of the consumer’s physical memory resources.
In many ways that matter to consumers, digital copies are superior to analog or physical ones. And yet, compared to physical media, on a per-song basis (assuming one could even purchase a physical copy of a single song without purchasing an entire album), $.99 may represent a considerable discount. Moreover, in 1982 when CDs were first released, they cost an average of $15. In 2017 dollars, that would be $38. Yet today most digital album downloads can be found for $10 or less.
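(For readers who want to check that inflation adjustment, here is a minimal sketch. The CPI-U annual averages used below, roughly 96.5 for 1982 and 245.1 for 2017, are my approximations rather than figures from the original text, but they reproduce the same ballpark result.)

```python
# Convert the 1982 average CD price into 2017 dollars using the CPI.

CPI_1982 = 96.5    # approximate CPI-U annual average for 1982 (assumption)
CPI_2017 = 245.1   # approximate CPI-U annual average for 2017 (assumption)

price_1982 = 15.00  # average CD price at launch, per the text

price_2017 = price_1982 * (CPI_2017 / CPI_1982)
print(f"${price_1982:.2f} in 1982 is roughly ${price_2017:.2f} in 2017 dollars")
# -> roughly $38, matching the figure above, versus ~$10 or less for a
#    typical digital album download today.
```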
Of course, songs purchased on CD or vinyl offer other benefits that a digital copy can’t provide. But the main thing — the ability to listen to the music — is approximately equal, and yet the digital copy offers greater convenience at (often) lower price. It is impossible to conclude that a consumer is duped by such a purchase, even if it doesn’t come with the ability to resell the song.
In fact, given the price-to-value ratio, it is perhaps reasonable to think that consumers know full well (or at least suspect) that there might be some corresponding limitations on use — the inability to resell, for example — that would explain the discount. For some people, those limitations might matter, and those people, presumably, figure out whether such limitations are present before buying a digital album or song. For everyone else, however, the ability to buy a digital song for $.99 — including all of the benefits of digital ownership, but minus the ability to resell — is a good deal, just as it is worth it to a home buyer to purchase a house, regardless of whether it is subject to various easements.
Consumers are, in fact, familiar with “buying” property with all sorts of restrictions
The inability to resell digital goods looms inordinately large for P&H: According to them, by virtue of the fact that digital copies may not be resold, “ownership” is no longer an appropriate characterization of the relationship between the consumer and her digital copy. P&H believe that digital copies of works are sufficiently similar to analog versions that traditional doctrines of exhaustion (which would permit a lawful owner of a copy of a work to dispose of that copy as he or she deems appropriate) should apply equally to digital copies, and thus that the inability to alienate the copy as the consumer wants means that there is no ownership interest per se.
But, as discussed above, even ownership of a physical copy doesn’t convey to the purchaser the right to make or allow any use of that copy. So why should we treat the ability to alienate a copy as the determining factor in whether it is appropriate to refer to the acquisition as a purchase? P&H arrive at this conclusion only through the illogical assertion that
Consumers operate in the marketplace based on their prior experience. We suggest that consumers’ “default” behavior is based on the experiences of buying physical media, and the assumptions from that context have carried over into the digital domain.
P&H want us to believe that consumers can’t distinguish between the physical and virtual worlds, and that their ability to use media doesn’t differentiate between these realms. But consumers do understand (to the extent that they care) that they are buying a different product, with different attributes. Does anyone try to play a vinyl record on his or her phone? There are perceived advantages and disadvantages to different kinds of media purchases. The ability to resell is only one of these — and for many (most?) consumers not likely the most important.
And, furthermore, the notion that consumers better understood their rights — and the limitations on ownership — in the physical world and that they carried these well-informed expectations into the digital realm is fantasy. Are we to believe that the consumers of yore understood that when they bought a physical record they could sell it, but not rent it out? That if they played that record in a public place they would need to pay performance royalties to the songwriter and publisher? Not likely.
Simply put, there is a wide variety of goods and services that we clearly buy, but that have all kinds of attributes that do not fit P&H’s crabbed definition of ownership. For example:
We buy tickets to events and membership in clubs (which, depending upon club rules, may not be alienated, and which always lapse for non-payment).
We buy houses notwithstanding the fact that in most cases all we own is the right to inhabit the premises for as long as we pay the bank (which actually retains more of the incidents of “ownership”).
In fact, we buy real property encumbered by a series of restrictive covenants: Depending upon where we live, we may not be able to build above a certain height, we may not paint the house certain colors, we may not be able to leave certain objects in the driveway, and we may not be able to resell without approval of a board.
We may or may not know (or care) about all of the restrictions on our use of such property. But surely we may accurately say that we bought the property and that we “own” it, nonetheless.
The reality is that we are comfortable with the notion of buying any number of limited property interests — including the purchasing of a license — regardless of the contours of the purchase agreement. The fact that some ownership interests may properly be understood as licenses rather than as some form of exclusive and permanent dominion doesn’t suggest that a consumer is not involved in a transaction properly characterized as a sale, or that a consumer is somehow deceived when the transaction is characterized as a sale — and P&H are surely aware of this.
Conclusion: The real issue for P&H is “digital first sale,” not deception
At root, P&H are not truly concerned about consumer deception; they are concerned about what they view as unreasonable constraints on the “rights” of consumers imposed by copyright law in the digital realm. Resale looms so large in their analysis not because consumers care about it (or are deceived about it), but because the real object of their enmity is the lack of a “digital first sale doctrine” that exactly mirrors the law regarding physical goods.
But Congress has already determined that there are sufficient distinctions between ownership of digital copies and ownership of analog ones to justify treating them differently, notwithstanding ownership of the particular copy. And for good reason: Trade in “used” digital copies is not a secondary market. Such copies are identical to those traded in the primary market and would compete directly with “pristine” digital copies. It makes perfect sense to treat ownership differently in these cases — and still to say that both digital and analog copies are “bought” and “owned.”
P&H’s deep-seated opposition to current law colors and infects their analysis — and, arguably, their failure to be upfront about it is the real deception. When one starts an analysis with an already-identified conclusion, the path from hypothesis to result is unlikely to withstand scrutiny, and that is certainly the case here.
My colleague, Neil Turkewitz, begins his fine post for Fair Use Week (read: crashing Fair Use Week) by noting that
Many of the organizations celebrating fair use would have you believe, because it suits their analysis, that copyright protection and the public interest are diametrically opposed. This is merely a rhetorical device, and is a complete fallacy.
If I weren’t a recovering law professor, I would just end there: that about sums it up, and “the rest is commentary,” as they say. Alas….
All else equal, creators would like as many people to license their works as possible; there’s no inherent incompatibility between “incentives and access” (which is just another version of the fallacious “copyright protection versus the public interest” trope). Everybody wants as much access as possible. Sure, consumers want to pay as little as possible for it, and creators want to be paid as much as possible. That’s a conflict, and at the margin it can seem like a conflict between access and incentives. But it’s not a fundamental, philosophical, and irreconcilable difference — it’s the last 15 minutes of negotiation before the contract is signed.
Reframing what amounts to a fundamental agreement into a pitched battle for society’s soul is indeed a purely rhetorical device — and a mendacious one, at that.
The devil is in the details, of course, and there are still disputes on the margin, as I said. But it helps to know what they’re really about, and why they are so far from the fanciful debates the copyright scolds wish we were having.
First, price is, in fact, a big deal. For the creative industries it can be the difference between, say, making one movie or a hundred, and for artists it can be the difference between earning a livelihood writing songs or packing it in for a desk job.
But despite their occasional lip service to the existence of trade-offs, many “fair-users” see price — i.e., licensing agreements — as nothing less than a threat to social welfare. After all, the logic runs, if copies can be made at (essentially) zero marginal cost, a positive price is just extortion. They say, “more access!,” but they don’t mean “more access at an agreed-upon price”; they mean “zero-price access, and nothing less.” These aren’t the same thing, and when “fair use” is a stand-in for “zero-price use,” fair-users are moving the goalposts — and being disingenuous about it.
The other, related problem, of course, is piracy. Sometimes rightsholders’ objections to the expansion of fair use are about limiting access. But typically that’s true only where fine-tuned contracting isn’t feasible, and where the only realistic choice they’re given is between no access for some people, and pervasive (and often unstoppable) piracy. There are any number of instances where rightsholders have no realistic prospect of efficiently negotiating licensing terms and receiving compensation, and would welcome greater access to their works even without a license — as long as the result isn’t also (or only) excessive piracy. The key thing is that, in such cases, opposition to fair use isn’t opposition to reasonable access, even free access. It’s opposition to piracy.
Time-shifting with VCRs and space-shifting with portable mp3 players (to take two contentious historical examples) fall into this category (even if they are held up — as they often are — by the fair-users as totems of their fanciful battle). At least at the time of the Sony and Diamond Rio cases, when there was really no feasible way to enforce licenses or charge differential prices for such uses, the choice rightsholders faced was effectively all-or-nothing, and they had to pick one. I’m pretty sure, all else equal, they would have supported such uses, even without licenses and differential compensation — except that the piracy risk was so significant that it swamped the likely benefits, tilting the scale toward “nothing” instead of “all.”
Again, the reality is that creators and rightsholders were confronted with a choice between two imperfect options; neither was likely “right,” and they went with the lesser evil. But one can’t infer from that constrained decision an inherent antipathy to fair use. Sadly, such decisions have to be made in the real world, not law reviews and EFF blog posts. As economists Benjamin Klein, Andres Lerner and Kevin Murphy put it regarding the Diamond Rio case:
[R]ather than representing an attempt by copyright-holders to increase their profits by controlling legally established “fair uses,”… the obvious record-company motivation is to reduce the illegal piracy that is encouraged by the technology. Eliminating a “fair use” [more accurately, “opposing an expansion of fair use” -ed.] is not a benefit to the record companies; it is an unfortunate cost they have to bear to solve the much larger problem of infringing uses. The record companies face competitive pressure to avoid these costs by developing technologies that distinguish infringing from non-infringing copying.
This last point is important, too. Fair-users don’t like technological protection measures, either, even if they actually facilitate licensing and broader access to copyrighted content. But that really just helps to reveal the poverty of their position. They should welcome technology that expands access, even if it also means that it enables rightsholders to fine-tune their licenses and charge a positive price. Put differently: Why do they hate Spotify!?
I’m just hazarding a guess here, but I suspect that the antipathy to technological solutions goes well beyond the short-term limits on some current use of content that copyright minimalists think shouldn’t be limited. If technology, instead of fair use, is truly determinative of the extent of zero-price access, then their ability to seriously influence (read: rein in) the scope of copyright is diminished. Fair use is amorphous. They can bring cases, they can lobby Congress, they can pen strongly worded blog posts, and they can stage protests. But they can’t do much to stop technological progress. Of course, technology does at least as much to limit the enforceability of licenses and create new situations where zero-price access is the norm. But still, R&D is a lot harder than PR.
What’s more, if technology were truly determinative, it would frequently mean that former fair uses could become infringing at some point (or vice versa, of course). Frankly, there’s no reason for time-shifting of TV content to continue to be considered a fair use today. We now have the technology to both enable time shifting and to efficiently license content for the purpose, charge a differential price for it, and enforce the terms. In fact, all of that is so pervasive today that most users do pay for time-shifting technologies, under license terms that presumably define the scope of their right to do so; they just may not have read the contract. Where time-shifting as a fair use rears its ugly head today is in debates over new, infringing technology where, in truth, the fair use argument is really a malleable pretext to advocate for a restriction on the scope of copyright (e.g., Aereo).
In any case, as the success of business models like Spotify and Netflix (to say nothing of Comcast’s X1 interface and new Xfinity Stream app) attests, technology has enabled users to legitimately engage in what once seemed conceivable only under fair use. Yes, at a price — one that millions of people are willing to pay. It is surely the case that rightsholders’ licensing of technologies like these has made content more accessible, to more people, and with higher-quality service, than a regime of expansive unlicensed use could ever have done.
At the same time, let’s not forget that, often, even when they could efficiently distribute content only at a positive price, creators offer up scads of content for free, in myriad ways. Sure, the objective is to maximize revenue overall by increasing exposure, price discriminating, or enhancing the quality of paid-for content in some way — but so what? More content is more content, and easier access is easier access. All of that uncompensated distribution isn’t rightsholders nodding toward the copyright scolds’ arguments; it’s perfectly consistent with licensing. Obviously, the vast majority of music, for example, is listened to subject to license agreements, not because of fair use exceptions or rightsholders’ largesse.
For the vast majority of creators, users and uses, licensed access works, and gets us massive amounts of content and near ubiquitous access. The fair use disputes we do have aren’t really about ensuring broad access; that’s already happening. Rather, those disputes are either niggling over the relatively few ambiguous margins on the one hand, or, on the other, fighting the fair-users’ manufactured, existential fight over whether copyright exceptions will subsume the rule. The former is to be expected: Copyright boundaries will always be imperfect, and courts will always be asked to make the close calls. The latter, however, is simply a drain on resources that could be used to create more content, improve its quality, distribute it more broadly, or lower prices.
Copyright law has always been, and always will be, operating in the shadow of technology — technology both for distribution and novel uses, as well as for pirating content. The irony is that, as digital distribution expands, it has dramatically increased the risk of piracy, even as copyright minimalists argue that the low costs of digital access justify a more expansive interpretation of fair use — which would, in turn, further increase the risk of piracy.
Creators’ opposition to this expansion has nothing to do with opposition to broad access to content, and everything to do with ensuring that piracy doesn’t overwhelm their ability to get paid, and to produce content in the first place.
Even were fair use to somehow disappear tomorrow, there would be more and higher-quality content, available to more people in more places, than ever before. But creators have no interest in seeing fair use disappear. What they do have is an interest in licensing their content as broadly as possible when doing so is efficient, and in minimizing piracy. Sometimes legitimate fair-use questions get caught in the middle. We could and should have a reasonable debate over the precise contours of fair use in such cases. But the false dichotomy of creators against users makes that extremely difficult. Until the disingenuous rhetoric is clawed back, we’re stuck with needless fights that don’t benefit either users or creators — although they do benefit the policy scolds, academics, wonks and businesses that foment them.
Over the weekend, Senator Al Franken and FCC Commissioner Mignon Clyburn issued an impassioned statement calling for the FCC to thwart the use of mandatory arbitration clauses in ISPs’ consumer service agreements — starting with a ban on mandatory arbitration of privacy claims in the Chairman’s proposed privacy rules. Unfortunately, their call to arms rests upon a number of inaccurate or weak claims. Before the Commissioners vote on the proposed privacy rules later this week, they should carefully consider whether consumers would actually be served by such a ban.
To begin with, it is firmly cemented in Supreme Court precedent that the Federal Arbitration Act (FAA) “establishes ‘a liberal federal policy favoring arbitration agreements.’” As the Court recently held:
[The FAA] reflects the overarching principle that arbitration is a matter of contract…. [C]ourts must “rigorously enforce” arbitration agreements according to their terms…. That holds true for claims that allege a violation of a federal statute, unless the FAA’s mandate has been “overridden by a contrary congressional command.”
For better or for worse, that’s where the law stands, and it is the exclusive province of Congress — not the FCC — to change it. Yet nothing in the Communications Act (to say nothing of the privacy provisions in Section 222 of the Act) constitutes a “contrary congressional command.”
And perhaps that’s for good reason. In enacting the statute, Congress didn’t demonstrate the same pervasive hostility toward companies and their relationships with consumers that has characterized the way this FCC has chosen to enforce the Act. As Commissioner O’Rielly noted in dissenting from the privacy NPRM:
I was also alarmed to see the Commission acting on issues that should be completely outside the scope of this proceeding and its jurisdiction. For example, the Commission seeks comment on prohibiting carriers from including mandatory arbitration clauses in contracts with their customers. Here again, the Commission assumes that consumers don’t understand the choices they are making and is willing to impose needless costs on companies by mandating how they do business.
If the FCC were to adopt a provision prohibiting arbitration clauses in its privacy rules, it would conflict with the FAA — and the FAA would win. Along the way, however, it would create a thorny uncertainty for both companies and consumers seeking to enforce their contracts.
The evidence suggests that arbitration is pro-consumer
But the lack of legal authority isn’t the only problem with the effort to shoehorn an anti-arbitration bias into the Commission’s privacy rules: It’s also bad policy.
In the 2015 Open Internet Order, we agreed with the observation that “mandatory arbitration, in particular, may more frequently benefit the party with more resources and more understanding of the dispute procedure, and therefore should not be adopted.” We further discussed how arbitration can create an asymmetrical relationship between large corporations that are repeat players in the arbitration system and individual customers who have fewer resources and less experience. Just as customers should not be forced to agree to binding arbitration and surrender their right to their day in court in order to obtain broadband Internet access service, they should not have to do so in order to protect their private information conveyed through that service.
The Commission may have “agreed” with the cited observations about arbitration, but that doesn’t make those views accurate. As one legal scholar has noted, summarizing the empirical data on the effects of arbitration:
[M]ost of the methodologically sound empirical research does not validate the criticisms of arbitration. To give just one example, [employment] arbitration generally produces higher win rates and higher awards for employees than litigation.
* * *
In sum, by most measures — raw win rates, comparative win rates, some comparative recoveries and some comparative recoveries relative to amounts claimed — arbitration generally produces better results for claimants [than does litigation].
A comprehensive, empirical study by Northwestern Law’s Searle Center on AAA (American Arbitration Association) cases found much the same thing, noting in particular that
Consumer claimants in arbitration incur average arbitration fees of only about $100 to arbitrate small (under $10,000) claims, and $200 for larger claims (up to $75,000).
Consumer claimants also win attorneys’ fees in over 60% of the cases in which they seek them.
On average, consumer arbitrations are resolved in under 7 months.
Consumers win some relief in more than 50% of cases they arbitrate…
And they do almost exactly as well in cases brought against “repeat-player” businesses.
In short, it’s extremely difficult to sustain arguments suggesting that arbitration is tilted against consumers relative to litigation.
(Upper) class actions: Benefitting attorneys — and very few others
But it isn’t just any litigation that Clyburn and Franken seek to preserve; rather, they are focused on class actions:
If you believe that you’ve been wronged, you could take your service provider to court. But you’d have to find a lawyer willing to take on a multi-national telecom provider over a few hundred bucks. And even if you won the case, you’d likely pay more in legal fees than you’d recover in the verdict.
The only feasible way for you as a customer to hold that corporation accountable would be to band together with other customers who had been similarly wronged, building a case substantial enough to be worth the cost—and to dissuade that big corporation from continuing to rip its customers off.
While — of course — litigation plays an important role in redressing consumer wrongs, class actions frequently don’t confer upon class members anything close to the imagined benefits that plaintiffs’ lawyers and their congressional enablers claim. According to a 2013 report on recent class actions by the law firm Mayer Brown LLP, for example:
“In [the] entire data set, not one of the class actions ended in a final judgment on the merits for the plaintiffs. And none of the class actions went to trial, either before a judge or a jury.” (Emphasis in original).
“The vast majority of cases produced no benefits to most members of the putative class.”
“For those cases that do settle, there is often little or no benefit for class members. What is more, few class members ever even see those paltry benefits — particularly in consumer class actions.”
“The bottom line: The hard evidence shows that class actions do not provide class members with anything close to the benefits claimed by their proponents, although they can (and do) enrich attorneys.”
Similarly, a CFPB study of consumer finance arbitration and litigation between 2008 and 2012 seems to indicate that the class action settlements and judgments it studied resulted in anemic relief to class members, at best. The CFPB tries to disguise the results with large, aggregated and heavily caveated numbers (never once actually indicating what the average payouts per person were) that seem impressive. But in the only hard numbers it provides (concerning four classes that ended up settling in 2013), promised relief amounted to under $23 each (comprising both cash and in-kind payment) if every class member claimed against the award. Back-of-the-envelope calculations based on the rest of the data in the report suggest that result was typical.
Furthermore, the average time to settlement of the cases the CFPB looked at was almost 2 years. And somewhere between 24% and 37% involved a non-class settlement — meaning class members received absolutely nothing at all because the named plaintiff personally took a settlement.
By contrast, according to the Searle Center study, the average award in the consumer-initiated arbitrations it studied (admittedly, involving cases with a broader range of claims) was almost $20,000, and the average time to resolution was less than 7 months.
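(To make the comparison concrete, the sketch below works through the back-of-the-envelope arithmetic. The class-settlement inputs are hypothetical round numbers chosen only to illustrate the calculation — they happen to reproduce the roughly $23 figure discussed above — while the arbitration numbers are those quoted from the Searle Center study.)

```python
# Illustrative per-claimant comparison: class settlement vs. arbitration.
# The settlement fund and class size are HYPOTHETICAL round numbers used
# only to show the arithmetic behind a per-member figure like the ~$23
# discussed above; the arbitration figures are from the Searle Center
# study as quoted in the text.

settlement_fund = 23_000_000   # hypothetical total promised relief ($)
class_size = 1_000_000         # hypothetical number of class members
months_to_settle = 24          # ~2 years, per the CFPB study

per_member = settlement_fund / class_size
print(f"Class action: ~${per_member:.2f} per member if all claim, "
      f"~{months_to_settle} months to settle")

avg_award = 20_000             # average consumer arbitration award ($)
months_to_resolve = 7          # average resolution time (just under)
print(f"Arbitration:  ~${avg_award:,} average award, "
      f"under {months_to_resolve} months to resolve")
```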
To be sure, class action litigation has been an important part of our system of justice. But, as Arthur Miller — a legal pioneer who helped author the rules that make class actions viable — himself acknowledged, they are hardly a panacea:
I believe that in the 50 years we have had this rule, that there are certain class actions that never should have been brought, admitted; that we have burdened our judiciary, yes. But we’ve had a lot of good stuff done. We really have.
The good that has been done, according to Professor Miller, relates in large part to the civil rights violations of the 1950s and ’60s, which the class action rules were designed to mitigate:
Dozens and dozens and dozens of communities were desegregated because of the class action. You even see desegregation decisions in my old town of Boston where they desegregated the school system. That was because of a class action.
It’s hard to see how Franken and Clyburn’s concern for redress of “a mysterious 99-cent fee… appearing on your broadband bill” really comes anywhere close to the civil rights violations that spawned the class action rules. Particularly given the increasingly pervasive role of the FCC, FTC, and other consumer protection agencies in addressing and deterring consumer harms (to say nothing of arbitration itself), it is manifestly unclear why costly, protracted litigation that infrequently benefits anyone other than trial attorneys should be deemed so essential.
“Empowering the 21st century [trial attorney]”
Nevertheless, Commissioner Clyburn and Senator Franken echo the privacy NPRM’s faulty concerns about arbitration clauses that restrict consumers’ ability to litigate in court:
If you’re prohibited from using our legal system to get justice when you’re wronged, what’s to protect you from being wronged in the first place?
Well, what do they think the FCC is — chopped liver?
Hardly. In fact, it’s a little surprising to see Commissioner Clyburn (who sits on a Commission that proudly proclaims that “[p]rotecting consumers is part of [its] DNA”) and Senator Franken (among Congress’ most vocal proponents of the FCC’s claimed consumer protection mission) asserting that the only protection for consumers from ISPs’ supposed depredations is the cumbersome litigation process.
In fact, of course, the FCC has claimed for itself the mantle of consumer protector, aimed at “Empowering the 21st Century Consumer.” But nowhere does the agency identify “promoting and preserving the rights of consumers to litigate” among its tools of consumer empowerment (nor should it). There is more than a bit of irony in a federal regulator — a commissioner of an agency charged with making sure, among other things, that corporations comply with the law — claiming that, without class actions, consumers are powerless in the face of bad corporate conduct.
Moreover, even if it were true (it’s not) that arbitration clauses tend to restrict redress of consumer complaints, effective consumer protection would still not necessarily be furthered by banning such clauses in the Commission’s new privacy rules.
The FCC’s contemplated privacy regulations are poised to introduce a wholly new and untested regulatory regime with (at best) uncertain consequences for consumers. Given the risk of consumer harm resulting from the imposition of this new regime, as well as the corollary risk of its excessive enforcement by complainants seeking to test or push the boundaries of new rules, an agency truly concerned with consumer protection would tread carefully. Perhaps, if the rules were enacted without an arbitration ban, it would turn out that companies would mandate arbitration (though this result is by no means certain, of course). And perhaps arbitration and agency enforcement alone would turn out to be insufficient to effectively enforce the rules. But given the very real costs to consumers of excessive, frivolous or potentially abusive litigation, cabining the litigation risk somewhat — even if at first it meant the regime were tilted slightly too much against enforcement — would be the sensible, cautious and pro-consumer place to start.
Whether rooted in a desire to “protect” consumers or not, the FCC’s adoption of a rule prohibiting mandatory arbitration clauses to address privacy complaints in ISP consumer service agreements would impermissibly contravene the FAA. As the Court has made clear, such a provision would “‘stand as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress’ embodied in the Federal Arbitration Act.” And not only would such a rule tend to clog the courts in contravention of the FAA’s objectives, it would do so without apparent benefit to consumers. Even if such a rule wouldn’t effectively be invalidated by the FAA, the Commission should firmly reject it anyway: A rule that operates primarily to enrich class action attorneys at the expense of their clients has no place in an agency charged with protecting the public interest.
Copyright law, ever a sore point in some quarters, has found a new field of battle in the FCC’s recent set-top box proposal. At the request of members of Congress, the Copyright Office recently wrote a rather thorough letter outlining its view of the likely effects of the FCC’s proposal on rightsholders.
In sum, the Copyright Office’s letter was an even-handed look at the proposal that concluded:
As a threshold matter, it seems critical that any revised proposal respect the authority of creators to manage the exploitation of their copyrighted works through private licensing arrangements, because regulatory actions that undermine such arrangements would be inconsistent with the rights granted under the Copyright Act.
This fairly uncontroversial statement of basic legal principle was met with cries of alarm. And Stanford’s CIS had a post from Affiliated Scholar Annemarie Bridy that managed to trot out breathless comparisons to inapposite legal theories while simultaneously misconstruing the “fair use” doctrine (as well as how copyright law works in the video market, for that matter).
Look out! Lochner is coming!
In its letter the Copyright Office warned the FCC that its proposed rules have the potential to disrupt the web of contracts that underlie cable programming, and by extension, risk infringing the rights of copyright holders to commercially exploit their property. This analysis actually tracks what Geoff Manne and I wrote in both our initial comment and our reply comment to the set-top box proposal.
Yet Professor Bridy seems to believe that, notwithstanding the guarantees of both the Constitution and Section 106 of the Copyright Act, the FCC should have the power to abrogate licensing contracts between rightsholders and third parties. She believes that
[t]he Office’s view is essentially that the Copyright Act gives right holders not only the limited range of rights enumerated in Section 106 (i.e., reproduction, preparation of derivative works, distribution, public display, and public performance), but also a much broader and more amorphous right to “manage the commercial exploitation” of copyrighted works in whatever ways they see fit and can accomplish in the marketplace, without any regulatory interference from the government.
What in the world does this even mean? A necessary logical corollary of the Section 106 rights includes the right to exploit works commercially as rightsholders see fit. Otherwise, what could it possibly mean to have the right to control the reproduction or distribution of a work? The truth is that Section 106 sets out a general set of rights that inhere in rightsholders with respect to their protected works, and that commercial exploitation is merely a subset of this total bundle of rights.
The ability to contract with other parties over these rights is also a necessary corollary of the property rights recognized in Section 106. After all, the right to exclude implies by necessity the right to include. Which is exactly what a licensing arrangement is.
But wait, there’s more — she actually managed to pull out the Lochner bogeyman to validate her argument!
The Office’s absolutist logic concerning freedom of contract in the copyright licensing domain is reminiscent of the Supreme Court’s now-infamous reasoning in Lochner v. New York, a 1905 case that invalidated a state law limiting maximum working hours for bakers on the ground that it violated employer-employee freedom of contract. The Court in Lochner deprived the government of the ability to provide basic protections for workers in a labor environment that subjected them to unhealthful and unsafe conditions. As Julie Cohen describes it, “‘Lochner’ has become an epithet used to characterize an outmoded, over-narrow way of thinking about state and federal economic regulation; it goes without saying that hardly anybody takes the doctrine it represents seriously.”
This is quite a leap of logic, as there is precious little in common between the letter from the Copyright Office and the Lochner opinion aside from the fact that both contain the word “contracts” in their pages. Perhaps the most critical problem with Professor Bridy’s analogy is the fact that Lochner was about a legislature interacting with the common law system of contract, whereas the FCC is a body subordinate to Congress, and IP is both constitutionally and statutorily guaranteed. A sovereign may be entitled to interfere with the operation of common law, but an administrative agency does not have the same sort of legal status as a legislature when redefining general legal rights.
The key argument that Professor Bridy offered in support of her belief that the FCC should be free to abrogate contracts at will is that “[r]egulatory limits on private bargains may come in the form of antitrust laws or telecommunications laws or, as here, telecommunications regulations that further antitrust ends.” However, this completely misunderstands U.S. constitutional doctrine.
In particular, as Geoff Manne and I discussed in our set-top box comments to the FCC, using one constitutional clause to end-run another constitutional clause is generally a no-no:
Regardless of whether or how well the rules effect the purpose of Sec. 629, copyright violations cannot be justified by recourse to the Communications Act. Provisions of the Communications Act — enacted under Congress’s Commerce Clause power — cannot be used to create an end run around limitations imposed by the Copyright Act under the Constitution’s Copyright Clause. “Congress cannot evade the limits of one clause of the Constitution by resort to another,” and thus neither can an agency acting within the scope of power delegated to it by Congress. Establishing a regulatory scheme under the Communications Act whereby compliance by regulated parties forces them to violate content creators’ copyrights is plainly unconstitutional.
Congress is of course free to establish the implementation of the Copyright Act as it sees fit. However, unless Congress itself acts to change that implementation, the FCC — or any other party — is not at liberty to interfere with rightsholders’ constitutionally guaranteed rights.
You Have to Break the Law Before You Raise a Defense
Another bone of contention upon which Professor Bridy gnaws is a concern that licensing contracts will abrogate an alleged right to “fair use” by making the defense harder to muster:
One of the more troubling aspects of the Copyright Office’s letter is the length to which it goes to assert that right holders must be free in their licensing agreements with MVPDs to bargain away the public’s fair use rights… Of course, the right of consumers to time-shift video programming for personal use has been enshrined in law since Sony v. Universal in 1984. There’s no uncertainty about that particular fair use question—none at all.
The major problem with this reasoning (notwithstanding the somewhat misleading drafting of Section 107) is that “fair use” is not an affirmative right; it is an affirmative defense. Despite claims that “fair use” is a right, the Supreme Court has noted on at least two separate occasions (1, 2) that Section 107 was “structured… [as]… an affirmative defense requiring a case-by-case analysis.”
Moreover, important as the Sony case is, it does not establish that “[t]here’s no uncertainty about [time-shifting as a] fair use question—none at all.” What it actually establishes is that, given the facts of that case, time-shifting was a fair use. Not for nothing does the Sony Court note at the outset of its opinion that
An explanation of our rejection of respondents’ unprecedented attempt to impose copyright liability upon the distributors of copying equipment requires a quite detailed recitation of the findings of the District Court.
But more generally, the Sony doctrine stands for the proposition that:
“The limited scope of the copyright holder’s statutory monopoly, like the limited copyright duration required by the Constitution, reflects a balance of competing claims upon the public interest: creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts. The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’ Fox Film Corp. v. Doyal, 286 U. S. 123, 127. See Kendall v. Winsor, 21 How. 322, 327-328; Grant v. Raymond, 6 Pet. 218, 241-242. When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 156 (1975) (footnotes omitted).
In other words, courts must balance competing interests to maximize “the general benefits derived by the public,” subject to technological change and other criteria that might shift that balance in any particular case.
Thus, even as an affirmative defense, nothing is guaranteed. The court will have to walk through a balancing test, and only after that point, and if the accused party’s behavior has not tipped the scales against herself, will the court find the use a “fair use.”
The Lenz case was an interesting one because, despite the above-noted Supreme Court precedent treating “fair use” as a defense, it is one of the very few cases that have held “fair use” to be an affirmative right (in that case, the court decided that Section 512 of the DMCA required consideration of “fair use” as a part of filling out a take-down notice). And in doing so, it too tried to rely on Sony to restructure the nature of “fair use.” But as I have previously written, “[i]t bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas Acuff Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.”
Thus, to say that rightsholders’ licensing contracts somehow impinge a “right” of fair use completely puts the cart before the horse. Remember, as an affirmative defense, “fair use” is an excuse for otherwise infringing behavior, and rightsholders are well within their constitutional and statutory rights to avoid potential infringing uses.
Think about it this way. When you commit a crime you can raise a defense: for instance, an insanity defense. But just because you might be excused for committing a crime if a court finds you were not operating with full faculties, this does not entitle every insane person to go out and commit that crime. The insanity defense can be raised only after a crime is committed, and at that point it will be examined by a judge and jury to determine if applying the defense furthers the overall criminal law scheme.
“Fair use” works in exactly the same manner. And even though Sony described how time- and space-shifting were potentially permissible, it did so only by determining on those facts that the balancing test came out to allow it. So, maybe a particular time-shifting use would be “fair use.” But maybe not. More likely, in this case, even the allegedly well-established “fair use” of time-shifting in the context of today’s digital media, on-demand programming, Netflix and the like may not meet that burden.
And what this means is that a rightsholder does not have an ex ante obligation to consider whether a particular contractual clause might in some fashion or other give rise to a “fair use” defense.
The contrary point of view makes no sense. Because “fair use” is a defense, forcing parties to build “fair use” considerations into their contractual negotiations essentially requires them to build in an allowance for infringement — and one that a court might or might not ever find appropriate in light of the requisite balancing of interests. That just can’t be right.
Instead, I think this article is just a piece of the larger IP-skeptic movement. I suspect that when “fair use” was in its initial stages of development, it was intended as a fairly gentle softening of the limits of intellectual property — something like the “public necessity” doctrine in common law with respect to real property and trespass. However, that is just not how “fair use” advocates see it today. As Geoff Manne has noted, the idea of “permissionless innovation” has wrongly come to mean “no contracts required (or permitted)”:
[Permissionless innovation] is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright…, which otherwise requires those pernicious licenses (i.e., permission) from others.
But this position is nonsense — intangible property is still property. And at root, property is just a set of legal relations between persons that defines their rights and obligations with respect to some “thing.” It doesn’t matter if you can hold that thing in your hand or not. As property, IP can be subject to transfer and control through voluntarily created contracts.
Even if “fair use” were some sort of as-yet unknown fundamental right, it would still be subject to limitations upon it by other rights and obligations. To claim that “fair use” should somehow trump the right of a property holder to dispose of the property as she wishes is completely at odds with our legal system.
On Friday the International Center for Law & Economics filed comments with the FCC in response to Chairman Wheeler’s NPRM (proposed rules) to “unlock” the MVPD (i.e., cable and satellite subscription video, essentially) set-top box market. Plenty has been written on the proposed rulemaking—for a few quick hits (among many others) see, e.g., Richard Bennett, Glenn Manishin, Larry Downes, Stuart Brotman, Scott Wallsten, and me—so I’ll dispense with the background and focus on the key points we make in our comments.
Our comments explain that the proposal’s assertion that the MVPD set-top box market isn’t competitive is a product of its failure to appreciate the dynamics of the market (and its disregard for economics). Similarly, the proposal fails to acknowledge the complexity of the markets it intends to regulate, and, in particular, it ignores the harmful effects on content production and distribution the rules would likely bring about.
American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.
Of course, much of this competition comes from outside the MVPD market, strictly speaking—most notably from online video distributors (OVDs) like Netflix. It’s indisputable that the statute directs the FCC to address the MVPD market and the MVPD set-top box market. But addressing competition in those markets doesn’t mean you simply disregard the world outside those markets.
The competitiveness of a market isn’t solely a function of the number of competitors in the market. Even relatively constrained markets like these can be “fully competitive” with only a few competing firms—as is the case in every market in which MVPDs operate (all of which are presumed by the Commission to be subject to “effective competition”).
The truly troubling thing, however, is that the FCC knows that MVPDs compete with OVDs, and thus that the competitiveness of the “MVPD market” (and the “MVPD set-top box market”) isn’t solely a matter of direct, head-to-head MVPD competition.
How do we know that? As I’ve recounted before, in a recent speech FCC General Counsel Jonathan Sallet approvingly explained that Commission staff recommended rejecting the Comcast/Time Warner Cable merger precisely because of the alleged threat it posed to OVD competitors. In essence, Sallet argued that Comcast sought to undertake a $45 billion merger primarily—if not solely—in order to ameliorate the competitive threat to its subscription video services from OVDs:
Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively.…
Thus, at least when it suits it, the Chairman’s office appears not only to believe that this competitive threat is real, but also that Comcast, once the largest MVPD in the country, believes so strongly that the OVD competitive threat is real that it was willing to pay $45 billion for a mere “increased ability” to limit it.
And now the FCC has approved the Charter/Time Warner Cable merger, imposing conditions that, according to Wheeler,
focus on removing unfair barriers to video competition. First, New Charter will not be permitted to charge usage-based prices or impose data caps. Second, New Charter will be prohibited from charging interconnection fees, including to online video providers, which deliver large volumes of internet traffic to broadband customers. Additionally, the Department of Justice’s settlement with Charter both outlaws video programming terms that could harm OVDs and protects OVDs from retaliation—an outcome fully supported by the order I have circulated today.
If MVPDs and OVDs don’t compete, why would such terms be necessary? And even if the threat is merely potential competition, as we note in our comments (citing to this, among other things),
particularly in markets characterized by the sorts of technological change present in video markets, potential competition can operate as effectively as—or even more effectively than—actual competition to generate competitive market conditions.
Moreover, the proposal asserts that the “market” for MVPD set-top boxes isn’t competitive because “consumers have few alternatives to leasing set-top boxes from their MVPDs, and the vast majority of MVPD subscribers lease boxes from their MVPD.”
But the MVPD set-top box market is an aftermarket—a secondary market; no one buys set-top boxes without first buying MVPD service—and the two are always or almost always purchased at the same time. As Ben Klein and many others have shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive.
Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.
The competitiveness of the MVPD market in which the antecedent choice of provider is made incorporates consumers’ preferences regarding set-top boxes, and makes the secondary market competitive. If one MVPD were to overcharge for its boxes, rivals could win subscribers by offering a cheaper overall package of service plus box; the total package price is thus disciplined even where few firms compete directly in boxes.
The proposal’s superficial and erroneous claim that the set-top box market isn’t competitive thus reflects bad economics, not competitive reality.
But it gets worse. The NPRM doesn’t actually deny the importance of OVDs and app-based competitors wholesale — it only does so when convenient. As we note in our Comments:
The irony is that the NPRM seeks to give a leg up to non-MVPD distribution services in order to promote competition with MVPDs, while simultaneously denying that such competition exists… In order to avoid triggering [Section 629’s sunset provision,] the Commission is forced to pretend that we still live in the world of Blockbuster rentals and analog cable. It must ignore the Netflix behind the curtain—ignore the utter wealth of video choices available to consumers—and focus on the fact that a consumer might have a remote for an Apple TV sitting next to her Xfinity remote.
“Yes, but you’re aware that there’s an invention called television, and on that invention they show shows?” — Jules Winnfield
The NPRM proposes to create a world in which all of the content that MVPDs license from programmers, and all of their own additional services, must be provided to third-party device manufacturers under a zero-rate compulsory license. Apart from the complete absence of statutory authority to mandate such a thing (or, I should say, apart from statutory language specifically prohibiting such a thing), the proposed rules run roughshod over the copyrights and negotiated contract rights of content providers:
The current rulemaking represents an overt assault on the web of contracts that makes content generation and distribution possible… The rules would create a new class of intermediaries lacking contractual privity with content providers (or MVPDs), and would therefore force MVPDs to bear the unpredictable consequences of providing licensed content to third-parties without actual contracts to govern those licenses…
Because such nullification of license terms interferes with content owners’ right “to do and to authorize” their distribution and performance rights, the rules may facially violate copyright law… [Moreover,] the web of contracts that support the creation and distribution of content are complicated, extensively negotiated, and subject to destabilization. Abrogating the parties’ use of the various control points that support the financing, creation, and distribution of content would very likely reduce the incentive to invest in new and better content, thereby rolling back the golden age of television that consumers currently enjoy.
You’ll be hard-pressed to find any serious acknowledgement in the NPRM that its rules could have any effect on content providers, apart from this gem:
We do not currently have evidence that regulations are needed to address concerns raised by MVPDs and content providers that competitive navigation solutions will disrupt elements of service presentation (such as agreed-upon channel lineups and neighborhoods), replace or alter advertising, or improperly manipulate content…. We also seek comment on the extent to which copyright law may protect against these concerns, and note that nothing in our proposal will change or affect content creators’ rights or remedies under copyright law.
The Commission can’t rely on copyright to protect against these concerns, at least not without admitting that the rules require MVPDs to violate copyright law and to breach their contracts. And in fact, although it doesn’t acknowledge it, the NPRM does require the abrogation of content owners’ rights embedded in licenses negotiated with MVPD distributors to the extent that they conflict with the terms of the rule (which many of them must).
“You keep using that word. I do not think it means what you think it means.” — Inigo Montoya
Finally, the NPRM derives its claimed authority for these rules from an interpretation of the relevant statute (Section 629 of the Communications Act) that is absurdly unreasonable. That provision requires the FCC to enact rules to assure the “commercial availability” of set-top boxes from MVPD-unaffiliated vendors. According to the NPRM,
we cannot assure a commercial market for devices… unless companies unaffiliated with an MVPD are able to offer innovative user interfaces and functionality to consumers wishing to access that multichannel video programming.
This baldly misconstrues a term plainly meant to refer to the manner in which consumers obtain their navigation devices, not how those devices should function. It also contradicts the Commission’s own, prior readings of the statute:
As structured, the rules will place a regulatory thumb on the scale in favor of third-parties and to the detriment of MVPDs and programmers…. [But] Congress explicitly rejected language that would have required unbundling of MVPDs’ content and services in order to promote other distribution services…. Where Congress rejected language that would have favored non-MVPD services, the Commission selectively interprets the language Congress did employ in order to accomplish exactly what Congress rejected.
And despite the above-noted problems (and more), the Commission has failed to do even a cursory economic evaluation of the relative costs of the NPRM, instead focusing narrowly on a single benefit it believes might occur (wider distribution of set-top boxes from third parties) despite the consistent failure of similar FCC efforts in the past.
All of the foregoing leads to a final question: At what point do the costs of these rules finally outweigh the perceived benefits? On the one hand are legal questions of infringement, inducements to violate agreements, and disruptions of complex contractual ecosystems supporting content creation. On the other hand is the presence of more boxes and apps that allow users to choose who gets to draw the UI for their video content…. At some point the Commission needs to take seriously the costs of its actions, and determine whether the public interest is really served by the proposed rules.
As ICLE argued in its amicus brief, the Second Circuit’s ruling in United States v. Apple Inc. is in direct conflict with the Supreme Court’s 2007 Leegin decision, and creates a circuit split with the Third Circuit based on that court’s Toledo Mack ruling. Moreover, the negative consequences of the court’s ruling will be particularly acute for modern, high-technology sectors of the economy, where entrepreneurs planning to deploy new business models will now face exactly the sort of artificial deterrents that the Court condemned in Trinko:
Mistaken inferences and the resulting false condemnations are especially costly, because they chill the very conduct the antitrust laws are designed to protect.
Absent review by the Supreme Court to correct the Second Circuit’s error, the result will be less-vigorous competition and a reduction in consumer welfare. The Court should grant certiorari.
The Second Circuit committed a number of important errors in its ruling.
First, as the Supreme Court held in Leegin, condemnation under the per se rule is appropriate
only for conduct that would always or almost always tend to restrict competition… [and] only after courts have had considerable experience with the type of restraint at issue.
Neither is true in this case. The use of MFNs in Apple’s contracts with the publishers and its adoption of the so-called “agency model” for e-book pricing have never been reviewed by the courts in a setting like this one, let alone found to “always or almost always tend to restrict competition.” There is no support in the case law or economic literature for the proposition that agency models or MFNs used to facilitate entry by new competitors in platform markets like this one are anticompetitive.
Second, the court of appeals emphasized that in some cases e-book prices increased after Apple’s entry, and it viewed that fact as strong support for application of the per se rule. But the Court in Leegin made clear that the per se rule is inappropriate where, as here, “prices can be increased in the course of promoting procompetitive effects.”
What the Second Circuit missed is that competition occurs on many planes other than price; higher prices do not necessarily suggest decreased competition or anticompetitive effects. As Josh Wright points out:
[T]he multi-dimensional nature of competition implies that antitrust analysis seeking to maximize consumer or total welfare must inevitably calculate welfare tradeoffs when innovation and price effects run in opposite directions.
Higher prices may accompany welfare-enhancing “competition on the merits,” resulting in greater investment in product quality, reputation, innovation, or distribution mechanisms.
While the court acknowledged that “[n]o court can presume to know the proper price of an ebook,” its analysis nevertheless rested on the presumption that Amazon’s prices before Apple’s entry were competitive. The record, however, offered no support for that presumption, and thus no support for the inference that post-entry price increases were anticompetitive.
[P]roof that a restraint alters price or output when compared to the status quo ante is at least equally consistent with an alternative explanation, namely, that the agreement under scrutiny corrects a market failure and does not involve the exercise or creation of market power. Because such failures can result in prices that are below the optimum, or output that is above it, contracts that correct or attenuate market failure will often increase prices or reduce output when compared to the status quo ante. As a result, proof that such a restraint alters price or other terms of trade is at least equally consistent with a procompetitive explanation, and thus cannot give rise to a prima facie case under settled antitrust doctrine.
Before Apple’s entry, Amazon controlled 90% of the e-books market, and the publishers had for years been unable to muster sufficient bargaining power to renegotiate the terms of their contracts with Amazon. At the same time, Amazon’s pricing strategies as a nascent platform developer in a burgeoning market (that it was, in practical effect, trying to create) likely did not always produce prices that would be optimal under evolving market conditions as the market matured. The fact that prices may have increased following the alleged anticompetitive conduct cannot support an inference that the conduct was anticompetitive.
Third, the Second Circuit also made a mistake in dismissing Apple’s defenses. The court asserted that
this defense — that higher prices enable more competitors to enter a market — is no justification for a horizontal price‐fixing conspiracy.
But the court is incorrect. As Bill Kolasky points out in his post, it is well-accepted that otherwise-illegal agreements that are ancillary to a procompetitive transaction should be evaluated under the rule of reason.
It was not that Apple couldn’t enter unless Amazon’s prices (and its own) were increased. Rather, the contention made by Apple was that it could not enter unless it was able to attract a critical mass of publishers to its platform – a task which required some sharing of information among the publishers – and unless it was able to ensure that Amazon would not artificially lower its prices to such an extent that it would prevent Apple from attracting a critical mass of readers to its platform. The MFN and the agency model were thus ancillary restraints that facilitated the transactions between Apple and the publishers and between Apple and iPad purchasers. In this regard they are appropriately judged under the rule of reason and, under the rule of reason, offer a valid procompetitive justification for the restraints.
And it was the fact of Apple’s entry, not the use of vertical restraints in its contracts, that enabled the publishers to wield the bargaining power sufficient to move Amazon to the agency model. The court itself noted that the introduction of the iPad and iBookstore “gave publishers more leverage to negotiate for alternative sales models or different pricing.” And as Ben Klein noted at trial,
Apple’s entry probably gave the publishers an increased ability to threaten [Amazon sufficiently that it accepted the agency model]…. The MFN [made] a trivial change in the publishers’ incentives…. The big change that occurs is the change on the other side of the bargaining situation after Apple comes in where Amazon now cannot just tell them no.
Fourth, the purpose of applying the per se rule is to root out activities that always or almost always harm competition. Although it’s possible that a horizontal agreement that facilitates entry and increases competition could be subject to the per se rule, in this case its application was inappropriate. The novelty of Apple’s arrangement with the publishers, coupled with the weakness of the proof of any sort of actual price fixing, fails to meet even a minimal threshold that would require application of the per se rule.
Not all horizontal arrangements are per se illegal. If an arrangement is relatively novel, facilitates entry, and is patently different from naked price fixing, it should be reviewed under the rule of reason. See BMI. All of those conditions are met here.
The conduct of the publishers – distinct from their agreements with Apple – in seeking some manner of changing their contracts with Amazon is not itself price fixing, either. The prices themselves would be set only subsequent to whatever new contracts were adopted. At worst, the conduct of the publishers in working toward new contracts with Amazon can be characterized as a facilitating practice.
But even then, the precedent of the Court counsels against applying the per se rule to facilitating practices such as the mere dissemination of price information or, as in this case, information regarding the parties’ preferred, bilateral, contractual relationships. As the Second Circuit itself once held, following the Supreme Court,
[the] exchange of information is not illegal per se, but can be found unlawful under a rule of reason analysis.
In other words, even the behavior of the publishers should be analyzed under a rule of reason – and Apple’s conduct in facilitating that behavior cannot be imbued with complicity in a price-fixing scheme that may not have existed at all.
Fifth, in order for conduct to “eliminate price competition,” there must be price competition to begin with. But as the district court itself noted, the publishers do not compete on price. This point is oft-overlooked in discussions of the case. It is perhaps possible to say that the contract terms at issue and the publishers’ pressure on Amazon affected price competition between Apple and Amazon – but even then it cannot be said to have reduced competition, because, absent Apple’s entry, there was no competition at all between Apple and Amazon.
It’s true that, if all Apple’s entry did was to transfer identical e-book sales from Amazon to Apple, at higher prices and therefore lower output, it might be difficult to argue that Apple’s entry was procompetitive. But the myopic focus on e-book titles without consideration of product differentiation is mistaken, as well.
The relevant competition here is between Apple and Amazon at the platform level. As explained above, it is misleading to look solely at prices in evaluating the market’s competitiveness. Provided that switching costs are low enough and information about the platforms is available to consumers, consumer welfare may have been enhanced by competition between the platforms on a range of non-price dimensions, including, for example: the Apple iBookstore’s distinctive design, Apple’s proprietary file format, features on Apple’s iPad that were unavailable on Kindle Readers, Apple’s use of a range of marketing incentives unavailable to Amazon, and Apple’s algorithmic matching between its data and consumers’ e-book purchases.
While it’s difficult to disentangle Apple’s entry from other determinants of consumers’ demand for e-books, and even harder to establish with certainty the “but-for” world, it is nonetheless telling that the e-book market has expanded significantly since Apple’s entry, and that purchases of both iPads and Kindles have increased, as well.
There is, in other words, no clear evidence that consumers viewed the two products as perfect substitutes, and thus there is no evidence that Apple’s entry merely caused a non-welfare-enhancing substitution from Amazon to Apple. At minimum, there is no basis for treating the contract terms that facilitated Apple’s entry under a per se standard.
The point, in sum, is that there is in fact substantial evidence that Apple’s entry was pro-competitive, that there was no price-fixing scheme of which Apple was a part, and absolutely no evidence that the vertical restraints at issue in the case were the sort that should presumptively give rise to liability. Not only was application of the per se rule inappropriate, but, to answer Richard Epstein, there is strong evidence that Apple should win under a rule of reason analysis, as well.
In my view, the Second Circuit’s decision in Apple e-Books, if not reversed by the Supreme Court, threatens to undo a half century of progress in reforming antitrust doctrine. In decision after decision, from White Motors through Leegin and Actavis, the Supreme Court has repeatedly held—in cases involving both horizontal and vertical restraints—that the only test for whether an agreement can be found per se unlawful under Section 1 is whether it is “a naked [restraint] of trade with no purpose except stifling competition,” or whether it is instead “ancillary to the legitimate and competitive purposes” of a business association. Dagher. The cases in which the Court has consistently applied this test read like a litany of antitrust decisions we all now study in law school: White Motors, Topco, GTE Sylvania, Professional Engineers, BMI, Maricopa, NCAA, Business Electronics, ARCO, California Dental, Dagher, Leegin, American Needle, and, most recently, Actavis. Significantly, more than two-thirds of these cases involved horizontal, not vertical, restraints.
In these decisions, the Court has also repeatedly warned that this test cannot be applied by simply asking whether the defendants “have literally ‘fixed’ a ‘price,’” or otherwise agreed not to compete. Warning that “[l]iteralness is overly simplistic and often overbroad,” the Court insisted in BMI that courts instead focus on “the effect and, because it tends to show effect…, on the purpose of the practice” to determine whether “the practice facially appears to be one that would always or almost always tend to restrict competition and decrease output… or instead one designed to ‘increase economic efficiency and render markets more, rather than less, competitive.’”
In applying this test the Court has also repeatedly emphasized that a court should classify an alleged restraint—whether horizontal or vertical—as per se unlawful “only after considerable experience” with the particular restraint at issue. In addition, the Court has repeatedly emphasized that all that is necessary for a restraint to escape per se illegality is that there be a “plausible” procompetitive purpose behind it. See, e.g., Cal Dental; Business Electronics; Northwest Wholesale Stationers.
By focusing so much attention in their cert. papers on whether the agreements between Apple and the publishers should be characterized as “vertical” or “horizontal,” both Apple and the DOJ seem to have lost sight of the fundamental teachings of this long line of Supreme Court decisions—namely, that even if an agreement is horizontal, it can be found to be per se unlawful only if it is a naked agreement that, on its face, serves no purpose other than to restrict competition and restrain output. This is particularly important where, as in this case, the alleged agreements have both horizontal and vertical elements. In such cases, the right question is not whether the agreements can be labeled a “hub-and-spoke conspiracy,” but instead what the nature and purpose of those agreements were.
In this case, the nature of the arrangement between Apple and the publishers by which they all appointed Apple as their common sales agent is not fundamentally different from an agreement among a group of competitors to appoint a joint sales agent. While such an arrangement can, in some circumstances, be used to facilitate cartel behavior, it can also serve legitimate pro-competitive purposes by enabling those competitors to market their goods or services more efficiently. The courts and antitrust enforcement agencies have, therefore, recognized—ever since the Supreme Court’s decision in Appalachian Coals—that these joint sales arrangements must generally be evaluated under the rule of reason and cannot in all instances be condemned as per se unlawful. See, e.g., FTC/DOJ, Competitor Collaboration Guidelines. (For those of you who remember the criticisms that used to be directed at that decision by your antitrust professor in law school, I urge you to read Sheldon Kimmel’s excellent revisionist article, How and Why the Per Se Rule Against Price Fixing Went Wrong, showing that the Court’s holding was perfectly consistent with its more recent rulings in BMI and its progeny.)
Viewing this as an agreement among the publishers to appoint Apple as their common sales agent might have helped the lower courts to focus on what should have been the key issues in the case. The first is whether the agency arrangement was a “naked” agreement to “restrict competition and decrease output,” or could “plausibly” have been intended to serve other legitimate pro-competitive business purposes. The second is whether, if so, the restraints that were part of this arrangement—such as price caps and most-favored nation clauses—were ancillary to those legitimate purposes.
Based on the record as I read it, it appears to me that the answers to these two questions are obvious, and that they compel the conclusion that this common sales agent arrangement could not be classified as per se unlawful, but would need to be evaluated under a full-blown rule of reason analysis. Let me address each issue in turn.
Was the common sales agent arrangement between Apple and the five publishers a naked agreement to fix prices and restrict output?
Neither the lower courts nor the parties in their cert. papers address this key issue in any detail, choosing instead to spend page after page debating whether the agreement between Apple and the publishers was horizontal or vertical. Fortunately, the amicus briefs that were filed in support of Apple’s cert. petition by ICLE and by a group of antitrust economists do address the issue at considerable length.
Those briefs make a convincing argument that the common sales agent arrangements between the publishers and Apple were designed to serve at least two pro-competitive purposes. The first was to introduce greater competition into the downstream market for the distribution of e-books by ending Amazon’s below-cost pricing of e-books at the retail level. The second was to give the publishers greater control over the downstream pricing of their e-books in order to prevent below-cost pricing of e-books from cannibalizing the sales of their print books.
The common sales agent arrangement served to introduce more competition into the downstream market for the distribution of e-books
This one is easy. No one disputes that before Apple entered, Amazon dominated the downstream market for e-books with a 90% market share, giving it a virtual monopoly. Hopefully, few, if any, would dispute that Amazon’s loss-leader strategy of selling e-books at well below cost served to entrench its near monopoly position in that market. It is easy to understand why publishers of e-books would not want to allow Amazon’s monopoly to continue, leaving them with only a sole distributor for their products.
The record below makes it clear that Apple did not believe it could profitably enter the e-book market so long as Amazon continued to maintain its first-mover advantage by selling e-books below cost. Apple and the publishers therefore had a common interest in moving from the existing wholesale model of e-book distribution to a new agency model under which the publishers, not Amazon, would control the retail pricing of e-books and could set those prices at a level that would enable other competitors, such as Apple, to enter. That seems pro-competitive to me.
The record also makes it clear that this objective could not be accomplished through a simple vertical agency agreement between Apple and one or two individual publishers. In order to enter successfully, Apple needed a critical mass of titles, which it could have only by securing the agreement of most of the leading publishers to appoint it as their common sales agent. Apple, therefore, had a legitimate pro-competitive business reason to facilitate—or, as the Second Circuit charged, “orchestrate”—agreements among the publishers to switch to an agency model and to appoint Apple as their common non-exclusive agent for the sale of their e-books.
The common sales agent arrangement gave the publishers control over the retail prices of e-books, protecting them from harms to their businesses that could otherwise be caused by below-cost pricing by a single dominant retailer.
The Second Circuit and DOJ both make much of the fact that the publishers wanted to control the retail prices of e-books in order to raise those prices above the level set by Amazon’s loss-leader pricing strategy. They both seem to believe that this alone is enough to characterize their conduct as a “naked price fixing scheme.” But it is not. As the Supreme Court held in Leegin, resale price maintenance can be pro-competitive even if it leads to higher prices, so long as it is designed to promote competition by creating a more efficient and competitive distribution system.
As Areeda and Hovenkamp teach in their treatise, Fundamentals of Antitrust Law, the same principle applies to agreements among a group of horizontal competitors to appoint a single sales agent. Those competitors will frequently “have to agree with each other that they will not accept less than a certain minimum price, or sometimes may even have to agree on the entire price schedule,” and these prices may sometimes be higher than the prices at which they were previously selling the products individually. See Areeda & Hovenkamp (2015 Supp.), at 19:31-32. But even if such agreements result in an increase in price, Areeda and Hovenkamp argue that they should not be found illegal if the effect on output is positive. Their argument is supported by the language in BMI, in which the Court focused on the effect of a restraint on output, not price, in describing what was necessary to classify an alleged restraint as a per se illegal naked price-fixing agreement.
Here, although the district court found that prices went up and output went down in the short run after the publishers switched from their wholesale model to an agency model, these immediate, short-term effects do not rule out the possibility that the switch to the new agency model would, over the long term, have resulted in an increase in output. DOJ concedes that since Apple’s entry, e-book sales have grown exponentially, but speculates that this growth might have occurred even if Amazon had continued to maintain its monopoly position in the retail sale of e-books. As someone who reads e-books on my iPad, I doubt that, but this is the type of issue that can only be resolved through a full rule-of-reason analysis, not through the application of a conclusive presumption of illegality under the per se doctrine.
Here, as the amicus briefs argue, there are several ways Amazon’s loss-leader pricing strategy could have depressed the output of both e-books and print books over the long term. First, of course, once its monopoly was fully entrenched, Amazon could have sought to recoup its losses by raising its e-book prices above a competitive level. Second, if instead Amazon continued to cannibalize print sales through below-cost e-book pricing, publishers might have been forced to reduce the royalties they pay authors, giving those authors less reason to continue writing, thus reducing the output of all books. Again, these are the types of issues that require a full rule of reason analysis, not summary condemnation under the per se doctrine.
Were the price caps and most-favored nation clauses ancillary restraints that may have been reasonably necessary to the legitimate pro-competitive purposes of the common sales agent arrangement?
The ancillary nature of the terms that were included in Apple’s agency agreements with the publishers, and which the publishers may have agreed among themselves to accept, is equally easy to show.
The price caps on which Apple insisted were obviously designed to protect it from opportunistic behavior by the publishers in charging higher prices for their e-books than what Apple felt the market would accept, thereby preventing it from selling a sufficient volume of e-books to make its entry successful. Such opportunistic behavior by the publishers could also have made it harder to convince consumers to buy Apple’s new iPad, the success of which was critical to its future.
The most favored nation clauses on which Apple insisted, and which the publishers may also have agreed among themselves to accept, were likewise arguably necessary to protect Apple from the risk of having to compete against an established competitor offering lower prices than it could, thereby impeding its successful entry and damaging its goodwill with consumers.
In both cases, these are classic and legitimate reasons for ancillary restraints. Whether or not these particular restraints were reasonably necessary to Apple’s successful entry is a question that could only be decided on the basis of a full rule of reason analysis. All that is needed to avoid per se condemnation is that there be a plausible argument that they were, and that, again, should be something that no one could dispute.
* * *
Given the way the case was litigated, I recognize that it may be difficult to introduce at the Supreme Court level a whole new way of looking at the facts of the case. But if the Court does grant cert., I would hope that Apple and the amici supporting it would try to refocus the Court’s attention away from a sterile argument over whether the restraints in question were vertical or horizontal, and to focus it instead on whether they were a “naked” attempt to fix prices and restrict output or were instead ancillary to a pro-competitive business relationship.
The appellate court’s 2015 decision affirming the district court’s finding of per se liability in United States v. Apple provoked controversy over the legal and economic merits of the case, its significance for antitrust jurisprudence, and its implications for entrepreneurs, startups, and other economic actors throughout the economy. Apple has filed a cert petition with the Supreme Court, which will decide on February 19th whether to hear the case.
We’ve lined up an outstanding and diverse group of scholars, practitioners and other experts to participate in the symposium. The full archive of symposium posts can be found at this link, and individual posts can be accessed by clicking on the author’s name below.