On Monday, April 2, I will leave the Heritage Foundation to enter federal government service. Accordingly, today I am signing off as a regular contributor to Truth on the Market. First and foremost, I owe a great debt of gratitude to Geoff Manne, who was kind enough to afford me access to TOTM. Geoff’s outstanding leadership has made TOTM the leading blog site bringing to bear sound law and economics insights on antitrust and related regulatory topics. I was also privileged to have the opportunity to work on an article with TOTM stalwart Thom Lambert, whose concise book How To Regulate is by far the best general resource on sound regulatory principles (it should sit on the desk of the head of every regulatory agency). I have also greatly benefited from the always insightful analyses of fellow TOTM bloggers Allen Gibby, Eric Fruits, Joanna Shepherd, Kristian Stout, Mike Sykuta, and Neil Turkewitz. Thanks to all! I look forward to continuing to seek enlightenment at truthonthemarket.com.
If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.
Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.
He focuses on two factors that contribute to bias in economics research: publication bias and low statistical power. These are complicated topics, so this post aims to provide a simplified explanation of what they are and why bias and power matter.
What is bias?
We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.
In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
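The coin-flip intuition is easy to check with a short simulation (a minimal sketch; the function name and the choice of 100,000 flips are my own illustrative assumptions):

```python
import random

random.seed(42)  # for reproducibility

def estimate_heads_prob(p_true, n_flips):
    """Estimate the probability of heads by flipping a coin n_flips times."""
    heads = sum(1 for _ in range(n_flips) if random.random() < p_true)
    return heads / n_flips

# A fair coin: an honest experimenter's estimate converges to the true 50 percent.
fair_estimate = estimate_heads_prob(0.5, 100_000)

# A bent coin: the same honest procedure yields a result near 75 percent,
# because the instrument -- not the researcher -- is biased.
biased_estimate = estimate_heads_prob(0.75, 100_000)
```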
Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.
Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.
The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).
“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.
On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.
A study with low statistical power has a reduced chance of detecting a true effect.
Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.
An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) is designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.
Failing to convict a known criminal is also a serious error, but generally agreed it’s less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative” and the probability of making a Type II error is beta.
By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be mathematically shown (not here), that a reduction in the probability of Type I error is associated with an increase in Type II error.
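The tradeoff can be sketched with a one-sided z-test using only the standard library (the effect size and sample size below are illustrative assumptions of mine, not numbers from any study):

```python
from statistics import NormalDist

def type_ii_error(alpha, effect, n):
    """Beta for a one-sided z-test of H0: no effect, given a true effect size."""
    z_crit = NormalDist().inv_cdf(1 - alpha)          # the "conviction" threshold
    # Probability the test statistic falls short of the threshold when H1 is true.
    return NormalDist().cdf(z_crit - effect * n ** 0.5)

beta_lenient = type_ii_error(alpha=0.05, effect=0.3, n=25)  # looser burden of proof
beta_strict = type_ii_error(alpha=0.01, effect=0.3, n=25)   # "beyond a reasonable doubt"
# Lowering alpha (fewer wrongful convictions) mechanically raises beta
# (more truly guilty defendants go free).
```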
Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.
In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated coefficient on the elasticity of employment with respect to the minimum wage would be zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
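As a sanity check on the 5 percent figure, one can simulate regressions in which the null hypothesis is true by construction (a stdlib-only sketch; the normal critical value stands in for the t distribution, which is a reasonable approximation at n = 200, and the data are pure fabricated noise):

```python
import random
from statistics import NormalDist

random.seed(0)  # for reproducibility

def slope_t_stat(x, y):
    """t-statistic for the OLS slope of y on x (H0: slope = 0)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # estimated slope
    a = my - b * mx                    # estimated intercept
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    se = (rss / (n - 2) / sxx) ** 0.5  # standard error of the slope
    return b / se

z_crit = NormalDist().inv_cdf(0.975)   # two-sided test at alpha = 0.05
sims, rejections = 2000, 0
for _ in range(sims):
    x = [random.gauss(0, 1) for _ in range(200)]  # "minimum wage" noise
    y = [random.gauss(0, 1) for _ in range(200)]  # unrelated "employment" noise
    if abs(slope_t_stat(x, y)) > z_crit:
        rejections += 1

false_positive_rate = rejections / sims  # hovers near alpha = 0.05
```

Even with no true relationship in the data, roughly 5 percent of the simulated regressions “find” one, which is exactly what setting alpha to 0.05 means.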
Now, we’re getting to power.
Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.
By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent, then out of 100 actual effects, the studies will detect only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only 20 of them guilty.
Suppose we expect that 25 percent of those accused of a crime are truly guilty. Thus the odds of guilt are R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05, and conclude the accused is guilty if our test statistic yields p < 0.05. Using Ioannidis’ formula for positive predictive value, we find:
- If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
- If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.
In other words, a “guilty” verdict from a low power test is more likely to be a wrongful conviction than a “guilty” verdict from a high power test.
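Ioannidis’ positive predictive value formula is simple enough to verify directly: PPV = (power × R) / (power × R + alpha). A minimal sketch (the function name is mine):

```python
def positive_predictive_value(power, alpha, prior_odds):
    """Share of 'positive' findings (guilty verdicts) that reflect a true effect."""
    return (power * prior_odds) / (power * prior_odds + alpha)

R = 0.25 / 0.75   # odds that an accused person is truly guilty

low_power = positive_predictive_value(power=0.20, alpha=0.05, prior_odds=R)
high_power = positive_predictive_value(power=0.80, alpha=0.05, prior_odds=R)
# low_power  -> about 0.57, matching the first bullet above
# high_power -> about 0.84, matching the second
```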
In our minimum wage example, a statistically significant finding from a low power study is more likely to be spurious, reporting a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even when a relationship truly exists, low power studies that detect it will tend to report a bigger impact than high power studies. The figure below demonstrates this phenomenon.
Across the 1,424 studies surveyed, the average elasticity of employment with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate that, among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.
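The “adjusted for precision” figure is just a precision-weighted average. This sketch uses made-up numbers (four hypothetical studies, not the 1,424 actually surveyed) purely to show the mechanics:

```python
def precision_weighted_mean(estimates, precisions):
    """Average the estimates, weighting each by its precision (e.g., 1 / std. error)."""
    total_weight = sum(precisions)
    return sum(e * w for e, w in zip(estimates, precisions)) / total_weight

# Hypothetical elasticities and precisions, for illustration only:
elasticities = [-0.30, -0.19, -0.05, -0.01]
precisions = [2, 5, 120, 260]   # noisy studies get very little weight

simple_mean = sum(elasticities) / len(elasticities)
weighted_mean = precision_weighted_mean(elasticities, precisions)
# The imprecise, sensational estimates dominate the simple mean;
# the precise estimates dominate the weighted mean, pulling it toward zero.
```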
(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)
Is economics bogus?
It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:
Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.
For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:
[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.
Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields (social psychology is among the worst offenders). However, these are longer-term solutions.
In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”
Last week I attended the 17th Annual Conference of the International Competition Network (ICN) held in New Delhi, India from March 21-23. The Delhi Conference highlighted the key role of the ICN in promoting global convergence toward “best practices” in substantive and procedural antitrust analysis by national antitrust (“competition”) agencies. The ICN operates as a virtual network of competition agencies and expert “non-governmental advisers” (NGAs), not governments. As such, the ICN promulgates “recommended practices,” provides online training and other assistance to competition agencies, and serves as a forum for the building of relationships among competition officials (an activity which facilitates cooperation on particular matters and the exchange of advice on questions of antitrust policy and administration). There is a general consensus among competition agencies and NGAs (I am one) that the ICN has accomplished a great deal since its launch in 2001 – indeed, it has far surpassed expectations. Although (not surprisingly) inter-jurisdictional differences in perspective on particular competition issues remain, the ICN has done an excellent job in helping ensure that national competition agencies understand each other as they carry out their responsibilities. By “speaking a common antitrust language,” informed by economic reasoning, agencies are better able to cooperate on individual matters and evaluate the merits of potential changes in law and procedure.
Pre-ICN Program Hosted by Competition Policy International (CPI)
Special one-day programs immediately preceding the ICN have proliferated in recent years. On March 20, I participated in the small group one-day program hosted by Competition Policy International (CPI), attended by senior competition agency officials, private practitioners, and scholars. This program featured a morning roundtable covering problems of extraterritoriality and an afternoon roundtable focused on competition law challenges in the digital economy.
The extraterritoriality session reflected the growing number of competition law matters (particularly cartels and mergers) that have effects in multiple jurisdictions. There appeared to be general support for the proposition that a competition authority should impose remedies that have extraterritorial application only to the extent necessary to remedy harm to competition within the enforcing jurisdiction. There also was a general consensus that it is very difficult for a competition authority to cede enforcement jurisdiction to a foreign authority, when the first authority finds domestic harm attributable to extraterritorial conduct and has the ability to assert jurisdiction. Thus, although efforts to promote comity in antitrust enforcement are worthwhile, it must be recognized that there are practical limitations to such initiatives. As such, a focus on enhancing coordination and cooperation among multiple agencies investigating the same conduct will be of paramount importance.
The digital economy roundtable directed particular attention to enforcement challenges raised by Internet “digital platforms” (e.g., Google, Facebook, Amazon). In particular, with respect to digital platforms, roundtable participants discussed whether new business models and disruptive innovations create challenges to existing competition law and practices; what recent technology changes portend for market definition, assessment of market power, and other antitrust enforcement concepts; whether new analytic tools are required; and what are good mechanisms to harmonize regulation and competition enforcement. Although there was no overall consensus on these topics, there was robust discussion of multi-sided market analysis and differences in approach to digital platform oversight.
An ICN Conference Overview
As in recent years, the ICN Conference itself featured set-piece (no Q&A) plenary sessions involving colloquies among top agency officials regarding cartels, unilateral conduct, mergers, advocacy, and agency effectiveness – the areas covered during the year by the ICN’s specialized working groups. Numerous break-out sessions allowed ICN delegates to discuss in detail particular developments in these areas, and to evaluate and hash out the relative merits of competing approaches to problems. At least seven generalizations can be drawn from the Delhi Conference’s deliberations.
First, other international organizations that initially had kept their distance from the ICN, specifically the OECD, the World Bank, and UNCTAD, now engage actively with the ICN. This is a very positive development indeed. Research carried out by the OECD on competition policy – for example, on the economic evaluation of regulatory approaches (important for competition advocacy), digital platforms, and public tenders – has been injected as “policy inputs” into discrete ICN initiatives. Annual competition advocacy contests cosponsored by the ICN and the World Bank have enabled a large number of agencies (particularly in developing countries) to showcase their successes in helping improve the competitive climate within their jurisdictions. UNCTAD initiatives on competition and economic development can be effectively presented to new competition agencies through ICN involvement.
Second, competition authorities are focusing more intensively on “vertical mergers” involving firms at different levels of the distribution chain. The ICN can help agencies be attentive to the need to weigh procompetitive efficiencies as well as new theories of anticompetitive harm in investigating these mergers.
Third, the transformation of economies worldwide through the Internet and the “digital revolution” is posing new challenges – and opportunities – for enforcers. Policy analysis, informed by economics, is evolving in this area.
Fourth, investigations of cartels and bid rigging (collusion in public tenders was the showcase “special project” at the Delhi Conference) remain as significant as ever. Thinking on the administration of government leniency programs and “ex officio” investigations aimed at ferreting out cartels continues to be refined.
Fifth, the continuing growth in the number and scope of competition laws and the application of those laws to international commerce places a premium on enhanced coordination among competition agencies. The ICN’s role in facilitating such cooperation thus assumes increased importance.
Sixth, issues of due process, or procedural fairness, commendably are generally recognized as important elements of effective agency administration. Nevertheless, the precise contours of due process, and its specific application, are not uniform across agencies, and merit continued exploration by the ICN.
Seventh, the question of whether non-purely economic factors (such as fairness, corporate size, and the rights of workers) should be factored into competition analysis is gaining increased traction in a number of jurisdictions, and undoubtedly will be a subject of considerable debate in the years to come.
The ICN is by now a mature organization. As a virtual network that relies on the power to persuade, not to dictate, it is dynamic, not static. The ICN continues to respond flexibly to the changing needs of its many members and to global economic developments, within the context of the focused work carried out by its various substantive and process-related working groups. The Delhi Conference provided a welcome opportunity for a timely review of its accomplishments and an assessment of its future direction. In short, the ICN remains a highly useful vehicle for welfare-enhancing “soft convergence” among competition law regimes.
The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.
That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.
I Have Seen the Enemy and the Enemy is Us
Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a little deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.
(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)
The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.
There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica; and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections or that when Cambridge Analytica started working for the Trump campaign it was tasked with more mundane work that didn’t make use of this data.
More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.
There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.
Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.
The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchantly, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.
The Badness of the Open Graph Idea
One of the standard responses to concerns about how platforms may delineate and allow users to allocate their privacy interests is, on the one hand, that competition among platforms would promote desirable outcomes and that, on the other hand, the relatively limited and monopolistic competition that we see among firms like Facebook is one of the reasons that consumers today have relatively poor control over their information.
The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.
But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook to either make users’ data exportable in a standardized form so that users could easily migrate it to other platforms or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.
In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.
If there is one lesson from the past decade that is more trenchant than that delineating privacy rights is difficult, it is that data security is even harder.
These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only works to allow trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.
Excess is unflattering, no less when claiming that every evolution in legal doctrine is a slippery slope leading to damnation. In Friday’s New York Times, Lina Khan trots down this alarmist path while considering the implications for the pending Supreme Court case of Ohio v. American Express. One of the core issues in the case is the proper mode of antitrust analysis for credit card networks as two-sided markets. The Second Circuit Court of Appeals agreed with arguments, such as those that we have made, that it is important to consider the costs and benefits to both sides of a two-sided market when conducting an antitrust analysis. The Second Circuit’s opinion is under review in the American Express case.
Khan regards the Second Circuit approach of conducting a complete analysis of these markets as a mistake.
On her reading, the idea that an antitrust analysis of credit card networks should reflect their two-sided-ness would create “de facto antitrust immunity” for all platforms:
If affirmed, the Second Circuit decision would create de facto antitrust immunity for the most powerful companies in the economy. Since internet technologies have enabled the growth of platform companies that serve multiple groups of users, firms like Alphabet, Amazon, Apple, Facebook, and Uber are set to be prime beneficiaries of the Second Circuit’s warped analysis. Amazon, for example, could claim status as a two-sided platform because it connects buyers and sellers of goods; Google because it facilitates a market between advertisers and search users… Indeed, the reason that the tech giants are lining up behind the Second Circuit’s approach is that — if ratified — it would make it vastly more difficult to use antitrust laws against them.
This paragraph is breathtaking. First, its basic premise is wrong. Requiring a complete analysis of the complicated economic effects of conduct undertaken in two-sided markets before imposing antitrust liability would not create “de facto antitrust immunity.” It would require that litigants present, and courts evaluate, credible evidence sufficient to establish a claim upon which an enforcement action can be taken — just like in any other judicial proceeding in any area of law. Novel market structures may require novel analytical models and novel evidence, but that is no different with two-sided markets than with any other complicated issue before a court.
Second, the paragraph’s prescribed response would be, in fact, de facto antitrust liability for any firm competing in a two-sided market — that is, as Khan notes, almost every major tech firm.
A two-sided platform competes with other platforms by facilitating interactions between the two sides of the market. This often requires a careful balancing of the market: in most of these markets, too many or too few participants on one side of the market reduces participation on the other side. So these markets play the role of matchmaker, charging one side of the market a premium in order to cross-subsidize a desirable level of participation on the other. This will be discussed more below, but the takeaway for now is that most of these platforms operate by charging one side of the market (or some participants on one side of the market) an above-cost price in order to charge the other side of the market a below-cost price. A platform’s strategy on either side of the market makes no sense without the other, and it does not adopt practices on one side without carefully calibrating them with the other. If one does not consider both sides of these markets, therefore, the simplistic approach that Khan demands will systematically fail to capture both the intent and the effect of business practices in these markets. More importantly, such an approach could be used to find antitrust violations throughout these industries — no matter the state of competition, market share, or actual consumer effects.
What are two-sided markets?
Khan notes that there is some element of two-sidedness in many (if not most) markets:
Indeed, almost all markets can be understood as having two sides. Firms ranging from airlines to meatpackers could reasonably argue that they meet the definition of “two-sided,” thereby securing less stringent review.
This is true, as far as it goes, as any sale of goods likely involves the selling party acting as some form of intermediary between chains of production and consumption. But such a definition is unworkably broad from the point of view of economic or antitrust analysis. If two-sided markets exist as distinct from traditional markets there must be salient features that define those specialized markets.
Economists have been intensively studying two-sided markets (see, e.g., here, here, and here) for the past two decades (and had recognized many of their basic characteristics even before then). As Khan notes, multi-sided platforms have indeed existed for a long time in the economy. Newspapers, for example, provide a targeted outlet for advertisers and incentives for subscribers to view advertisements; shopping malls aggregate retailers in one physical location to lower search costs for customers, while also increasing the retailers’ sales volume. Relevant here, credit card networks are two-sided platforms, facilitating credit-based transactions between merchants and consumers.
One critical feature of multi-sided platforms is the interdependent demand of platform participants. Thus, these markets require a simultaneous critical mass of users on each side in order to ensure the viability of the platform. For instance, a credit card is unlikely to be attractive to consumers if few merchants accept it; and few merchants will accept a credit card that isn’t used by a sufficiently large group of consumers. To achieve critical mass, a multi-sided platform uses both pricing and design choices, and, without critical mass on all sides, the positive feedback effects that enable the platform’s unique matching abilities might not be achieved.
This highlights the key distinction between traditional markets and multi-sided markets. Most markets have two sides (e.g., buyers and sellers), but that alone doesn’t make them meaningfully multi-sided. In a multi-sided market, a key function of the platform is to facilitate the relationship between the sides of the market in order to create and maintain an efficient relationship between them. The platform isn’t merely a reseller of a manufacturer’s goods, for instance, but is actively encouraging or discouraging participation by users on both sides of the platform in order to maximize the value of the platform itself — not the underlying transaction — for those users. Consumers, for instance, don’t really care how many pairs of jeans a clothier stocks; but a merchant does care how many cardholders an issuer has on its network. This is most often accomplished by using prices charged to each side (in the case of credit cards, so-called interchange fees) to keep each side an appropriate size.
Moreover, the pricing that occurs on a two-sided platform is secondary, to a varying extent, to the pricing of the subject of the transaction. In a two-sided market, the prices charged to either side of the market are an expression of the platform’s ability to control the terms on which the different sides meet to transact; the platform is relatively indifferent to the thing about which the parties are transacting.
The nature of two-sided markets highlights the role of these markets as more like facilitators of transactions and less like traditional retailers of goods (though this distinction is a matter of degree, and different two-sided markets can be more-or-less two-sided). Because the platform uses prices charged to each side of the market in order to optimize overall use of the platform (that is, output or volume of transactions), pricing in these markets operates differently than pricing in traditional markets. In short, the pricing on one side of the platform is often used to subsidize participation on the other side of the market, because the overall value to both sides is increased as a result. Or, conversely, pricing to one side of the market may appear to be higher than the equilibrium level when viewed for that side alone, because this funds a subsidy to increase participation on another side of the market that, in turn, creates valuable network effects for the side of the market facing the higher fees.
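The cross-subsidy logic described above can be made concrete with a deliberately toy numerical sketch. Every demand parameter below is a hypothetical illustration, not an estimate for any real card network; the point is only the structural result that skewing prices toward one side to fund benefits on the other can raise total transaction volume:

```python
# Toy model of a two-sided platform (all parameters are invented for
# illustration). Each side's participation rises with the benefit it
# receives and falls with the price it pays; because demand is
# interdependent, completed transactions require both sides on board.

def participation(benefit, price, sensitivity=1.0):
    """Share of a side that joins, clamped to [0, 1] (illustrative form)."""
    return max(0.0, min(1.0, 0.5 + sensitivity * (benefit - price)))

def transactions(merchant_fee, consumer_fee, rewards_rate=0.5):
    # The platform rebates part of the merchant fee to consumers as rewards.
    rewards = rewards_rate * merchant_fee
    merchants = participation(benefit=0.6, price=merchant_fee)
    consumers = participation(benefit=0.4 + rewards, price=consumer_fee)
    # Interdependent demand: volume is the product of both sides' uptake.
    return merchants * consumers

# Uniform "one-sided" pricing vs. skewed pricing that subsidizes consumers:
uniform = transactions(merchant_fee=0.2, consumer_fee=0.2)
skewed = transactions(merchant_fee=0.3, consumer_fee=0.0)
print(f"uniform pricing volume: {uniform:.3f}")
print(f"skewed pricing volume:  {skewed:.3f}")
```

In this sketch, charging merchants a higher fee while letting cardholders ride free yields more total transactions than splitting the fee evenly, which is why looking at the merchant-side price in isolation misreads the platform's strategy.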
The result of this dynamic is that it is more difficult to assess the price and output effects in multi-sided markets than in traditional markets. One cannot look at just one side of the platform — at the level of output and price charged to consumers of the underlying product, say — but must look at the combined pricing and output of both the underlying transaction as well as the platform’s service itself, across all sides of the platform.
Thus, as David Evans and Richard Schmalensee have observed, traditional antitrust reasoning is made more complicated in the presence of a multi-sided market:
[I]t is not possible to know whether standard economic models, often relied on for antitrust analysis, apply to multi-sided platforms without explicitly considering the existence of multiple customer groups with interdependent demand…. [A] number of results for single-sided firms, which are the focus of much of the applied antitrust economics literature, do not apply directly to multi-sided platforms.
The good news is that antitrust economists have been focusing significant attention on two- and multi-sided markets for a long while. Their work has included attention to modelling the dynamics and effects of competition in these markets, including how to think about traditional antitrust concepts such as market definition, market power and welfare analysis. What has been lacking, however, has been substantial incorporation of this analysis into judicial decisions. Indeed, this is one of the reasons that the Second Circuit’s opinion in this case was so important, and why the Supreme Court’s opinion will be as well: this work has reached the point that courts are recognizing that these markets can and should be analyzed differently than traditional markets.
Getting the two-sided analysis wrong in American Express would harm consumers
Khan describes credit card networks as a “classic case of oligopoly,” and opines that American Express’s contractual anti-steering provision is, “[a]s one might expect, the credit card companies us[ing] their power to block competition.” The initial, inherent tension in this statement should be obvious: the assertion is simultaneously that this is a non-competitive, oligopolistic market and that American Express is using the anti-steering provision to harm its competitors. Indeed, rather than demonstrating a classic case of oligopoly, this demonstrates the competitive purpose that the anti-steering provision serves: facilitating competition between American Express and other card issuers.
The reality of American Express’s anti-steering provision, which prohibits merchants who choose to accept AmEx cards from “steering” their customers to pay for purchases with other cards, is that it is necessary in order for American Express to compete with other card issuers. Just like analysis of multi-sided markets needs to consider all sides of the market, platforms competing in these markets need to compete on all sides of the market.
But the use of complex pricing schemes to determine prices on each side of the market to maintain an appropriate volume of transactions in the overall market creates a unique opportunity for competitors to behave opportunistically. For instance, if one platform charges a high fee to one side of the market in order to subsidize another side of the market (say, by offering generous rewards), this creates an opportunity for a savvy competitor to undermine that balancing by charging the first side of the market a lower fee, thus attracting consumers from its competitor and, perhaps, making its pricing strategy unprofitable. This may appear to be mere price competition. But the effects of price competition on one side of a multi-sided market are more complicated to evaluate than those of traditional price competition.
Generally, price competition has the effect of lowering prices for goods, increasing output, decreasing deadweight losses, and benefiting consumers. But in a multi-sided market, the high prices charged to one side of the market can be used to benefit consumers on the other side of the market; and that consumer benefit can increase output on that side of the market in ways that create benefits for the first side of the market. When a competitor poaches a platform’s business on a single side of a multi-sided market, the effects can be negative for users on every side of that platform’s market.
This is most often seen in cases, like with credit cards, where platforms offer differentiated products. American Express credit cards are qualitatively different than Visa and Mastercard credit cards; they charge more (to both sides of the market) but offer consumers a more expansive rewards program (funded by the higher transaction fees charged to merchants) and offer merchants access to what are often higher-value customers (ensured by the higher fees charged to card holders).
If American Express did not require merchants to abide by its anti-steering rule, it wouldn’t be able to offer this form of differentiated product; it would instead be required to compete solely on price. There are cardholders who prefer higher-status cards with a higher tier of benefits, and there are merchants that prefer to attract a higher-value pool of customers.
But without the anti-steering provisions, the only competition available is on the price to merchants. The anti-steering rule is needed in order to prevent merchants from free-riding on American Express’s investment in attracting a unique group of card holders to its platform. American Express maintains that differentiation from other cards by providing its card holders with unique and valuable benefits — benefits that are subsidized in part by the fees charged to merchants. But merchants that attract customers by advertising that they accept American Express cards but who then steer those customers to other cards erode the basis of American Express’s product differentiation. Because of the interdependence of both sides of the platform, this ends up undermining the value that consumers receive from the platform as American Express ultimately withdraws consumer-side benefits. In the end, the merchants who valued American Express in the first place are made worse off by virtue of being permitted to selectively free-ride on American Express’s network investment.
At this point it is important to note that many merchants continue to accept American Express cards notwithstanding both the cards’ higher merchant fees and these anti-steering provisions. Meanwhile, Visa and Mastercard have much larger market shares, and many merchants do not accept Amex. The fact that merchants who may be irritated by the anti-steering provision continue to accept Amex despite it being more costly, and the fact that they could readily drop Amex and rely on other, larger, and cheaper networks, suggests that American Express creates real value for these merchants. In other words, American Express, in fact, must offer merchants access to a group of consumers who are qualitatively different from those who use Visa or Mastercard cards — and access to this group of consumers must be valuable to those merchants.
An important irony in this case is that those who criticize American Express’s practices, who are arguing these practices limit price competition and that merchants should be able to steer customers to lower-fee cards, generally also argue that modern antitrust law focuses too myopically on prices and fails to account for competition over product quality. But that is precisely what American Express is trying to do: in exchange for a higher price it offers a higher quality card for consumers, and access to a different quality of consumers for merchants.
Anticompetitive conduct here, there, everywhere! Or nowhere.
The good news is that many on the court — and, for that matter, even Ohio’s own attorney — recognize that the effects of the anti-steering rule on the cardholder side of the market need to be considered alongside their effects on merchants:
JUSTICE KENNEDY: Does output include premiums or rewards to customers?
MR. MURPHY: Yeah. Output would include quality considerations as well.
The bad news is that several justices don’t seem to get it. Justice Kagan, for instance, suggested that “the effect of these anti-steering provisions means a market where we will only have high-cost/high-service products.” Justice Kagan’s assertion reveals the hubris of the would-be regulator, bringing to her evaluation of the market a preconception of what that market is supposed to look like. To wit: following her logic, one can say just as much that without the anti-steering provisions we would have a market with only low-cost/low-service products. Without an evaluation of the relative effects — which is more complicated than simple intuition suggests, especially since one can always pay cash — there is no reason to say that either of these would be a better outcome.
The reality, however, is that it is possible for the market to support both high- and low-cost, and high- and low-service products. In fact, this is the market in which we live today. As Justice Gorsuch said, “American Express’s agreements don’t affect MasterCard or Visa’s opportunity to cut their fees … or to advertise that American Express’s are higher. There is room for all kinds of competition here.” Indeed, one doesn’t need to be particularly creative to come up with competitive strategies that other card issuers could adopt, from those that Justice Gorsuch suggests, to strategies where card issuers are, in fact, “forced” to accept higher fees, which they in turn use to attract more card holders to their networks, such as through sign-up bonuses or awards for American Express customers who use non-American Express cards at merchants who accept them.
A standard response to such proposals is “if that idea is so good, why isn’t the market already doing it?” An important part of the answer in this case is that MasterCard and Visa know that American Express relies on the anti-steering provision in order to maintain its product differentiation.
Visa and Mastercard were initially defendants in this case as well, since they used similar rules to differentiate some of their products. It’s telling that these larger market participants settled because, to some extent, harming American Express is worth more to them than their own product differentiation. After all, consumers steered away from American Express will generally use Visa or Mastercard (and their own high-priced cards may be cannibalizing from their own low-priced cards anyway, so reducing their value may not hurt so much). It is therefore a better strategy for them to try to use the courts to undermine that provision so that they don’t actually need to compete with American Express.
Without the anti-steering provision, American Express loses its competitive advantage compared to MasterCard and Visa and would be forced to compete against those much larger platforms on their preferred terms. What’s more, this would give those platforms access to American Express’s vaunted high-value card holders without the need to invest resources in competing for them. In other words, outlawing anti-steering provisions could in fact have both anti-competitive intent and effect.
Of course, card networks aren’t necessarily innocent of anticompetitive conduct, one way or the other. Showing that they are guilty of it, on either side of the anti-steering rule, requires a sufficiently comprehensive analysis of the industry and its participants’ behavior. But liability cannot be simply determined based on behavior on one side of a two-sided market. These companies can certainly commit anticompetitive mischief, and they need to be held accountable when that happens. But this case is not about letting American Express or tech companies off the hook for committing anticompetitive conduct. This case is about how we evaluate such allegations, weigh them against possible beneficial effects, and apply a properly thorough analysis to this particular form of business.
Over the last two decades, scholars have studied the nature of multi-sided platforms, and have made a good deal of progress. We should rely on this learning, and make sure that antitrust analysis is sound, not expedient.
The U.S. Federal Trade Commission’s (FTC) well-recognized expertise in assessing unfair or deceptive acts or practices can play a vital role in policing abusive broadband practices. Unfortunately, however, because Section 5(a)(2) of the FTC Act exempts common carriers from the FTC’s jurisdiction, serious questions have been raised about the FTC’s authority to deal with unfair or deceptive practices in cyberspace that are carried out by common carriers, but involve non-common-carrier activity (in contrast, common carrier services have highly regulated terms and must be made available to all potential customers).
Commendably, the Ninth Circuit held on February 26, in FTC v. AT&T Mobility, that harmful broadband data throttling practices by a common carrier were subject to the FTC’s unfair acts or practices jurisdiction, because the common carrier exception is “activity-based,” and the practices in question did not involve common carrier services. Key excerpts from the summary of the Ninth Circuit’s opinion follow:
The en banc court affirmed the district court’s denial of AT&T Mobility’s motion to dismiss an action brought by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act, alleging that AT&T’s data-throttling plan was unfair and deceptive. AT&T Mobility’s data-throttling is a practice by which the company reduced customers’ broadband data speed without regard to actual network congestion. Section 5 of the FTC Act gives the agency enforcement authority over “unfair or deceptive acts or practices,” but exempts “common carriers subject to the Acts to regulate commerce.” 15 U.S.C. § 45(a)(1), (2). AT&T moved to dismiss the action, arguing that it was exempt from FTC regulation under Section 5. . . .
The en banc court held that the FTC Act’s common carrier exemption was activity-based, and therefore the phrase “common carriers subject to the Acts to regulate commerce” provided immunity from FTC regulation only to the extent that a common carrier was engaging in common carrier services. In reaching this conclusion, the en banc court looked to the FTC Act’s text, the meaning of “common carrier” according to the courts around the time the statute was passed in 1914, decades of judicial interpretation, the expertise of the FTC and Federal Communications Commission (“FCC”), and legislative history.
Addressing the FCC’s order, issued on March 12, 2015, reclassifying mobile data service from a non-common carriage service to a common carriage service, the en banc court held that the prospective reclassification order did not rob the FTC of its jurisdiction or authority over conduct occurring before the order. Accordingly, the en banc court affirmed the district court’s denial of AT&T’s motion to dismiss.
A key introductory paragraph in the Ninth Circuit’s opinion underscores the importance of the court’s holding for sound regulatory policy:
But what can the FTC do about unfair or deceptive practices affecting broadband services, offered by common carriers, subsequent to the FCC’s 2015 reclassification of mobile data service as a common carriage service? The FTC will be able to act, assuming that the Federal Communications Commission’s December 2017 rulemaking, reclassifying mobile broadband Internet access service as not involving a common carrier service, passes legal muster (as it should). In order to avoid any legal uncertainty, however, Congress could take the simple step of eliminating the FTC Act’s common carrier exception – an outdated relic that threatens to generate disparate enforcement outcomes toward the same abusive broadband practice, based merely upon whether the parent company is deemed a “common carrier.”
The cause of basing regulation on evidence-based empirical science (rather than mere negative publicity) – and of preventing regulatory interference with First Amendment commercial speech rights – got a judicial boost on February 26.
Specifically, in National Association of Wheat Growers et al. v. Zeise (Monsanto Case), a California federal district court judge preliminarily enjoined application against Monsanto of a labeling requirement imposed by a California regulatory law, Proposition 65. Proposition 65 mandates that the Governor of California publish a list of chemicals known to the State to cause cancer, and also prohibits any person in the course of doing business from knowingly and intentionally exposing anyone to the listed chemicals without a prior “clear and reasonable” warning. In this case, California sought to make Monsanto place warning labels on its popular Roundup weed killer products, stating that glyphosate, a widely-used herbicide and key Roundup ingredient, was known to cause cancer. Monsanto, joined by various agribusiness entities, sued to enjoin California from taking that action. Judge William Shubb concluded that there was insufficient evidence that the active ingredient in Roundup causes cancer, and that requiring Roundup to publish warning labels would violate Monsanto’s First Amendment rights by compelling it to engage in false and misleading speech. Salient excerpts from Judge Shubb’s opinion are set forth below:
[When, as here, it compels commercial speech, in order to satisfy the First Amendment,] [t]he State has the burden of demonstrating that a disclosure requirement is purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest. . . . The dispute in the present case is over whether the compelled disclosure is of purely factual and uncontroversial information. In this context, “uncontroversial” “refers to the factual accuracy of the compelled disclosure, not to its subjective impact on the audience.” [citation omitted]
On the evidence before the court, the required warning for glyphosate does not appear to be factually accurate and uncontroversial because it conveys the message that glyphosate’s carcinogenicity is an undisputed fact, when almost all other regulators have concluded that there is insufficient evidence that glyphosate causes cancer. . . .
It is inherently misleading for a warning to state that a chemical is known to the state of California to cause cancer based on the finding of one organization [, the International Agency for Research on Cancer] (which as noted above, only found that substance is probably carcinogenic), when apparently all other regulatory and governmental bodies have found the opposite, including the EPA, which is one of the bodies California law expressly relies on in determining whether a chemical causes cancer. . . . [H]ere, given the heavy weight of evidence in the record that glyphosate is not in fact known to cause cancer, the required warning is factually inaccurate and controversial. . . .
The court’s First Amendment inquiry here boils down to what the state of California can compel businesses to say. Whether Proposition 65’s statutory and regulatory scheme is good policy is not at issue. However, where California seeks to compel businesses to provide cancer warnings, the warnings must be factually accurate and not misleading. As applied to glyphosate, the required warnings are false and misleading. . . .
As plaintiffs have shown that they are likely to succeed on the merits of their First Amendment claim, are likely to suffer irreparable harm absent an injunction, and that the balance of equities and public interest favor an injunction, the court will grant plaintiffs’ request to enjoin Proposition 65’s warning requirement for glyphosate.
The Monsanto Case commendably highlights a little-appreciated threat of government overregulatory zeal. Not only may excessive regulation fail a cost-benefit test, and undermine private property rights, it may violate the First Amendment speech rights of private actors when it compels inaccurate speech. The negative economic consequences may be substantial when the government-mandated speech involves a claim about a technical topic that not only lacks empirical support (and thus may be characterized as “junk science”), but is deceptive and misleading (if not demonstrably false). Deceptive and misleading speech in the commercial marketplace reduces marketplace efficiency and reduces social welfare (both consumer surplus and producer surplus). In particular, it does this by deterring mutually beneficial transactions (for example, purchases of Roundup that would occur absent misleading labeling about cancer risks), generating suboptimal transactions (for example, purchases of inferior substitutes to Roundup due to misleading Roundup labeling), and distorting competition within the marketplace (the reallocation of market shares among Roundup and substitutes not subject to labeling). The short-term static effects of such market distortions may be dwarfed by the dynamic effects, such as firms’ disincentives to invest in innovation (or even participate) in markets subject to inaccurate information concerning the firms’ products or services.
In short, the Monsanto Case highlights the fact that government regulation not only imposes an implicit tax on business – it affirmatively distorts the workings of individual markets if it causes the introduction of misleading or deceptive information that is material to marketplace decision-making. The threat of such distortive regulation may be substantial, especially in areas where regulators interact with “public interest clients” that have an incentive to demonize disfavored activities by private commercial actors – one example being the health and safety regulation of agricultural chemicals. In those areas, there may be a case for federal preemption of state regulation, and for particularly close supervision of federal agencies to avoid economically inappropriate commercial speech mandates. Stay tuned for future discussion of such potential legal reforms.
Over the last two decades, the United States government has taken the lead in convincing jurisdictions around the world to outlaw “hard core” cartel conduct. Such cartel activity reduces economic welfare by artificially fixing prices and reducing the output of affected goods and services. At the same time, the United States has acted to promote international cooperation among government antitrust enforcers to detect, investigate, and punish cartels.
In 2017, however, the U.S. Court of Appeals for the Second Circuit (citing concerns of “international comity”) held that a Chinese export cartel that artificially raised the price of vitamin imports into the United States should be shielded from U.S. antitrust penalties—based merely on one brief from a Chinese government agency that said it approved of the conduct. The U.S. Supreme Court is set to review that decision later this year, in a case styled Animal Science Products, Inc., v. Hebei Welcome Pharmaceutical Co. Ltd. By overturning the Second Circuit’s ruling (and disavowing the overly broad “comity doctrine” cited by that court), the Supreme Court would reaffirm the general duty of federal courts to apply federal law as written, consistent with the constitutional separation of powers. It would also reaffirm the importance of the global fight against cartels, which has reflected consistent U.S. executive branch policy for decades (and has enjoyed strong support from the International Competition Network, the OECD, and the World Bank).
Finally, as a matter of economic policy, the Animal Science Products case highlights the very real harm that occurs when national governments tolerate export cartels that reduce economic welfare outside their jurisdictions, merely because domestic economic interests are not directly affected. In order to address this problem, the U.S. government should negotiate agreements with other nations under which the signatory states would agree: (1) not to legally defend domestic exporting entities that impose cartel harm in other jurisdictions; and (2) to cooperate more fully in rooting out harmful export-cartel activity, wherever it is found.
For a fuller discussion of the separation of powers, international relations, and economic policy issues raised by the Animal Science Products case, see my recent Heritage Foundation Legal Memorandum entitled The Supreme Court and Animal Science Products: Sovereignty and Export Cartels.
The Internet is a modern miracle: from providing all varieties of entertainment, to facilitating life-saving technologies, to keeping us connected with distant loved ones, the scope of the Internet’s contribution to our daily lives is hard to overstate. Moving forward there is undoubtedly much more that we can and will do with the Internet, and part of that innovation will, naturally, require a reconsideration of existing laws and how new Internet-enabled modalities fit into them.
But when undertaking such a reconsideration, the goal should not be simply to promote Internet-enabled goods above all else; rather, it should be to examine the law’s effect on the promotion of new technology within the context of other, competing social goods. In short, there are always trade-offs entailed in changing the legal order. As such, efforts to reform, clarify, or otherwise change the law that affects Internet platforms must be balanced against other desirable social goods, not automatically prioritized above them.
Unfortunately — and frequently with the best of intentions — efforts to promote one good thing (for instance, more online services) inadequately take account of the balance of the larger legal realities at stake. And one of the most important legal principles too often thrown aside in the rush to protect the Internet is that policy should be established through public, (relatively) democratically accountable channels.
Trade deals and domestic policy
Recently a letter was sent by a coalition of civil society groups and law professors asking the NAFTA delegation to incorporate U.S.-style intermediary liability immunity into the trade deal. Such a request is notable for its timing in light of the ongoing policy struggles over SESTA — a bill currently working its way through Congress that seeks to curb human trafficking through online platforms — and the risk that domestic platform companies face of losing (at least in part) the immunity provided by Section 230 of the Communications Decency Act. But this NAFTA push is not merely about a tradeoff between less trafficking and more online services, but between promoting policies in a way that protects the rule of law and doing so in a way that undermines the rule of law.
Indeed, the NAFTA effort appears to be aimed at least as much at sidestepping the ongoing congressional fight over platform regulation as it is aimed at exporting U.S. law to our trading partners. Thus, according to EFF, for example, “[NAFTA renegotiation] comes at a time when Section 230 stands under threat in the United States, currently from the SESTA and FOSTA proposals… baking Section 230 into NAFTA may be the best opportunity we have to protect it domestically.”
It may well be that incorporating Section 230 into NAFTA is the “best opportunity” to protect the law as it currently stands from efforts to reform it to address conflicting priorities. But that doesn’t mean it’s a good idea. In fact, whatever one thinks of the merits of SESTA, it is not obviously a good idea to use a trade agreement as a vehicle to override domestic reforms to Section 230 that Congress might implement. Trade agreements can override domestic law, but that is not the reason we engage in trade negotiations.
In fact, other parts of NAFTA remain controversial precisely for their ability to undermine domestic legal norms, in this case in favor of guaranteeing the expectations of foreign investors. EFF itself is deeply skeptical of this “investor-state” dispute process (“ISDS”), noting that “[t]he latest provisions would enable multinational corporations to undermine public interest rules.” The irony here is that ISDS provides a mechanism for overriding domestic policy that is a close analogy for what EFF advocates for in the Section 230/SESTA context.
ISDS allows foreign investors to sue NAFTA signatories in a tribunal when domestic laws of that signatory have harmed investment expectations. The end result is that the signatory could be responsible for paying large sums to litigants, which in turn would serve as a deterrent for the signatory to continue to administer its laws in a similar fashion.
Stated differently, NAFTA currently contains a mechanism that favors one party (foreign investors) in a way that prevents signatory nations from enacting and enforcing laws approved of by democratically elected representatives. EFF and others disapprove of this.
Yet, at the same time, EFF also promotes the idea that NAFTA should contain a provision that favors one party (Internet platforms) in a way that would prevent signatory nations from enacting and enforcing laws like SESTA that (might be) approved of by democratically elected representatives.
A more principled stance would be skeptical of the domestic law override in both contexts.
Restating Copyright or Creating Copyright Policy?
Take another example: Some have suggested that the American Law Institute (“ALI”) is being used to subvert congressional will. Since 2013, ALI has taken it upon itself to “restate” the law of copyright. ALI is well known and respected for its common law restatements, but it may be that something more than mere restatement is going on here. As the NY Bar Association recently observed:
The Restatement as currently drafted appears inconsistent with the ALI’s long-standing goal of promoting clarity in the law: indeed, rather than simply clarifying or restating that law, the draft offers commentary and interpretations beyond the current state of the law that appear intended to shape current and future copyright policy.
It is certainly odd that ALI (or any other group) would seek to restate a body of law that is already stated in the form of an overarching federal statute. The point of a restatement is to gather together the decisions of disparate common law courts interpreting different laws and precedent in order to synthesize a single, coherent framework approximating an overall consensus. If done correctly, a restatement of a federal statute would, theoretically, end up with the exact statute itself along with some commentary about how judicial decisions have filled in the blanks differently — a state of affairs that already exists with the copious academic literature commenting on federal copyright law.
But it seems that merely restating judicial interpretations was not the only objective behind the copyright restatement effort. In a letter to ALI, one of the scholars responsible for the restatement project noted that:
While congressional efforts to improve the Copyright Act… may be a welcome and beneficial development, it will almost certainly be a long and contentious process… Register Pallante… [has] not[ed] generally that “Congress has moved slowly in the copyright space.”
Reform of copyright law, in other words, and not merely restatement of it, was an important impetus for the project. As an attorney for the Copyright Office observed, “[a]lthough presented as a ‘Restatement’ of copyright law, the project would appear to be more accurately characterized as a rewriting of the law.” But “rewriting” is a job for the legislature. And even if Congress moves slowly, or the process is frustrating, the democratic processes that produce the law should still be respected.
Pyrrhic Policy Victories
Attempts to change copyright law or entrench liability immunity through any means possible are rational at the individual level, but writ large they may undermine the legal fabric of our system and should be resisted.
It’s no surprise that some may be frustrated and concerned about intermediary liability and copyright issues: On the margin, it’s definitely harder to operate an Internet platform if it faces sweeping liability for the actions of third parties (whether for human trafficking or for infringing copyrights). Maybe copyright law needs to be reformed, and perhaps intermediary liability must be maintained exactly as it is (or expanded). But the right way to arrive at these policy outcomes is not through backdoors — and it is not to begin with the assertion that such outcomes are required.
Congress and the courts can be frustrating vehicles through which to enact public policy, but they have the virtue of being relatively open to public deliberation, and of having procedural constraints that can circumscribe excesses and idiosyncratic follies. We might get bad policy from Congress. We might get bad cases from the courts. But the theory of our system is that, on net, having a frustratingly long, circumscribed, and public process will tend to weed out most of the bad ideas and impulses that would otherwise result from unconstrained decision making, even if well-intentioned.
We should meet efforts like these to end-run Congress and the courts with significant skepticism. Short term policy “victories” are likely not worth the long-run consequences. These are important, complicated issues. If we surreptitiously adopt idiosyncratic solutions to them, we risk undermining the rule of law itself.
The two-year budget plan passed last week makes important changes to payment obligations in the Medicare Part D coverage gap, also known as the donut hole. The new plan gives seniors a one-year benefit by reducing what they pay in the gap a year earlier than was already mandated, but it also permanently shifts much of the drug costs that insurance companies were paying onto drug makers. It’s far from clear whether this windfall for insurers will result in lower drug costs for Medicare beneficiaries.
Medicare Part D is voluntary prescription drug insurance for seniors and the permanently disabled provided by private insurance plans that are approved by the Medicare program. Last year, more than 42 million people enrolled in Medicare Part D plans. Payment for prescription drugs under Medicare Part D depends on how much enrollees spend on drugs. In 2018, after hitting a deductible that varies by plan, enrollees pay 25% of their drug costs while the Part D plans pay 75%. However, once the individual and the plan have spent a total of $3,750, enrollees hit the coverage gap that lasts until $8,418 has been spent. In the coverage gap, enrollees pay 35% of brand drug costs, the Part D plans pay 15%, and drug makers are required to offer 50% discounts on brand drugs to cover the rest. Once total spending reaches $8,418, enrollees enter catastrophic coverage in which they pay only 5% of drug costs, the Part D plans pay 15%, and the Medicare program pays the other 80%.
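The phase structure above is, in essence, a piecewise split of each dollar of annual brand-drug spending among four payers. A minimal Python sketch of that arithmetic (purely illustrative; it ignores the plan-specific deductible, and the real catastrophic threshold is based on out-of-pocket rather than total spending):

```python
# Simplified, illustrative sketch of 2018 Part D cost-sharing for brand drugs,
# using the thresholds and percentages described above. It ignores the
# plan-specific deductible.

GAP_START = 3_750            # total drug spending at which the coverage gap begins
CATASTROPHIC_START = 8_418   # total spending at which catastrophic coverage begins

# (enrollee, plan, drug maker, Medicare) shares within each phase
PHASES = [
    (GAP_START,          (0.25, 0.75, 0.00, 0.00)),  # initial coverage
    (CATASTROPHIC_START, (0.35, 0.15, 0.50, 0.00)),  # coverage gap (brand drugs)
    (float("inf"),       (0.05, 0.15, 0.00, 0.80)),  # catastrophic coverage
]

def split_costs(total_spending):
    """Allocate a year's brand-drug spending among the four payers."""
    paid = [0.0, 0.0, 0.0, 0.0]
    prev_cap = 0.0
    for cap, shares in PHASES:
        in_phase = max(0.0, min(total_spending, cap) - prev_cap)
        paid = [p + in_phase * s for p, s in zip(paid, shares)]
        prev_cap = cap
    return dict(zip(["enrollee", "plan", "drug_maker", "medicare"], paid))

# Example: $10,000 of brand-drug spending in 2018
print(split_costs(10_000))
```

Running the example shows, among other things, why the discount mandate matters to drug makers: they pay nothing outside the gap, but half of everything inside it.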
The Affordable Care Act (ACA) included provisions to phase out the coverage gap by 2020, so that enrollees will pay only 25% of drug costs from the time they meet the deductible until they hit the catastrophic coverage level. The budget plan passed last week speeds up this phase-out by one year, so enrollees will start paying only 25% in 2019 instead of 2020. The ACA anticipated that, with enrollees paying 25% of drug costs and drug makers offering 50% discounts, the Part D plans would pay the remaining 25%. However, last week’s budget plan drastically redistributed the payment responsibilities from the Part D insurance plans to drug makers. Under the new plan, drug makers are required to offer 70% discounts, so the plans have to pay only 5% of total drug costs. That is, the new plan shifts 20% of total drug costs in the coverage gap from insurers to drug makers.
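The redistribution is straightforward arithmetic. For a hypothetical $1,000 of brand-drug spending in the coverage gap (a figure chosen purely for illustration; the enrollee pays 25% under both versions, so only the plan/drug-maker split changes):

```python
# Compare the ACA's planned 2020 coverage-gap split (plan 25%, drug maker 50%)
# with the new budget plan's split (plan 5%, drug maker 70%) for a
# hypothetical $1,000 of brand-drug gap spending. Integer math, since the
# shares are whole percentages.
gap_spend = 1_000
aca_plan_pays  = gap_spend * 25 // 100   # ACA: plan pays 25%
aca_maker_pays = gap_spend * 50 // 100   # ACA: drug maker discounts 50%
new_plan_pays  = gap_spend * 5 // 100    # budget plan: plan pays 5%
new_maker_pays = gap_spend * 70 // 100   # budget plan: drug maker discounts 70%

shifted = new_maker_pays - aca_maker_pays
print(shifted)  # 200: 20% of gap costs move from insurers to drug makers
```

The $200 the drug maker picks up is exactly the $200 the insurer sheds, which is why the provision reads as a transfer between the two industries rather than a saving for enrollees.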
Although drug spending in each individual’s coverage gap is less than $5,000, with over 42 million people covered, the total spending (and the 20% of spending shifted from insurers to drug makers) is significant. CMS has estimated that when drug makers’ discounts covered only 50% of drug spending in the gap, the annual total discounts amounted to over $5.6 billion. Requiring drug makers to cover another 20% of drug spending will add several billion dollars more to this total.
A government intervention that forces suppliers to cover 70% of the spending in a market is a surprising move for Republicans—supposed advocates of free markets. Moreover, although reducing prescription drug costs has become a national priority, it’s unclear whether shifting costs from insurers to drug makers will benefit individuals at all. Theoretically, as the individual Part D plans pay less of their enrollees’ drug costs, they should pass on the savings to enrollees in the form of lower premiums. However, several studies suggest that enrollees may not experience a net decrease in drug spending. The Centers for Medicare and Medicaid Services (CMS) has determined that under Medicare Part D, drug makers increase list prices to offset other concessions and to more quickly move enrollees out of the coverage gap where drug makers are required to offer price discounts. Higher list prices mean that enrollees’ total out-of-pocket drug spending increases; even a 5% cost-sharing obligation in the catastrophic coverage for a high-priced drug can be a significant expense. Higher list prices that push enrollees out of the coverage gap also shift more costs onto the Medicare program that pays 80% of drug costs in the catastrophic coverage phase.
A better, more direct way to reduce Medicare Part D enrollees’ out-of-pocket drug spending is to require point-of-sale rebates. Currently, drug makers offer rebates to Part D plans in order to improve their access to the millions of individuals covered by the plans. However, the rebates, which total over $16 billion annually, are paid after the point-of-sale, and evidence shows that only a portion of these rebates get passed through to beneficiaries in the form of reduced insurance premiums. Moreover, a reduction in premiums does little to benefit those enrolled individuals who have the highest aggregate out-of-pocket spending on drugs. (As an aside, in contrast to the typical insurance subsidization of high-cost enrollees by low-cost enrollees, high-spending enrollees under Medicare Part D generate greater rebates for their plans, but then the rebates are spread across all enrollees in the form of lower premiums).
Drug maker rebates will more directly benefit Medicare Part D enrollees if rebates are passed through at the point-of-sale to reduce drug copays. Point-of-sale rebates would ensure that enrollees see immediate savings as they meet their cost-sharing obligations. Moreover, the enrollees with the highest aggregate out-of-pocket spending would be the ones to realize the greatest savings. CMS has recently solicited comments on a plan to require some portion of drug makers’ rebates to be applied at the point of sale, and the President’s budget plan released yesterday proposes point-of-sale rebates to lower Medicare Part D enrollees’ out-of-pocket spending. Ultimately, targeting rebates to consumers at the point-of-sale will more effectively lower drug spending than reducing insurance plans’ payment obligations in hopes that they pass on the savings to enrollees.