Archives For Credit Cards

The U.S. economy survived the COVID-19 pandemic and associated government-imposed business shutdowns with a variety of innovations that facilitated online shopping, contactless payments, and reduced use and handling of cash, a known vector of disease transmission.

While many of these innovations were new, they would have been impossible but for their reliance on an established and ubiquitous technological infrastructure: the global credit- and debit-card payments system. Not only did consumers prefer to use plastic instead of cash, but the number of merchants going completely “cashless” also quadrupled in the first two months of the pandemic alone. From food delivery to online shopping, many small businesses were able to survive largely because of payment cards.

But there are costs to maintaining the global payment-card network that processes billions of transactions daily, and those costs are higher for online payments, which present elevated fraud and security risks. As a result, while the boom in online shopping over the past year kept many retailers and service providers afloat, it hasn’t prevented them from grousing about their increased card-processing costs.

So it is that retailers are now lobbying Washington to impose new regulations on payment-card markets designed to force down the fees they pay for accepting debit and credit cards. These fees, known as interchange fees, are charged on each transaction by the banks that issue debit cards, and they are part of a complex process that connects banks, card networks, merchants, and consumers.

Fig. 1: A basic illustration of the 3- and 4-party payment-processing networks that underlie the use of credit cards.

The Durbin amendment—a provision of 2010’s Dodd–Frank Wall Street Reform and Consumer Protection Act named for its primary sponsor, Senate Majority Whip Richard Durbin (D-Ill.), and implemented by the Federal Reserve’s Regulation II—placed price controls on interchange fees for debit cards issued by larger banks and credit unions (those with more than $10 billion in assets). It also required all debit-card issuers to offer multiple networks for “routing” and processing card transactions. Merchants now want to expand these routing provisions to credit cards, as well. The consequences for consumers, especially low-income consumers, would be disastrous.

The price controls imposed by the Durbin amendment have led to a 52% decrease in the average per-transaction interchange fee, resulting in billions of dollars in revenue losses for covered depositories. But banks and credit unions have passed on these losses to consumers in the form of fewer free checking accounts, higher fees, and higher monthly minimums required to avoid those fees.

One empirical study found that the share of covered banks offering free checking accounts fell from 60% to 20%, that average monthly checking-account fees rose from $4.34 to $7.44, and that the minimum account balance required to avoid those fees increased by roughly 25%. Another study found that fees charged by covered institutions were 15% higher than they would have been absent the price regulation; those increases offset about 90% of the depositories’ lost revenue. Banks and credit unions also largely eliminated cash-back and other rewards on debit cards.

In fact, those most harmed by the Durbin amendment’s consequences have been low-income consumers. Middle-class families hardly noticed the higher minimum-balance requirements, or used their credit cards more often to offset the disappearance of debit-card rewards. Those with the smallest checking-account balances, however, suffered the most from the reduced availability of free banking and from higher monthly maintenance and other fees. Priced out of the banking system, as many as 1 million people may have lost bank accounts in the wake of the Durbin amendment, forcing them to turn to alternatives such as prepaid cards, payday lenders, and pawn shops to make ends meet. Lacking bank accounts, these needy families could not even easily access their much-needed government stimulus funds at the onset of the pandemic without paying fees to alternative financial-services providers.

In exchange for higher bank fees and reduced benefits, merchants promised lower prices at the pump and register. Those lower prices have not materialized. Scholarship since implementation of the Federal Reserve’s rule shows that whatever benefits have been gained have gone to merchants, with little pass-through to consumers. For instance, one study found that covered banks saw their interchange revenue drop by 25%, but found little evidence of a corresponding drop in merchants’ prices.

Another study found that the benefits and costs to merchants have been unevenly distributed, with retailers who sell large-ticket items receiving a windfall, while those specializing in small-ticket items have often faced higher effective rates. Discounts previously offered to smaller merchants have been eliminated to offset reduced revenues from big-box stores. According to a 2014 Federal Reserve study, when acceptance fees increased, merchants hiked retail prices; but when fees were reduced, merchants pocketed the windfall.

Moreover, while the Durbin amendment’s proponents claimed it would only apply to big banks, the provisions that determine how transactions are routed on the payment networks apply to cards issued by credit unions and community banks, as well. As a result, smaller players have also seen average interchange fees beaten down, reducing this revenue stream even as they have been forced to cope with higher regulatory costs imposed by Dodd-Frank. Extending the Durbin amendment’s routing provisions to credit cards would further drive down interchange-fee revenue, creating the same negative spiral of higher consumer fees and reduced benefits that the original Durbin amendment spawned for debit cards.

More fundamentally, merchants believe it is their decision—not yours—as to which network will route your transaction. You may prefer Visa or Mastercard because of your confidence in their investments in security and anti-fraud detection, but later discover that the merchant has routed your transaction through a processor you’ve never heard of, simply because that network is cheaper for the merchant.

The resilience of the U.S. economy during this horrible viral contagion is due, in part, to the ubiquitous access of American families to credit and debit cards. That system has proved its mettle this past year, seamlessly adapting to the sudden shift to electronic payments. Yet, in the wake of this American success story, politicians and regulators, egged on by powerful special interests, want to meddle with this system just so big-box retailers can shift their costs onto American families and small banks. As the economy and public health recover, Congress and regulators should resist the impulse to impose new financial harm on working-class families.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has anchored nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that more than 80 companies would be covered, but the number is likely to be far higher. The Klobuchar bill does not explicitly outlaw such mergers; rather, under certain circumstances, it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting applies if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. At least 120 U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
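As a rough sketch, the trigger described above can be expressed as two conditions that must both hold. This is only an illustration of the logic as summarized here, not the bill’s statutory text; the function name and the example figures are hypothetical.

```python
# Thresholds as described in the post; figures in dollars.
THRESHOLD_SIZE = 100e9  # market cap, assets, OR annual net revenue
THRESHOLD_DEAL = 50e6   # value of the proposed merger or acquisition

def burden_shifts(market_cap: float, assets: float,
                  net_revenue: float, deal_value: float) -> bool:
    """Return True if the burden of proof would shift to the merging parties."""
    # The size test is satisfied if ANY one of the three measures exceeds $100B.
    big_enough = max(market_cap, assets, net_revenue) > THRESHOLD_SIZE
    # Both the size test and the deal-value test must be met.
    return big_enough and deal_value >= THRESHOLD_DEAL

# A $120B-market-cap acquirer making a $60M acquisition would be covered;
# the same deal by a $90B firm (below every size threshold) would not be.
print(burden_shifts(120e9, 30e9, 10e9, 60e6))  # True
print(burden_shifts(90e9, 30e9, 10e9, 60e6))   # False
```

Note that the size conditions are disjunctive: a company like Ford, far below $100 billion in market cap but above it in assets, would still trip the trigger.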

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately held Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart’s, Costco’s, Kroger’s, or Nike’s market shares are, or even what comprises “the” market in which these companies compete. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at crafting market definitions so narrowly that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon face a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M | Danaher Corp. | PepsiCo
Abbott Laboratories | Deere & Co. | Pfizer
AbbVie | Eli Lilly and Co. | Philip Morris International
Adobe Inc. | ExxonMobil | Procter & Gamble
Advanced Micro Devices | Facebook Inc. | Qualcomm
Alphabet Inc. | General Electric Co. | Raytheon Technologies
Amazon | Goldman Sachs | Salesforce
American Express | Honeywell | ServiceNow
American Tower | IBM | Square Inc.
Apple Inc. | Intuit | Target Corp.
Applied Materials | Intuitive Surgical | Tesla Inc.
AT&T | Johnson & Johnson | Texas Instruments
Bank of America | JPMorgan Chase | The Coca-Cola Co.
Berkshire Hathaway | Lockheed Martin | The Estée Lauder Cos.
BlackRock | Lowe’s | The Home Depot
Boeing | Mastercard | The Walt Disney Co.
Bristol Myers Squibb | McDonald’s | Thermo Fisher Scientific
Broadcom Inc. | Medtronic | T-Mobile US
Caterpillar Inc. | Merck & Co. | Union Pacific Corp.
Charles Schwab Corp. | Microsoft | United Parcel Service
Charter Communications | Morgan Stanley | UnitedHealth Group
Chevron Corp. | Netflix | Verizon Communications
Cisco Systems | NextEra Energy | Visa Inc.
Citigroup | Nike Inc. | Walmart
Comcast | Nvidia | Wells Fargo
Costco | Oracle Corp. | Zoom Video Communications
CVS Health | PayPal

Publicly traded companies with more than $100 billion in current assets

Ally Financial | Freddie Mac
American International Group | KeyBank
BNY Mellon | M&T Bank
Capital One | Northern Trust
Citizens Financial Group | PNC Financial Services
Fannie Mae | Regions Financial Corp.
Fifth Third Bank | State Street Corp.
First Republic Bank | Truist Financial
Ford Motor Co. | U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen | Dell Technologies
Anthem | General Motors
Cardinal Health | Kroger
Centene Corp. | McKesson Corp.
Cigna | Walgreens Boots Alliance

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Thomas B. Nachbar is a professor of law at the University of Virginia School of Law and a senior fellow at the Center for National Security Law.]

It would be impossible to describe Ajit Pai’s tenure as chair of the Federal Communications Commission as ordinary. Whether or not you thought his regulatory style or his policies were innovative, his relationship with the public has been singular for an FCC chair. His Reese’s mug alone has occupied more space in the American media landscape than practically any past FCC chair. From his first day, he attracted consistent, highly visible criticism from a variety of media outlets, although at least John Oliver didn’t describe him as a dingo. Just today, I read that Ajit Pai single-handedly ruined the internet, which, when I got up this morning, seemed to be working pretty much the same way it was four years ago.

I might be biased in my view of Ajit. I’ve known him since we were law school classmates, when he displayed the same zeal and good-humored delight in confronting hard problems that I’ve seen in him at the commission. So I offer my comments not as an academic and student of FCC regulation, but rather as an observer of the communications regulatory ecosystem that Ajit has dominated since his appointment. And while I do not agree with everything he’s done at the commission, I have admired his single-minded determination to pursue policies that he believes will expand access to advanced telecommunications services. One can disagree with how he’s pursued that goal—and many have—but characterizing his time as chair in any other way simply misses the point. Ajit has kept his eye on expanding access, and he has been unwavering in pursuit of that objective, even when doing so has opened him to criticism, which is the definition of taking political risk.

Thus, while I don’t think it’s going to be the most notable policy he’s participated in at the commission, I would like to look at Ajit’s tenure through the lens of one small part of one fairly specific proceeding: the commission’s decision to include SpaceX as a low-latency provider in the Rural Digital Opportunity Fund (RDOF) Auction.

The decision to include SpaceX is at one level unremarkable. SpaceX proposes to offer broadband internet access through low-Earth-orbit satellites, which is the kind of thing that is completely amazing but is becoming increasingly un-amazing as communications technology advances. SpaceX’s decision to use satellites is particularly valuable for initiatives like the RDOF, which specifically seek to provide services where previous (largely terrestrial) services have not. That is, in fact, the whole point of the RDOF, a point that sparked fiery debate over the FCC’s decision to focus the first phase of the RDOF on areas with no service rather than areas with some service. Indeed, if anything typifies the current tenor of the debate (at the center of which Ajit Pai has resided since his confirmation as chair), it is that a policy decision over which kind of under-served areas should receive more than $16 billion in federal funding should spark such strongly held views. In the end, SpaceX was awarded $885.5 million to participate in the RDOF, almost 10% of the first-round funds awarded.

But on a different level, the decision to include SpaceX is extremely remarkable. Elon Musk, SpaceX’s pot-smoking CEO, does not exactly fit regulatory stereotypes. (Disclaimer: I personally trust Elon Musk enough to drive my children around in one of his cars.) Even more significantly, SpaceX’s Starlink broadband service doesn’t actually exist as a commercial product. If you go to Starlink’s website, you won’t find a set of splashy webpages featuring products, services, testimonials, and a variety of service plans eager for a monthly assignation with your credit card or bank account. You will be greeted with a page asking for your email and service address in case you’d like to participate in Starlink’s beta program. In the case of my address, which is approximately 100 miles from the building where the FCC awarded SpaceX over $885 million to participate in the RDOF, Starlink is not yet available. I will, however, “be notified via email when service becomes available in your area,” which is reassuring but doesn’t get me any closer to watching cat videos.

That is perhaps why Chairman Pai was initially opposed to including SpaceX in the low-latency portion of the RDOF. SpaceX was offering unproven technology and previous satellite offerings had been high-latency, which is good for some uses but not others.

But then, an even more remarkable thing happened, at least in Washington: a regulator at the center of a controversial issue changed his mind and—even more remarkably—admitted his decision might not work out. When the final order was released, SpaceX was allowed to bid for low-latency RDOF funds even though the commission was “skeptical” of SpaceX’s ability to deliver on its low-latency promise. Many doubted that SpaceX would be able to effectively compete for funds, but as we now know, that decision led to SpaceX receiving a large share of the Phase I funds. Of course, that means that if SpaceX doesn’t deliver on its latency promises, a substantial part of the RDOF Phase I funds will fail to achieve their purpose, and the FCC will have backed the wrong horse.

I think we are unlikely to see such regulatory risk-taking, both technically and politically, in what will almost certainly be a more politically attuned commission in the coming years. Even less likely will be acknowledgments of uncertainty in the commission’s policies. Given the political climate and the popular attention policies like network neutrality have attracted, I would expect the next chair’s views about topics like network neutrality to exhibit more unwavering certainty than curiosity and more resolve than risk-taking. The most defining characteristic of modern communications technology and markets is change. We are all better off with a commission in which the other things that can change are minds.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Doug Melamed (Professor of the Practice of Law, Stanford Law School).]

The big digital platforms make people uneasy.  Part of the unease is no doubt attributable to widespread populist concerns about large and powerful business entities.  Platforms like Facebook and Google in particular cause unease because they affect sensitive issues of communications, community, and politics.  But the platforms also make people uneasy because they seem boundless – enduring monopolies protected by ever-increasing scale and network economies, and growing monopolies aided by scope economies that enable them to conquer complementary markets.  They provoke a discussion about whether antitrust law is sufficient for the challenge.

Nicolas Petit’s Big Tech and the Digital Economy: The Moligopoly Scenario provides an insightful and valuable antidote to this unease.  While neither Panglossian nor comprehensive, Petit’s analysis persuasively argues that some of the concerns about the platforms are misguided or at least overstated.  As Petit sees it, the platforms are not so much monopolies in discrete markets – search, social networking, online commerce, and so on – as “multibusiness firms with business units in partly overlapping markets” that are engaged in a “dynamic oligopoly game” that might be “the socially optimal industry structure.”  Petit suggests that we should “abandon or at least radically alter traditional antitrust principles,” which are aimed at preserving “rivalry,” and “adapt to the specific non-rival economics of digital markets.”  In other words, the law should not try to diminish the platforms’ unique dominance in their individual sectors, which have already tipped to a winner-take-all (or most) state and in which protecting rivalry is not “socially beneficial.”  Instead, the law should encourage reductions of output in tipped markets in which the dominant firm “extracts a monopoly rent” in order to encourage rivalry in untipped markets. 

Petit’s analysis rests on the distinction between “tipped markets,” in which “tech firms with observed monopoly positions can take full advantage of their market power,” and “untipped markets,” which are “characterized by entry, instability and uncertainty.”  Notably, however, he does not expect “dispositive findings” as to whether a market is tipped or untipped.  The idea is to define markets, not just by “structural” factors like rival goods and services, market shares and entry barriers, but also by considering “uncertainty” and “pressure for change.”

Not surprisingly, given Petit’s training and work as a European scholar, his discussion of “antitrust in moligopoly markets” includes prescriptions that seem to one schooled in U.S. antitrust law to be a form of regulation that goes beyond proscribing unlawful conduct.  Petit’s principal concern is with reducing monopoly rents available to digital platforms.  He rejects direct reduction of rents by price regulation as antithetical to antitrust’s DNA and proposes instead indirect reduction of rents by permitting users on the inelastic side of a platform (the side from which the platform gains most of its revenues) to collaborate in order to gain countervailing market power and by restricting the platforms’ use of vertical restraints to limit user bypass. 

He would create a presumption against all horizontal mergers by dominant platforms in order to “prevent marginal increases of the output share on which the firms take a monopoly rent” and would avoid the risk of defining markets narrowly and thus failing to recognize that platforms are conglomerates that provide actual or potential competition in multiple partially overlapping commercial segments. By contrast, Petit would restrict the platforms’ entry into untipped markets only in “exceptional circumstances.”  For this, Petit suggests four inquiries: whether leveraging of network effects is involved; whether platform entry deters or forecloses entry by others; whether entry by others pressures the monopoly rents; and whether entry into the untipped market is intended to deter entry by others or is a long-term commitment.

One might question the proposition, which is central to much of Petit’s argument, that reducing monopoly rents in tipped markets will increase the platforms’ incentives to enter untipped markets.  Entry into untipped markets is likely to depend more on expected returns in the untipped market, the cost of capital, and constraints on managerial bandwidth than on expected returns in the tipped market.  But the more important issue, at least from the perspective of competition law, is whether – even assuming the correctness of all aspects of Petit’s economic analysis — the kind of categorical regulatory intervention proposed by Petit is superior to a law enforcement regime that proscribes only anticompetitive conduct that increases or threatens to increase market power.  Under U.S. law, anticompetitive conduct is conduct that tends to diminish the competitive efficacy of rivals and does not sufficiently enhance economic welfare by reducing costs, increasing product quality, or reducing above-cost prices.

If there were no concerns about the ability of legal institutions to know and understand the facts, a law enforcement regime would seem clearly superior.  Consider, for example, Petit’s recommendation that entry by a platform monopoly into untipped markets should be restricted only when network effects are involved and after taking into account whether the entry tends to protect the tipped market monopoly and whether it reflects a long-term commitment.  Petit’s proposed inquiries might make good sense as a way of understanding as a general matter whether market extension by a dominant platform is likely to be problematic.  But it is hard to see how economic welfare is promoted by permitting a platform to enter an adjacent market (e.g., Amazon entering a complementary product market) by predatory pricing or by otherwise unprofitable self-preferencing, even if the entry is intended to be permanent and does not protect the platform monopoly. 

Similarly, consider the proposed presumption against horizontal mergers.  That might not be a good idea if there is a small (10%) chance that the acquired firm would otherwise endure and modestly reduce the platform’s monopoly rents and an equal or even smaller chance that the acquisition will enable the platform, by taking advantage of economies of scope and asset complementarities, to build from the acquired firm an improved business that is much more valuable to consumers.  In that case, the expected value of the merger in welfare terms might be very positive.  Similarly, Petit would permit acquisitions by a platform of firms outside the tipped market as long as the platform has the ability and incentive to grow the target.  But the growth path of the target is not set in stone.  The platform might use it as a constrained complement, while an unaffiliated owner might build it into something both more valuable to consumers and threatening to the platform.  Maybe one of these stories describes Facebook’s acquisition of Instagram.

The prototypical anticompetitive horizontal merger story is one in which actual or potential competitors agree to share the monopoly rents that would be dissipated by competition between them. That story is confounded by communications that seem like threats, which imply a story of exclusion rather than collusion.  Petit refers to one such story.  But the threat story can be misleading.  Suppose, for example, that Platform sees Startup introduce a new business concept and studies whether it could profitably emulate Startup.  Suppose further that Platform concludes that, because of scale and scope economies available to it, it could develop such a business and come to dominate the market for a cost of $100 million acting alone or $25 million if it can acquire Startup and take advantage of its existing expertise, intellectual property, and personnel.  In that case, Platform might explain to Startup the reality that Platform is going to take the new market either way and propose to buy Startup for $50 million (thus offering Startup two-thirds of the gains from trade).  Startup might refuse, perhaps out of vanity or greed, in which case Platform as promised might enter aggressively and, without engaging in predatory or other anticompetitive conduct, drive Startup from the market.  To an omniscient law enforcement regime, there should be no antitrust violation from either an acquisition or the aggressive competition.  Either way, the more efficient provider prevails so the optimum outcome is realized in the new market.  The merger would have been more efficient because it would have avoided wasteful duplication of startup costs, and the merger proposal (later characterized as a threat) was thus a benign, even procompetitive, invitation to collude.  It would be a different story of course if Platform could overcome Startup’s first mover advantage only by engaging in anticompetitive conduct.
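The arithmetic behind this hypothetical is worth making explicit. A minimal sketch, using only the figures given in the story above (variable names are mine, for illustration):

```python
# Figures from the hypothetical above, in $ millions.
cost_alone = 100     # Platform's cost to build the new business by itself
cost_with_deal = 25  # Platform's development cost if it acquires Startup
offer = 50           # Platform's proposed purchase price for Startup

# The joint savings from merging rather than duplicating Startup's work:
gains_from_trade = cost_alone - cost_with_deal   # 75

# Startup's share of those gains is the purchase price itself;
# Platform keeps the remainder of the savings.
startup_share = offer                            # 50
platform_share = gains_from_trade - offer        # 25

# Startup is offered two-thirds of the total gains from trade.
print(gains_from_trade, startup_share / gains_from_trade)
```

Either outcome leaves Platform better off than building alone, which is why the $50 million proposal reads as a genuine offer to share surplus rather than a bluff.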

The problem is that antitrust decision makers often cannot understand all the facts.  Take the threat story, for example.  If Startup acquiesces and accepts the $50 million offer, the decision maker will have to determine whether Platform could have driven Startup from the market without engaging in predatory or anticompetitive conduct and, if not, whether absent the merger the parties would have competed against one another.  In other situations, decision makers are asked to determine whether the conduct at issue would be more likely than the but-for world to promote innovation or other, similarly elusive matters.

U.S. antitrust law accommodates its unavoidable uncertainty by various default rules and practices.  Some, like per se rules and the controversial Philadelphia National Bank presumption, might on occasion prohibit conduct that would actually have been benign or even procompetitive.  Most, however, insulate from antitrust liability conduct that might actually be anticompetitive.  These include rules applicable to predatory pricing, refusals to deal, two-sided markets, and various matters involving patents.  Perhaps more important are proof requirements in general.  U.S. antitrust law is based on the largely unexamined notion that false positives are worse than false negatives and thus, for the most part, puts the burden of uncertainty on the plaintiff.

Petit is proposing, in effect, an alternative approach for the digital platforms.  This approach would not just proscribe anticompetitive conduct.  It would, instead, apply to specific firms special rules that are intended to promote a desired outcome, the reduction in monopoly rents in tipped digital markets.  So, one question suggested by Petit’s provocative study is whether the inevitable uncertainty surrounding issues of platform competition are best addressed by the kinds of categorical rules Petit proposes or by case-by-case application of abstract legal principles.  Put differently, assuming that economic welfare is the objective, what is the best way to minimize error costs?

Broadly speaking, there are two kinds of error costs: specification errors and application errors.  Specification errors reflect legal rules that do not map perfectly to the normative objectives of the law (e.g., a rule that would prohibit all horizontal mergers by dominant platforms when some such mergers are procompetitive or welfare-enhancing).  Application errors reflect mistaken application of the legal rule to the facts of the case (e.g., an erroneous determination whether the conduct excludes rivals or provides efficiency benefits).   

Application errors are the most likely source of error costs in U.S. antitrust law.  The law relies largely on abstract principles that track the normative objectives of the law (e.g., conduct by a monopoly that excludes rivals and has no efficiency benefit is illegal). Several recent U.S. antitrust decisions (American Express, Qualcomm, and Farelogix among them) suggest that error costs in a law enforcement regime like that in the U.S. might be substantial and even that case-by-case application of principles that require applying economic understanding to diverse factual circumstances might be beyond the competence of generalist judges.  Default rules applicable in special circumstances reduce application errors but at the expense of specification errors.

Specification errors are more likely with categorical rules, like those suggested by Petit.  The total costs of those specification errors are likely to exceed the costs of mistaken decisions in individual cases because categorical rules guide firm conduct in general, not just in decided cases, and rules that embody specification errors are thus likely to encourage undesirable conduct and to discourage desirable conduct in matters that are not the subject of enforcement proceedings.  Application errors, unless systematic and predictable, are less likely to impose substantial costs beyond the costs of mistaken decisions in the decided cases themselves.  Whether any particular categorical rules are likely to have error costs greater than the error costs of the existing U.S. antitrust law will depend in large part on the specification errors of the rules and on whether their application is likely to be accompanied by substantial application costs.

As discussed above, the particular rules suggested by Petit appear to embody important specification errors. They are also likely to lead to substantial application errors because they would require the determination of difficult factual issues: for example, whether the market at issue has tipped, whether a merger is horizontal, and whether the platform’s entry into an untipped market is intended to be permanent. It thus seems unlikely, at least from this casual review, that adopting the rules Petit suggests would reduce error costs.

Petit’s impressive study might therefore be most valuable not as a roadmap for action but as a source of insight and understanding of the facts – what Petit calls a “mental model to help decision makers understand the idiosyncrasies of digital markets.” Viewed not as a prescription for action but as a description of the digital world, the Moligopoly Scenario can help address the urgent matter of reducing the costs of application errors in U.S. antitrust law.

With the passing of Justice Ruth Bader Ginsburg, many have already noted her impact on the law as an advocate for gender equality and women’s rights, her importance as a role model for women, and her civility. Indeed, a key piece of her legacy is that she was a jurist in the classic sense of the word: she believed in using coherent legal reasoning to reach a result. And that meant Justice Ginsburg’s decisions sometimes cut against partisan political expectations. 

This is clearly demonstrated in our little corner of the law: RBG frequently voted with the majority in antitrust cases in ways that populist left-wing observers would find surprising. Moreover, she authored an important opinion on price discrimination that likewise cuts against the expectations of populist antitrust critics and demonstrates her nuanced jurisprudence.

RBG’s record on the Court shows a respect for the evolving nature of antitrust law

In the absence of a larger body of antitrust opinions of her own, it is difficult to discern what was actually in Justice Ginsburg’s mind as she encountered antitrust issues. But her voting record reflects at least a willingness to approach antitrust in an apolitical manner.

Over the last several decades, Justice Ginsburg joined the Supreme Court majority in many cases dealing with a wide variety of antitrust issues, including the duty to deal doctrine, vertical restraints, joint ventures, and mergers. In many of these cases, RBG aligned herself with judgments of the type that the antitrust populists criticize.

The following are major consumer welfare standard cases that helped shape the current state of antitrust law in which she joined the majority or issued a concurrence: 

  • Verizon Commc’ns Inc. v. Law Offices of Curtis Trinko, LLP, 540 U.S. 398 (2004) (unanimous opinion heightening the standard for finding a duty to deal)
  • Pacific Bell Tel. Co. v. linkLine Commc’ns, Inc., 555 U.S. 438 (2009) (Justice Ginsburg joined the concurrence finding there was no “price squeeze” but suggesting the predatory-pricing claim should be remanded)
  • Weyerhaeuser Co. v. Ross-Simmons Hardwood Lumber Co., Inc., 549 U.S. 312 (2007) (unanimous opinion finding predatory buying claims are still subject to the dangerous probability of recoupment test from Brooke Group)
  • Apple Inc. v. Pepper, 139 S.Ct. 1514 (2019) (part of the majority, in an opinion written by Justice Kavanaugh, finding that iPhone owners were direct purchasers under Illinois Brick who may sue Apple for alleged monopolization)
  • State Oil Co. v. Khan, 522 U.S. 3 (1997) (unanimous opinion overturning per se treatment of vertical maximum price fixing under Albrecht and applying rule of reason standard)
  • Texaco Inc. v. Dagher, 547 U.S. 1 (2006) (unanimous opinion finding it is not per se illegal under §1 of the Sherman Act for a lawful, economically integrated joint venture to set the prices at which it sells its products)
  • Illinois Tool Works Inc. v. Independent Ink, Inc., 547 U.S. 28 (2006) (unanimous opinion finding a patent does not necessarily confer market power upon the patentee, in all cases involving a tying arrangement, the plaintiff must prove that the defendant has market power in the tying product)
  • U.S. v. Baker Hughes, Inc., 908 F.2d 981 (D.C. Cir. 1990) (unanimous panel opinion written by then-Judge Clarence Thomas while both he and then-Judge Ginsburg sat on the U.S. Court of Appeals for the D.C. Circuit, rejecting the government’s argument that a defendant in a Section 7 merger challenge can rebut a prima facie case only by a clear showing that entry into the market by competitors would be quick and effective)

Even where she joined the dissent in antitrust cases, she did so within the ambit of the consumer welfare standard. Thus, while she was part of the dissent in cases like Leegin Creative Leather Products, Inc. v. PSKS, Inc., 551 U.S. 877 (2007), Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007), and Ohio v. American Express Co., 138 S.Ct. 2274 (2018), she still left a legacy of supporting modern antitrust jurisprudence. In those cases, RBG simply had a different vision of how best to optimize consumer welfare.

Justice Ginsburg’s Volvo Opinion

The 2006 decision in Volvo Trucks North America, Inc. v. Reeder-Simco GMC, Inc. was one of the few antitrust opinions authored by RBG and shows her appreciation for the consumer welfare standard. In particular, Justice Ginsburg affirmed the principle that antitrust law is designed to protect competition, not competitors, a lesson that has lately needed refreshing.

Volvo, a 7-2 decision, dealt with the Robinson-Patman Act’s prohibition on price discrimination. Reeder-Simco, a retail dealer that sold Volvo trucks, alleged that Volvo violated the Robinson-Patman Act by selling trucks to it at less favorable prices than it offered other Volvo dealers.

The Robinson-Patman Act is frequently cited by antitrust populists as a way to return antitrust law to its former glory. A main argument of Lina Khan’s Amazon’s Antitrust Paradox was that the Chicago School had distorted the law on vertical restraints generally, and price discrimination in particular. One source of this distortion in Khan’s opinion has been the Supreme Court’s mishandling of the Robinson-Patman Act.

Yet in Volvo we see Justice Ginsburg wrestling with the Robinson-Patman Act in a way that gives effect to the law as written, an approach that runs counter to the contemporary populist impulse to revise the Court’s interpretation of the antitrust laws. Citing Brown & Williamson, Justice Ginsburg first noted that:

Mindful of the purposes of the Act and of the antitrust laws generally, we have explained that Robinson-Patman does not “ban all price differences charged to different purchasers of commodities of like grade and quality.”

Instead, the Robinson-Patman Act was aimed at a particular class of harms that Congress believed existed when large chain-stores were able to exert something like monopsony buying power. Moreover, Justice Ginsburg noted, the Act “proscribes ‘price discrimination only to the extent that it threatens to injure competition’[.]”

Under the Act, the plaintiff needed to produce evidence that Volvo systematically treated it as a “disfavored” purchaser as against another set of “favored” purchasers. Instead, all Reeder-Simco could produce was anecdotal and inconsistent evidence of being disfavored. To be sure, the plaintiff (and, in theory, other similarly situated Volvo dealers) may well have been harmed in some sense by Volvo’s pricing. Yet Justice Ginsburg was unwilling to rewrite the Act on Congress’s behalf to incorporate harms discovered later (a fact that would not earn her accolades in populist circles these days).

Instead, Justice Ginsburg wrote that:

Interbrand competition, our opinions affirm, is the “primary concern of antitrust law.”… The Robinson-Patman Act signals no large departure from that main concern. Even if the Act’s text could be construed in the manner urged by [plaintiffs], we would resist interpretation geared more to the protection of existing competitors than to the stimulation of competition. In the case before us, there is no evidence that any favored purchaser possesses market power, the allegedly favored purchasers are dealers with little resemblance to large independent department stores or chain operations, and the supplier’s selective price discounting fosters competition among suppliers of different brands… By declining to extend Robinson-Patman’s governance to such cases, we continue to construe the Act “consistently with broader policies of the antitrust laws.” Brooke Group, 509 U.S., at 220… (cautioning against Robinson-Patman constructions that “extend beyond the prohibitions of the Act and, in doing so, help give rise to a price uniformity and rigidity in open conflict with the purposes of other antitrust legislation”).

Thus, concerned with the soundness of her jurisprudence in the face of a well-developed body of antitrust law, Justice Ginsburg chose to continue developing that body of law rather than engage in judicial policymaking in favor of a sympathetic plaintiff.

It must surely be tempting for a justice on the Court to adopt a less principled approach to the law in any given case, and it is all the more impressive that Justice Ginsburg consistently stuck to her principles. We can only hope her successor takes note of her example.

During last week’s antitrust hearing, Representative Jamie Raskin (D-Md.) provided a sound bite that served as a salvo: “In the 19th century we had the robber barons, in the 21st century we get the cyber barons.” But with sound bites, much like bumper stickers, there’s no room for nuance or scrutiny.

The news media has extensively covered the “questioning” of the CEOs of Facebook, Google, Apple, and Amazon (collectively, “Big Tech”). Of course, most of this questioning was actually political posturing, with little regard for the actual answers or for antitrust law. But just as with the so-called robber barons, the story of Big Tech is much more interesting and complex.

The myth of the robber barons: Market entrepreneurs vs. political entrepreneurs

The Robber Barons: The Great American Capitalists, 1861–1901 (1934) by Matthew Josephson was written in the midst of America’s Great Depression. Josephson, a Marxist with sympathies for the Soviet Union, made the case that the 19th-century titans of industry got rich on the backs of the poor during the industrial revolution. The idea that the rich are wealthy because they robbed the rest of us has long outlived Josephson and Marx, down to the present day, as exemplified by the writings of Matt Stoller and the politics of the House Judiciary Committee.

In The Myth of the Robber Barons, Burton Folsom, Jr. makes the case that much of the received wisdom about the great 19th-century businessmen is wrong. He distinguishes between market entrepreneurs, who generated wealth by selling newer, better, or less expensive products on the free market without any government subsidies, and political entrepreneurs, who became rich primarily by influencing the government to subsidize their businesses or to enact legislation and regulation that harmed their competitors.

Folsom narrates the stories of market entrepreneurs, like Thomas Gibbons & Cornelius Vanderbilt (steamships), James J. Hill (railroads), the Scranton brothers (iron rails), Andrew Carnegie & Charles Schwab (steel), and John D. Rockefeller (oil), who created immense value for consumers by drastically reducing the prices of the goods and services their companies provided. Yes, these men got rich. But the value society received was arguably even greater. Wealth was created because market exchange is a positive-sum game.

On the other hand, political entrepreneurs, like Robert Fulton & Edward Collins (steamships) and Leland Stanford & Henry Villard (railroads), drained societal resources by using taxpayer money to create inefficient monopolies. Because their favored position insulated them from market discipline, cutting costs and prices mattered less to them than to the market entrepreneurs. Their wealth came at the expense of the rest of society, because political exchange is a zero-sum game.

Big Tech makes society better off

Today’s titans of industry, i.e., Big Tech, have created enormous value for society. This is almost impossible to deny, though some try. From zero-priced search on Google, to the convenience and price of products on Amazon, to the nominally free social network(s) of Facebook, to the plethora of options in Apple’s App Store, consumers have greatly benefited from Big Tech. Consumers flock to Google, Facebook, Amazon, and Apple for a reason: they believe they are getting a great deal.

By and large, the techlash comes from “intellectuals” who think they know better than consumers acting in the marketplace what is good for them. And as Alec Stapp has noted, Americans in opinion polls consistently put a great deal of trust in Big Tech, at least compared to government institutions.

One of the basic building blocks of economics is that both parties benefit from a voluntary exchange ex ante, or else they would not be willing to engage in it. The fact that consumers use Big Tech to the extent they do is overwhelming evidence of its value. Obfuscations like “market power” mislead more than they inform. In the absence of governmental barriers to entry, consumers voluntarily choosing Big Tech does not show that these firms hold power over them; it shows that they provide a great service.

Big Tech companies are run by entrepreneurs who must ultimately answer to consumers. In a market economy, profits are a signal that entrepreneurs have successfully brought value to society. But they are also a signal to potential competitors. If Big Tech companies don’t continue to serve the interests of their consumers, they risk losing them to competitors.

Big Tech’s CEOs seem to get this. For instance, Jeff Bezos’ written testimony emphasized the importance of continual innovation at Amazon as a reason for its success:

Since our founding, we have strived to maintain a “Day One” mentality at the company. By that I mean approaching everything we do with the energy and entrepreneurial spirit of Day One. Even though Amazon is a large company, I have always believed that if we commit ourselves to maintaining a Day One mentality as a critical part of our DNA, we can have both the scope and capabilities of a large company and the spirit and heart of a small one. 

In my view, obsessive customer focus is by far the best way to achieve and maintain Day One vitality. Why? Because customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and a constant desire to delight customers drives us to constantly invent on their behalf. As a result, by focusing obsessively on customers, we are internally driven to improve our services, add benefits and features, invent new products, lower prices, and speed up shipping times—before we have to. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it. And I could give you many such examples. Not every business takes this customer-first approach, but we do, and it’s our greatest strength.

The economics of multi-sided platforms: How Big Tech does it

Economically speaking, Big Tech companies are (mostly) multi-sided platforms. Multi-sided platforms differ from ordinary firms in that they must serve two or more distinct groups of customers in order to generate demand from any one of them.

Economist David Evans, who has done as much as any to help us understand multi-sided platforms, has identified three different types:

  1. Market-Makers enable members of distinct groups to transact with each other. Each member of a group values the service more highly if there are more members of the other group, thereby increasing the likelihood of a match and reducing the time it takes to find an acceptable match. (Amazon and Apple’s App Store)
  2. Audience-Makers match advertisers to audiences. Advertisers value a service more if there are more members of an audience who will react positively to their messages; audiences value a service more if there is more useful “content” provided by audience-makers. (Google, especially through YouTube, and Facebook, especially through Instagram)
  3. Demand-Coordinators make goods and services that generate indirect network effects across two or more groups. These platforms do not strictly sell “transactions” like a market maker or “messages” like an audience-maker; they are a residual category much like irregular verbs – numerous, heterogeneous, and important. Software platforms such as Windows and the Palm OS, payment systems such as credit cards, and mobile telephones are demand coordinators. (Android, iOS)

To create value, a Big Tech company has to consider the consumers on every side of the platform it operates. Sometimes this means consumers on one side of the platform subsidize those on the other.

For instance, Google doesn’t charge users of its search engine, YouTube, or Gmail; instead, companies pay Google to advertise to those users. Similarly, Facebook doesn’t charge the users of its social network; advertisers on the other side of the platform subsidize them.
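This cross-subsidy logic can be illustrated with a toy pricing model: when every user also generates advertising revenue, the profit-maximizing price on the user side can fall to zero or even below. The linear demand function and all of the numbers below are invented purely for illustration; they are not estimates for any real platform.

```python
# Toy model of an ad-funded two-sided platform. The demand function
# and all parameter values are invented purely for illustration.

BASE_USERS = 1_000_000      # users when the user-side price is zero
SENSITIVITY = 400_000       # users lost per dollar of user-side price
AD_REVENUE_PER_USER = 5.0   # revenue each user generates from advertisers

def profit(user_price: float) -> float:
    """Platform profit: (user fee + ad revenue) per user, times users."""
    users = max(0.0, BASE_USERS - SENSITIVITY * user_price)
    return users * (user_price + AD_REVENUE_PER_USER)

# Charging users nothing beats charging them a positive price, and in
# this toy model an outright subsidy to users does even better.
for price in (2.0, 1.0, 0.0, -1.25):
    print(price, profit(price))
```

In this sketch, the platform’s best move is actually a negative user-side price, which is one way to read free services with ever-richer features: the advertiser side of the platform is paying for the user side.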

As their competitors and critics love to point out, there are some complications, in that some platforms also compete in the markets they create. For instance, Apple does place its own apps in its App Store, and Amazon does engage in some first-party sales on its platform. But generally speaking, both Apple and Amazon act as matchmakers for exchanges between users and third parties.

The difficulty for multi-sided platforms is that they need to balance the interests of each part of the platform in a way that maximizes its value. 

Google and Facebook need to balance the interests of users and advertisers. In each case, this means a free service for users that advertisers subsidize, with advertisers gaining value from ads tailored to users’ search histories, browsing histories, and likes and shares. Apple and Amazon need to create platforms that are valuable to both buyers and sellers, and to judge how much first-party competition to allow before they lose the benefits of third-party sales.

There are no easy answers in running a search engine, a video service, a social network, an app store, or an online marketplace. Everything from moderation practices, to pricing on each side of the platform, to the degree of competition from the platform operator itself must be balanced correctly, or these platforms will lose participants on one side or the other to competitors.


Representative Raskin’s “cyber barons” were dragged through the mud by Congress. But much like the falsely labeled robber barons of the 19th century, who were truly market entrepreneurs, the Big Tech companies of today are wrongfully maligned.

No one is forcing consumers to use these platforms. The incredible benefits they have brought to society through market processes show they are not robbing anyone. Instead, they are constantly innovating and attempting to strike a balance between the consumers on each side of their platforms.

The myth of the cyber barons need not live on any longer than last week’s farcical antitrust hearing.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Sam Bowman (Director of Competition Policy, ICLE).]

No support package for workers and businesses during the coronavirus shutdown can be comprehensive. In the UK, for example, the government is offering to pay 80% of the wages of furloughed workers, but this will not apply to self-employed people or many gig economy workers, and so far it’s been hard to think of a way of giving them equivalent support. It’s likely that the bill going through Congress will have similar issues.

Whether or not solutions are found for these problems, it may be worth putting in place what you might call a ‘backstop’ policy that allows people to access money in case they cannot access it through the other policies that are being put into place. This doesn’t need to provide equivalent support to other packages, just to ensure that everyone has access to the money they need during the shutdown to pay their bills and rent, and cover other essential costs. The aim here is just to keep everyone afloat.

One mechanism for doing this might be to offer income-contingent loans to anyone currently resident in the country during the shutdown period. These are loans whose repayment is determined by the borrower’s income later on, and are how students in the UK and Australia pay for university. 

In the UK, for example, under the current student loan repayment terms, once a student has graduated, their earnings above a certain income threshold (currently £25,716/year) are taxed at 9% to repay the loan. So, if I earn £30,000/year and have a loan to repay, I pay an additional £385.56/year to repay the loan (9% of the £4,284 I’m earning above the income threshold); if I earn £40,000/year, I pay an additional £1,285.56/year. The loan incurs an annual interest rate equal to an annual measure of inflation plus 3%. Once you have paid off the loan, no more repayments are taken, and any amount still unpaid thirty years after the loan was first taken out is written off.
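The repayment rule just described is simple enough to sketch in code. This is only an illustration of the formula as quoted above; the threshold and rate are the figures given in the text, not necessarily the current official parameters.

```python
# Sketch of the UK income-contingent student loan repayment formula
# described above, using the threshold and rate quoted in the text.

REPAYMENT_THRESHOLD = 25_716  # annual income threshold, GBP
REPAYMENT_RATE = 0.09         # 9% of income above the threshold

def annual_repayment(income: float) -> float:
    """Annual repayment owed at a given gross annual income."""
    return max(0.0, income - REPAYMENT_THRESHOLD) * REPAYMENT_RATE

# The worked examples from the text:
print(round(annual_repayment(30_000), 2))  # 385.56
print(round(annual_repayment(40_000), 2))  # 1285.56
```

Below the threshold, nothing is owed; the 9% applies only to the marginal income above £25,716.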

In practice, these terms mean that there is a significant subsidy to university students, most of whom never pay off the full amount. Under a less generous repayment scheme that was in place until recently, with a lower income threshold for repayment, out of every £1 borrowed by students the long-run cost to the government was 43.3p. This is regarded by many as a feature of the system rather than a bug, because of the belief that university education has positive externalities, and because this approach pools some of the risk associated with pursuing a graduate-level career (the risk of ending up with a low-paid job despite having spent a lot on your education, for example).

For loans available to the wider public, a different set of repayment criteria could apply. We could allow anyone who has filed a W-2 or 1099 tax statement in the past eighteen months (or, in the UK, a self-assessment tax return) to borrow up to roughly 20% of median national annual income, to be paid back via an extra few percentage points on their federal income tax (or, in the UK, their National Insurance contributions) over the following ten years, with the rate returning to normal once the loan is paid off. Some other provision might have to be made for people approaching retirement.

With a low, inflation-indexed interest rate, this would allow people who need funds to access them, but make it mostly pointless for anyone who did not need to borrow. 

If, like student tuition fees, loans were written off after a certain period, low earners would probably never pay back the entirety of the ‘loan’. As a one-off transfer (i.e., one that does not distort recipients’ work or savings incentives) to low-paid people, this is probably not a bad thing. Most people, though, would pay back as and when they were able to. For self-employed people in particular, it could be a valuable source of liquidity during an unexpected period in which they cannot work. Overall, it would function as a cash transfer to lower earners and a liquidity injection for everyone else who takes advantage of the scheme.
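A toy simulation shows how the same mechanism acts as a transfer to low earners and a liquidity injection for everyone else. Every parameter here (loan size, tax surcharge, interest rate, income threshold, and write-off horizon) is an illustrative assumption, not a feature of any actual proposal.

```python
# Toy simulation of the income-contingent backstop loan sketched above.
# All parameter values are illustrative assumptions.

def simulate(loan, income, surcharge=0.03, interest=0.02,
             threshold=25_000, years=10):
    """Return (total repaid, balance written off) after `years` years."""
    balance = loan
    repaid = 0.0
    for _ in range(years):
        balance *= 1 + interest                    # interest accrues
        due = max(0.0, income - threshold) * surcharge
        payment = min(balance, due)                # can't repay more than owed
        balance -= payment
        repaid += payment
    return repaid, balance  # any remaining balance is written off

# A higher earner repays the loan in full (plus interest); a low earner
# repays only a fraction and has the rest written off.
print(simulate(8_000, 60_000))
print(simulate(8_000, 27_000))
```

Under these assumed numbers, the high earner clears the balance within the ten-year window, while the low earner repays only a few hundred and has the remainder written off, which is the de facto transfer described above.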

This would have advantages over giving money to every US or UK citizen, as some have proposed: because most of the money handed out would be repaid, the net burden on taxpayers would be lower, and the deadweight losses created by the additional tax needed to pay for it would be smaller. It would also eliminate the need for means-testing, relying on self-selection instead.

The biggest obstacle to rolling something like this out may be administrative. However, if the government committed to setting up such a scheme, banks and credit card companies might be willing to step in in the short run to issue short-term loans, knowing that people would be able to repay them once the government scheme was up and running. To facilitate this, the government could guarantee the loans banks and credit card companies make now, then allow people to opt into the income-contingent loans later, so no immediate legislation would be needed.

Speed is extremely important in helping people plug the gaps in their finances. As a complement to the government’s other plans, income-contingent loans to groups like self-employed people may be a useful way of catching people who would otherwise fall through the cracks.

Goodhart and Bad Policy

Eric Fruits —  18 March 2020

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

Wells Fargo faces billions of dollars in fines for creating millions of fraudulent savings, checking, credit-card, and insurance accounts in its customers’ names without their consent. Last weekend, tens of thousands of travelers were likely exposed to coronavirus while waiting hours for screening at crowded airports. Consumers and businesses around the world pay higher energy prices as their governments impose costly programs to reduce carbon emissions.

These seemingly unrelated observations have something in common: They are all victims of some version of Goodhart’s Law.

Charles Goodhart was a central banker, so his original statement was a bit more dense: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”

The simple version of the law is: “When a measure becomes a target it ceases to be a good measure.”

Investor Charlie Munger puts it more succinctly: “Show me the incentive and I’ll show you the outcome.”

The Wells Fargo scandal is a case study in Goodhart’s Law. It grew out of a corporate culture, pushed by CEO Dick Kovacevich, that emphasized “cross-selling” products to existing customers, as related in a Vanity Fair profile.

As Kovacevich told me in a 1998 profile of him I wrote for Fortune magazine, the key question facing banks was “How do you sell money?” His answer was that financial instruments—A.T.M. cards, checking accounts, credit cards, loans—were consumer products, no different from, say, screwdrivers sold by Home Depot. In Kovacevich’s lingo, bank branches were “stores,” and bankers were “salespeople” whose job was to “cross-sell,” which meant getting “customers”—not “clients,” but “customers”—to buy as many products as possible. “It was his business model,” says a former Norwest executive. “It was a religion. It very much was the culture.”

It was underpinned by the financial reality that customers who had, say, lines of credit and savings accounts with the bank were far more profitable than those who just had checking accounts. In 1997, prior to Norwest’s merger with Wells Fargo, Kovacevich launched an initiative called “Going for Gr-Eight,” which meant getting the customer to buy eight products from the bank. The reason for eight? “It rhymes with GREAT!” he said.

The concept makes sense. It’s easier to get sales from existing customers than trying to find new customers. Also, if revenues are rising, there’s less pressure to reduce costs. 

Kovacevich came to Wells Fargo in the late 1990s by way of its merger with Norwest, where he was CEO. After the merger, he noticed that the Wells unit was dragging down the merged firm’s sales-per-customer numbers. So, Wells upped the pressure. 

One staffer reported that every morning, staff would have a conference call with their managers to explain how they’d hit their sales goal for the day. If the goal wasn’t met by the end of the day, staff had to explain why they missed it and how they planned to fix it. Bonuses were offered for hitting targets, and staffers were let go for missing them.

Wells Fargo had rules against “gaming” the system. Yes, it was called “gaming.” But the incentives were so strongly aligned in favor of gaming, that the rules were ignored.

Wells Fargo’s internal investigation estimated that between 2011 and 2015 its employees had opened more than 1.5 million deposit accounts and more than 565,000 credit-card accounts that may not have been authorized. Customers were charged fees on accounts they didn’t know they had, collection agencies pursued them over the unpaid fees, cars were repossessed, and homes went into foreclosure.

Goodhart’s Law hit Wells Fargo hard. Cross-selling was the bank’s target. Once management placed pressure on hitting that target, cross-selling ceased to be a good measure of performance; worse, the pressure corrupted the entire retail side of the business.

Last Friday, my son came home from his study abroad in Spain. He landed less than eight hours before the travel ban went into effect. He was lucky: he got out of the airport less than an hour after landing.

The next day was pandemonium. In addition to the travel ban, the U.S. imposed health screening on overseas arrivals. Over the weekend, travelers reported being forced into crowded terminals for up to eight hours to go through customs and receive screening. 

The screening process produced exactly what health officials were advising against: close contact and large crowds. We still don’t know whether the screenings helped reduce the spread of the coronavirus or whether the forced crowding fostered it.

The government seemed to forget Goodhart’s Law. Public demand for enhanced screenings made screening the target. Screenings were implemented hastily without any thought of the consequences of clustering potentially infected flyers with the uninfected. Someday, we may learn that a focus on screening came at the expense of slowing the spread.

More and more, we’re told that climate change presents an existential threat to our planet and that the main culprit is carbon emissions from economic activity. In response, governments around the world are taking extraordinary measures to reduce carbon emissions.

In Oregon, the legislature has been trying for more than a decade to implement a cap-and-trade program to reduce carbon emissions, even though the state accounts for less than one-tenth of one percent of global greenhouse gas emissions. Even if Oregon went to zero GHG emissions, the world would never know.

Legislators pushing cap-and-trade want the state to address climate change immediately. But, when the microphones are turned off, they admit their cap-and-trade program would do nothing to slow global climate change.

In yet another case of Goodhart’s Law, Oregon and other jurisdictions have made carbon emissions the target. As a consequence, if cap-and-trade were ever to become law in the state, businesses and consumers would be paying hundreds or thousands of dollars a year more in energy prices, with zero effect on global temperatures. Those dollars could be better spent acknowledging the consequences of climate change and making investments to deal with those consequences.

The funny thing about Goodhart’s Law is that once you know about it, you see it everywhere. And it’s not just some quirky observation. It’s a failure mode that can have serious consequences for our health, our livelihoods, and our economy.

In mid-November, the 50 state attorneys general (AGs) investigating Google’s advertising practices expanded their antitrust probe to include the company’s search and Android businesses. Texas Attorney General Ken Paxton, the lead on the case, was supportive of the development, but made clear that other states would manage the investigations of search and Android separately. While attorneys might see the benefit in splitting up search and advertising investigations, platforms like Google need to be understood as a coherent whole. If the state AGs’ case is truly concerned with the overall impact on the welfare of consumers, it will need to be firmly grounded in the unique economics of this platform.

Back in September, 50 state AGs, including those in Washington, DC and Puerto Rico, announced an investigation into Google. In opening the case, Paxton said that, “There is nothing wrong with a business becoming the biggest game in town if it does so through free market competition, but we have seen evidence that Google’s business practices may have undermined consumer choice, stifled innovation, violated users’ privacy, and put Google in control of the flow and dissemination of online information.” While the original document demands focused on Google’s “overarching control of online advertising markets and search traffic,” reports since then suggest that the primary investigation centers on online advertising.

Defining the market

Since the market definition is the first and arguably the most important step in an antitrust case, Paxton has tipped his hand and shown that the investigation is converging on the online ad market. Yet, he faltered when he wrote in The Wall Street Journal that, “Each year more than 90% of Google’s $117 billion in revenue comes from online advertising. For reference, the entire market for online advertising is around $130 billion annually.” As Patrick Hedger of the Competitive Enterprise Institute was quick to note, Paxton was comparing Google’s global revenue against domestic advertising statistics. In reality, Google’s share of the online advertising market in the United States is 37 percent and is widely expected to fall.

When Google faced scrutiny by the Federal Trade Commission in 2013, the leaked staff report explained that “the Commission and the Department of Justice have previously found online ‘search advertising’ to be a distinct product market.” This finding, which dates from 2007, simply wouldn’t stand today. Facebook’s ad platform was launched in 2007 and has grown to become a major competitor to Google. Even more recently, Amazon has jumped into the space and independent platforms like Telaria, Rubicon Project, and The Trade Desk have all made inroads. In contrast to the late 2000s, advertisers now use an average of about four different online ad platforms.

Moreover, the relationship between ad prices and industry concentration is complicated. In traditional economic analysis, fewer suppliers of a product generally translates into higher prices. In the online ad market, however, fewer platforms means that ad buyers can target people through keywords more efficiently. Because advertisers have access to superior information, research finds that more concentration tends to lead to lower search-engine revenues.

The addition of new fronts in the state AGs’ investigation could spell disaster for consumers. While search and advertising are distinct markets, it is the act of tying the two together that makes platforms like Google valuable to users and advertisers alike. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have far outsized effects on the overall platform value because each side responds to the other. If users experience an increase in price or a reduction in quality, then they will use the platform less or just log off completely. Advertisers see this change in users and react by reducing their demand for ad placements as well. When advertisers drop out, the total amount of content also recedes and users react once again. Economists call these relationships demand interdependencies. The demand on one side of the market is interdependent with demand on the other. Research on magazines, newspapers, and social media sites all support the existence of demand interdependencies. 

Economists David Evans and Richard Schmalensee, who were cited extensively in the Supreme Court case Ohio v. American Express, explained the importance of integrating them into competition analysis: “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If they are ignored, then the typical analytical tools will yield incorrect assessments. Understanding these relationships makes the investigation all the more difficult.
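The feedback loop between the two sides of a platform can be made concrete with a toy model. Everything below is an illustrative assumption rather than anything taken from the article or real data: user demand rises with advertiser-funded content, advertiser demand follows the user base, and the loop is iterated to a fixed point.

```python
# Toy two-sided platform model. All functional forms and parameters are
# illustrative assumptions, not estimates of any real platform.
def equilibrium(quality, alpha=0.5, beta=0.5, iters=200):
    """Iterate the user/advertiser feedback loop to a fixed point."""
    users, advertisers = 1.0, 1.0
    for _ in range(iters):
        users = quality * (1 + alpha * advertisers)  # users value quality plus ad-funded content
        advertisers = beta * users                   # advertisers follow the audience
    return users, advertisers

base_users, _ = equilibrium(quality=1.0)
hit_users, _ = equilibrium(quality=0.9)  # a 10% drop in platform quality...

# ...shrinks the equilibrium user base by more than 10%, because departing
# users take advertisers (and the content they fund) with them.
print(round((base_users - hit_users) / base_users, 3))  # → 0.129
```

The point of the sketch is only that, with interdependent demand, a shock to one side is amplified through the other side, which is why analyzing either side in isolation misstates the platform’s economics.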

The limits of remedies

Most likely, this current investigation will follow the trajectory of Microsoft in the 1990s, when states did the legwork for a larger case brought by the Department of Justice (DoJ). The DoJ already has its own investigation into Google and will probably pull together all of the parties for one large suit. Google is also subject to a probe by the House of Representatives Judiciary Committee. What is certain is that Google will be saddled with years of regulatory scrutiny, but what remains unclear is what kind of changes the AGs are after.

The investigation might aim to secure behavioral changes, but these often come with a cost in platform industries. The European Commission, for example, got Google to change its practices with its Android operating system for mobile phones. Much like search and advertising, the Android ecosystem is a platform with cross-subsidization and demand interdependencies between the various sides of the market. Because the company was ordered to stop tying the Android operating system to apps, manufacturers of phones and tablets now have to pay a licensing fee in Europe if they want Google’s apps and the Play Store. Remedies meant to change one side of the platform resulted in those relationships being unbundled. When regulators force cross-subsidization to become explicit prices, consumers are the ones who pay.

The absolute worst-case scenario would be a breakup of Google, which has been a centerpiece of Senator Elizabeth Warren’s presidential platform. As I explained last year, that would be a death warrant for the company:

[T]he value of both Facebook and Google comes in creating the platform, which combines users with advertisers. Before the integration of ad networks, the search engine industry was struggling and it was simply not a major player in the Internet ecosystem. In short, the search engines, while convenient, had no economic value. As Michael Moritz, a major investor in Google, said of those early years, “We really couldn’t figure out the business model. There was a period where things were looking pretty bleak.” But Google didn’t pave the way. Rather, Bill Gross succeeded in showing everyone how advertising could work to build a business. Google founders Larry Page and Sergey Brin merely adopted the model in 2002 and by the end of the year, the company was profitable for the first time. Marrying the two sides of the platform created value. Tearing them apart will also destroy value.

The state AGs need to resist making this investigation into a political showcase. As Pew noted in documenting the rise of North Carolina Attorney General Josh Stein to national prominence, “What used to be a relatively high-profile position within a state’s boundaries has become a springboard for publicity across the country.” While some might cheer the opening of this investigation, consumer welfare needs to be front and center. To properly understand how consumer welfare might be impacted by an investigation, the state AGs need to take seriously the path already laid out by platform economics. For the sake of consumers, let’s hope they are up to the task. 

These days, lacking a coherent legal theory presents no challenge to the would-be antitrust crusader. In a previous post, we noted how Shaoul Sussman’s predatory pricing claims against Amazon lacked a serious legal foundation. Sussman has returned with a new post that tries to build out his fledgling theory, but it fares little better under even casual scrutiny.

According to Sussman, Amazon’s allegedly anticompetitive 

conduct not only cemented its role as the primary destination for consumers that shop online but also helped it solidify its power over brands.

Further, the company 

was willing to go to great lengths to ensure brand availability and inventory, including turning to the grey market, recruiting unauthorized sellers, and even selling diverted goods and counterfeits to its customers.

Sussman is trying to make out a fairly convoluted predatory pricing case, but once again without ever truly connecting the dots in a way that develops a cognizable antitrust claim. According to Sussman: 

Amazon sold products as a first-party to consumers on its platform at below average variable cost and [] Amazon recently began to recoup its losses by shifting the bulk of the transactions that occur on the website to its marketplace, where millions of third-party sellers pay hefty fees that enable Amazon to take a deep cut of every transaction.

Sussman now bases this claim on an allegation that Amazon relied on “grey market” sellers on its platform, whose presence forces legitimate brands onto the Amazon Marketplace. Moreover, Sussman claims that — somehow — these brands coming on board on Amazon’s terms forces them to raise prices elsewhere, and that the net effect of this process at scale is that prices across the economy have risen.

As we detail below, Sussman’s chimerical argument depends on conflating unrelated concepts and relies on non-public anecdotal accounts to piece together an argument that, even if you squint at it, doesn’t make out a viable theory of harm.

Conflating legal reselling and illegal counterfeit selling as the “grey market”

The biggest problem with Sussman’s new theory is that he conflates pro-consumer unauthorized reselling and anti-consumer illegal counterfeiting, erroneously labeling both the “grey market”: 

Amazon had an ace up its sleeve. My sources indicate that the company deliberately turned to and empowered the “grey market” — where both genuine, authentic goods and knockoffs are purchased and resold outside of brands’ intended distribution pipes — to dominate certain brands.

By definition, grey market goods are — as the link provided by Sussman states — “goods sold outside the authorized distribution channels by entities which may have no relationship with the producer of the goods.” Yet Sussman suggests this also encompasses counterfeit goods. This conflation is no minor problem for his argument. In general, the grey market is legal and beneficial for consumers. Brands such as Nike may try to limit the distribution of their products to channels the company controls, but they cannot legally prevent third parties from purchasing Nike products and reselling them on Amazon (or anywhere else).

This legal activity can increase consumer choice and can lead to lower prices, even though Sussman’s framing omits these key possibilities:

In the course of my conversations with former Amazon employees, some reported that Amazon actively sought out and recruited unauthorized sellers as both third-party sellers and first-party suppliers. Being unauthorized, these sellers were not bound by the brands’ policies and therefore outside the scope of their supervision.

In other words, Amazon actively courted third-party sellers who could bring legitimate goods, priced competitively, onto its platform. Perhaps this gives Amazon “leverage” over brands that would otherwise like to control the activities of legal resellers, but it’s exceedingly strange to try to frame this as nefarious or anticompetitive behavior.

Of course, we shouldn’t ignore the fact that there are also potential consumer gains when Amazon tries to restrict grey market activity by partnering with brands. But it is up to Amazon and the brands to determine through a contracting process when it makes the most sense to partner and control the grey market, or when consumers are better served by allowing unauthorized resellers. The point is: there is simply no reason to assume that either of these approaches is inherently problematic. 

Yet, even when Amazon tries to restrict its platform to authorized resellers, it exposes itself to a whole different set of complaints. In 2018, the company made a deal with Apple to bring the iPhone maker onto its marketplace platform. In exchange for Apple selling its products directly on Amazon, the latter agreed to remove unauthorized Apple resellers from the platform. Sussman portrays this as a welcome development in line with the policy changes he recommends. 

But news reports last month indicate the FTC is reviewing this deal for potential antitrust violations. One is reminded of Ronald Coase’s famous lament that he “had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down they said it was predatory pricing, and when they stayed the same they said it was tacit collusion.” It seems the same is true for Amazon and its relationship with the grey market.

Amazon’s incentive to remove counterfeits

What is illegal — and explicitly against Amazon’s marketplace rules  — is selling counterfeit goods. Counterfeit goods destroy consumer trust in the Amazon ecosystem, which is why the company actively polices its listings for abuses. And as Sussman himself notes, when there is an illegal counterfeit listing, “Brands can then file a trademark infringement lawsuit against the unauthorized seller in order to force Amazon to suspend it.”

Sussman’s attempt to hang counterfeiting problems around Amazon’s neck belies the actual truth about counterfeiting: probably the most cost-effective way to stop counterfeiting is simply to prohibit all third-party sellers. Yet, a serious cost-benefit analysis of Amazon’s platforms could hardly support such an action (and would harm the small sellers that antitrust activists seem most concerned about).

But, more to the point, if Amazon’s strategy is to encourage piracy, it’s doing a terrible job. It engages in litigation against known pirates, and earlier this year it rolled out a suite of tools (called Project Zero) meant to help brand owners report and remove known counterfeits. As part of this program, according to Amazon, “brands provide key data points about themselves (e.g., trademarks, logos, etc.) and we scan over 5 billion daily listing update attempts, looking for suspected counterfeits.” And when a brand identifies a counterfeit listing, they can remove it using a self-service tool (without needing approval from Amazon). 

Any large platform that tries to make it easy for independent retailers to reach customers is going to run into a counterfeit problem eventually. In his rush to discover some theory of predatory pricing to stick on Amazon, Sussman ignores the tradeoffs implicit in running a large platform that essentially democratizes retail:

Indeed, the democratizing effect of online platforms (and of technology writ large) should not be underestimated. While many are quick to disparage Amazon’s effect on local communities, these arguments fail to recognize that by reducing the costs associated with physical distance between sellers and consumers, e-commerce enables even the smallest merchant on Main Street, and the entrepreneur in her garage, to compete in the global marketplace.

In short, Amazon Marketplace is designed to make it as easy as possible for anyone to sell their products to Amazon customers. As the WSJ reported:

Counterfeiters, though, have been able to exploit Amazon’s drive to increase the site’s selection and offer lower prices. The company has made the process to list products on its website simple—sellers can register with little more than a business name, email and address, phone number, credit card, ID and bank account—but that also has allowed impostors to create ersatz versions of hot-selling items, according to small brands and seller consultants.

The existence of counterfeits is a direct result of policies designed to lower prices and increase consumer choice. Thus, we would expect some number of counterfeits to exist as a result of running a relatively open platform. The question is not whether counterfeits exist, but — at least in terms of Sussman’s attempt to use antitrust law — whether there is any reason to think that Amazon’s conduct with respect to counterfeits is actually anticompetitive. But, even if we assume for the moment that there is some plausible way to draw a competition claim out of the existence of counterfeit goods on the platform, his theory still falls apart. 

There is both theoretical and empirical evidence for why Amazon is likely not engaged in the conduct Sussman describes. As a platform owner involved in a repeated game with customers, sellers, and developers, Amazon has an incentive to increase trust within the ecosystem. Counterfeit goods directly destroy that trust and likely decrease sales in the long run. If individuals can’t depend on the quality of goods on Amazon, they can easily defect to Walmart, eBay, or any number of smaller independent sellers. That’s why Amazon enters into agreements with companies like Apple to ensure there are only legitimate products offered. That’s also why Amazon actively sues counterfeiters in partnership with its sellers and brands, and also why Project Zero is a priority for the company.

Sussman relies on private, anecdotal claims while engaging in speculation that is entirely unsupported by public data 

Much of Sussman’s evidence is “[b]ased on conversations [he] held with former employees, sellers, and brands following the publication of [his] paper”, which — to put it mildly — makes it difficult for anyone to take seriously, let alone address head on. Here’s one example:

One third-party seller, who asked to remain anonymous, was willing to turn over his books for inspection in order to illustrate the magnitude of the increase in consumer prices. Together, we analyzed a single product, of which tens of thousands of units have been sold since 2015. The minimum advertised price for this single product, at any and all outlets, has increased more than 30 percent in the past four years. Despite this fact, this seller’s margins on this product are tighter than ever due to Amazon’s fee increases.

Needless to say, sales data showing the minimum advertised price for a single product “has increased more than 30 percent in the past four years” is not sufficient to prove, well, anything. At minimum, showing an increase in prices above costs would require data from a large and representative sample of sellers. All we have to go on from the article is a vague anecdote representing — maybe — one data point.

Not only is Sussman’s own data impossible to evaluate, but he bases his allegations on speculation that is demonstrably false. For instance, he asserts that Amazon used its leverage over brands in a way that caused retail prices to rise throughout the economy. But his starting point assumption is flatly contradicted by reality: 

To remedy this, Amazon once again exploited brands’ MAP policies. As mentioned, MAP policies effectively dictate the minimum advertised price of a given product across the entire retail industry. Traditionally, this meant that the price of a typical product in a brick and mortar store would be lower than the price online, where consumers are charged an additional shipping fee at checkout.

Sussman presents no evidence for the claim that “the price of a typical product in a brick and mortar store would be lower than the price online.” The widespread phenomenon of showrooming — when a customer examines a product at a brick-and-mortar store but then buys it for a lower price online — belies the notion that prices are higher online. One recent study by Nielsen found that “nearly 75% of grocery shoppers have used a physical store to ‘showroom’ before purchasing online.”

In fact, the company’s downward pressure on prices is so large that researchers now speculate that Amazon and other internet retailers are partially responsible for the low and stagnant inflation in the US over the last decade (dubbing this the “Amazon effect”). It is also curious that Sussman cites shipping fees as the reason prices are higher online while ignoring all the overhead costs of running a brick-and-mortar store, costs that online retailers don’t incur. The assumption that prices are lower in brick-and-mortar stores doesn’t pass the laugh test.


Sussman can keep trying to tell a predatory pricing story about Amazon, but the more convoluted his theories get — and the less based in empirical reality they are — the less convincing they become. There is a predatory pricing law on the books, but it’s hard to bring a case because, as it turns out, it’s actually really hard to profitably operate as a predatory pricer. Speculating over complicated new theories might be entertaining, but it would be dangerous and irresponsible if these sorts of poorly supported theories were incorporated into public policy.

This post was co-authored with Chelsea Boyd

The Food and Drug Administration has spoken, and its words have, once again, ruffled many feathers. Coinciding with the deadline for companies to lay out their plans to prevent youth access to e-cigarettes, the agency has announced new regulatory strategies that are sure to make it more difficult not only for young people to access e-cigarettes, but also for the adults who benefit from vaping.

More surprising than the FDA’s paradoxical strategy of preventing teen smoking by banning not combustible cigarettes, but their distant cousins, e-cigarettes, is that the biggest support for establishing barriers to accessing e-cigarettes seems to come from the tobacco industry itself.

Going above and beyond the FDA’s proposals, both Altria and JUUL are self-restricting flavor sales, creating more — not fewer — barriers to purchasing their products. And both companies now publicly support a 21-to-purchase mandate. Unfortunately, these barriers extend beyond restricting underage access and will no doubt affect adult smokers seeking access to reduced-risk products.

To say there are no benefits to self-regulation by e-cigarette companies would be misguided. Perhaps the biggest benefit is to increase the credibility of these companies in an industry where it has historically been lacking. Proposals to decrease underage use of their product show that these companies are committed to improving the lives of smokers. Going above and beyond the FDA’s regulations also allows them to demonstrate that they take underage use seriously.

Yet regulation, whether imposed by the government or as part of a business plan, comes at a price. This is particularly true in the field of public health. In other health areas, the FDA is beginning to recognize that it needs to balance regulatory prudence with the risks of delaying innovation. For example, by decreasing red tape in medical product development, the FDA aims to help people access novel treatments for conditions that are notoriously difficult to treat. Unfortunately, this mindset has not expanded to smoking.

Good policy, whether imposed by government or voluntarily adopted by private actors, should not help one group while harming another. Perhaps the question that should be asked, then, is not whether these new FDA regulations and self-imposed restrictions will decrease underage use of e-cigarettes, but whether they decrease underage use enough to offset the harm caused by creating barriers to access for adult smokers.

The FDA’s new point-of-sale policy restricts sales of flavored products (not including tobacco flavors or menthol/mint flavors) to either specialty, age-restricted, in-person locations or to online retailers with heightened age-verification systems. JUUL, Reynolds and Altria have also included parts of this strategy in their proposed self-regulations, sometimes going even further by limiting sales of flavored products to their company websites.

To many people, these measures may not seem like a significant barrier to purchasing e-cigarettes, but in fact, online retail is a luxury that many cannot access. Heightened online age-verification processes are likely to require most of the following: a credit or debit card, a Social Security number, a government-issued ID, a cellphone to complete two-factor authorization, and a physical address that matches the user’s billing address. According to a 2017 Federal Deposit Insurance Corp. survey, one in four U.S. households is unbanked or underbanked, an indicator of not having a debit or credit card. That factor alone excludes a quarter of the population, including many adults, from purchasing online. It’s also important to note that the demographic characteristics of people who lack the items required to make online purchases are also the characteristics most associated with smoking.

Additionally, it’s likely that these new point-of-sale restrictions won’t have much of an effect at all on the target demographic — those who are underage. According to a 2017 Centers for Disease Control and Prevention study, of the 9 percent of high school students who currently use electronic nicotine delivery systems (ENDS), only 13 percent reported purchasing the device for themselves from a store. This suggests that 87 percent of underage users won’t be deterred by prohibitive measures to move sales to specialty stores or online. Moreover, Reynolds estimates that only 20 percent of its VUSE sales happen online, indicating that more than three-quarters of users — consisting mainly of adults — purchase products in brick-and-mortar retail locations.

Existing enforcement techniques, if properly applied at the point of sale, could have a bigger impact on youth access. Interestingly, a recent analysis by Baker White of FDA inspection reports suggests that the agency’s existing approaches to prevent youth access may be lacking — meaning that there is much room for improvement. Overall, selling to minors is extremely low-risk for stores. The likelihood of a store receiving a fine for violating the minimum age of sale is about once every 36.7 years of operation, the financial risk is about 2 cents per day, and the risk of receiving a no-sales order (the most severe consequence) is once every 2,825 years of operation. Furthermore, for every $279 the FDA receives in fines, it spends over $11,800. With odds like those, it’s no wonder some stores are willing to sell to minors: their risk is minimal.

Eliminating access to flavored products is the other arm of the FDA’s restrictions. Many people have suggested that flavors are designed to appeal to youth, yet fewer talk about the proportion of adults who use flavored e-cigarettes. In reality, flavors are an important factor for adults who switch from combustible cigarettes to e-cigarettes. A 2018 survey of 20,676 US adults who frequently use e-cigarettes showed that “since 2013, fruit-flavored e-liquids have replaced tobacco-flavored e-liquids as the most popular flavors with which participants had initiated e-cigarette use.” By relegating flavored products to specialty retailers and online sales, the FDA has forced adult smokers who might otherwise switch from combustible cigarettes to e-cigarettes to go out of their way to initiate use.

It remains to be seen if new regulations, either self- or FDA-imposed, will decrease underage use. However, we already know who is most at risk for negative outcomes from these new regulations: people who are geographically disadvantaged (for instance, people who live far away from adult-only retailers), people who might not have credit to go through an online retailer, and people who rely on new flavors as an incentive to stay away from combustible cigarettes. It’s not surprising or ironic that these are also the people who are most at risk for using combustible cigarettes in the first place.

Given the likelihood that the new way of doing business will have minimal positive effects on youth use but negative effects on adult access, we must question what the benefits of these policies are. Fortunately, we know the answer already: The FDA gets political capital and regulatory clout; industry gets credibility; governments get more excise tax revenue from cigarette sales. And smokers get left behind.

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call laws this ineffective “good.” And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-about issue at both the FCC and FTC. Today, well over 4 billion robocalls are made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the "legitimate" telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result, the business proposition behind telemarketing quickly dried up. There simply weren't enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.
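Mechanically, tying the registry into a calling system is trivial, which is part of why legitimate firms complied so readily. A minimal sketch of the scrubbing step (function and variable names are my own, assuming a firm holds a local copy of the registry as a set of numbers):

```python
def scrub_call_list(call_list, dnc_registry):
    """Remove every number that appears on the Do Not Call registry,
    so the dialer never attempts those calls."""
    dnc = set(dnc_registry)  # build a set for O(1) membership checks
    return [number for number in call_list if number not in dnc]
```

A firm would refresh its local copy of the registry periodically and run every outbound campaign through a filter like this; given the per-call penalties, skipping this step was economically irrational for any identifiable business, which is exactly why the registry worked against legitimate callers and fails against anonymous ones.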

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn't engage in business anonymously. But today's robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1991, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that's the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. And we were still in a world of analog telephones, and Caller ID was still a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare; most of these calls came to landline phones during dinner, where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny, on the basis that it regulates commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny, a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter what standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive, and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA's web of strict liability for doing things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is a discussion to be had about how and whether calls like these should be permitted, but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vain and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, their principal duty is to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).
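That authenticated-calling architecture is the STIR/SHAKEN framework, in which the originating carrier signs a token attesting to the caller's number and the terminating carrier verifies it before completing the call. The real framework uses ES256-signed PASSporT tokens issued under a certificate system (RFCs 8224 and 8225); the sketch below substitutes an HMAC over a shared key purely to illustrate the sign-and-verify flow, and all function names are my own:

```python
import base64
import hashlib
import hmac
import json
import time


def sign_passport(orig_tn: str, dest_tn: str, key: bytes) -> str:
    """Build a simplified PASSporT-style token attesting to the calling
    number. (Real STIR/SHAKEN uses ES256 certificates, not a shared HMAC
    key; this stand-in keeps the example self-contained.)"""
    payload = json.dumps(
        {"orig": orig_tn, "dest": dest_tn, "iat": int(time.time())},
        sort_keys=True,
    ).encode()
    body = base64.urlsafe_b64encode(payload).decode().rstrip("=")
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify_passport(token: str, key: bytes) -> bool:
    """The terminating carrier recomputes the signature; a spoofed or
    altered caller ID fails verification and the call can be flagged
    or blocked."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because a spoofed caller ID cannot produce a valid signature, the terminating carrier can flag or block the call before it ever rings. This is exactly the kind of tailored, technical remedy that blunt speech restrictions like the TCPA cannot provide.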

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture, so that we have less need for cudgel-like laws in the mold of the TCPA.