Archives For payments

Twitter has seen a lot of ups and downs since Elon Musk closed on his acquisition of the company in late October and almost immediately set about his initiatives to “reform” the platform’s operations.

One of the stories that has gotten somewhat lost in the ensuing chaos is that, in the short time under Musk, Twitter has made significant inroads—on at least some margins—against the visibility of child sexual abuse material (CSAM) by removing major hashtags that were used to share it, creating a direct reporting option, and removing major purveyors. On the other hand, due to the large reductions in Twitter’s workforce—both voluntary and involuntary—there are now very few human reviewers left to deal with the issue.

Section 230 immunity currently protects online intermediaries from most civil suits for CSAM (a narrow carveout is made under Section 1595 of the Trafficking Victims Protection Act). While the federal government could bring criminal charges if it believes online intermediaries are violating federal CSAM laws, and certain narrow state criminal claims could be brought consistent with federal law, private litigants are largely left without the ability to find redress on their own in the courts.

This, among other reasons, is why there has been a push to amend Section 230 immunity. The proposal we have advanced with co-author Geoffrey Manne suggests that online intermediaries should be held to a reasonable duty of care to remove illegal content. But this still requires thinking carefully about what a reasonable duty of care entails.

For instance, one of the big splash moves made by Twitter after Musk’s acquisition was to remove major CSAM distribution hashtags. While this did limit visibility of CSAM for a time, some experts say it doesn’t really solve the problem, as new hashtags will arise. So, would a reasonableness standard require the periodic removal of major hashtags? Perhaps it would. It appears to have been a relatively low-cost way to reduce access to such material, and could theoretically be incorporated into a larger program that uses automated discovery to find and remove future hashtags.
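To make this concrete, here is a minimal sketch of what such automated hashtag discovery might look like, assuming a hypothetical moderation pipeline; the tag names, the review threshold, and the helper functions are all illustrative and are not drawn from Twitter’s actual systems.

```python
from collections import Counter
from typing import Iterable

# Hypothetical seed list of known-bad hashtags (placeholders, not real tags).
BLOCKLIST = {"#badtag1", "#badtag2"}

# Co-occurrence ratio above which a new hashtag is escalated for human
# review; purely illustrative.
REVIEW_THRESHOLD = 0.5

def extract_hashtags(post: str) -> set[str]:
    """Pull hashtags out of a post's text."""
    return {word.lower() for word in post.split() if word.startswith("#")}

def discover_candidates(posts: Iterable[str]) -> list[str]:
    """Flag hashtags that frequently co-occur with blocklisted ones.

    This is the 'automated discovery' step: rather than only removing
    known hashtags, the system surfaces likely successors for review.
    """
    seen = Counter()      # how often each non-blocked tag appears overall
    co_occur = Counter()  # how often it appears alongside a blocked tag
    for post in posts:
        tags = extract_hashtags(post)
        flagged = bool(tags & BLOCKLIST)
        for tag in tags - BLOCKLIST:
            seen[tag] += 1
            if flagged:
                co_occur[tag] += 1
    return [tag for tag in seen
            if co_occur[tag] / seen[tag] >= REVIEW_THRESHOLD]

posts = [
    "example post #badtag1 #newtag",
    "another post #newtag #badtag2",
    "benign post #cats #newtag",
]
print(discover_candidates(posts))  # ['#newtag']: co-occurs in 2 of 3 uses
```

A production system would, of course, need review queues, defenses against evasion (misspellings and lookalike characters), and human sign-off before removal; the point is only that the marginal cost of this kind of monitoring appears low relative to its benefit.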

Of course, it won’t be perfect and will be subject to something of a Whac-A-Mole dynamic. But the relevant question isn’t whether it’s a perfect solution, but whether it yields significant benefits relative to its costs, such that it should be regarded as a legally reasonable measure that platforms should broadly implement.

On the flip side, Twitter has lost such a large amount of its workforce that it potentially no longer has enough staff to do the important review of CSAM. As long as Twitter allows adult nudity, and algorithms are unable to effectively distinguish between different types of nudity, human reviewers remain essential. A reasonableness standard might also require sufficient staff and funding dedicated to reviewing posts for CSAM. 

But what does it mean for a platform to behave “reasonably”?

Platforms Should Behave ‘Reasonably’

Rethinking platforms’ safe harbor from liability as governed by a “reasonableness” standard offers a way to more effectively navigate the complexities of these tradeoffs without resorting to the binary of immunity or total liability that typically characterizes discussions of Section 230 reform.

It could be the case that, given the reality that machines can’t distinguish between “good” and “bad” nudity, it is patently unreasonable for an open platform to allow any nudity at all if it is run with the level of staffing that Musk seems to prefer for Twitter.

Consider the situation MindGeek faced a couple of years ago. It was pressured by financial providers, including PayPal and Visa, to clean up the CSAM and nonconsensual pornography that appeared on its websites. In response, MindGeek removed more than 80% of suspected illicit content and required greater authentication for posting.

Notwithstanding efforts to clean up the service, a lawsuit was filed against MindGeek and Visa by victims who asserted that the credit-card company was a knowing conspirator for processing payments to MindGeek’s sites when they were purveying child pornography. Notably, Section 230 issues were dismissed early on in the case, but the remaining claims—rooted in the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Act (TVPA)—contained elements that support evaluating the conduct of online intermediaries, including payment providers who support online services, through a reasonableness lens.

In our amicus brief, we stressed the broader policy implications of failing to appropriately demarcate the bounds of liability. In short, deterrence is best encouraged by placing responsibility for control on the party best positioned to monitor the situation—i.e., MindGeek, not Visa. Underlying this, we believe that an appropriately tuned reasonableness standard should be able to foreclose these sorts of inquiries at early stages of litigation when there is good evidence that an intermediary behaved reasonably under the circumstances.

In this case, we believed the court should have taken seriously the fact that a payment processor needs to balance a number of competing demands—legal, economic, and moral—in a way that enables it to serve its necessary prosocial roles. Here, Visa had to weigh its role as a neutral intermediary responsible for handling millions of daily transactions against its interest in ensuring that it did not facilitate illegal behavior. But it was also operating, essentially, under a veil of ignorance: all of the information it had was derived from news reports, as it was not directly involved in, nor did it have special insight into, the operation of MindGeek’s businesses.

As we stressed in our intermediary-liability paper, there is indeed a valid concern that changes to intermediary-liability policy not invite a flood of ruinous litigation. Instead, there needs to be some ability to determine at the early stages of litigation whether a defendant behaved reasonably under the circumstances. In the MindGeek case, we believed that Visa did.

In essence, much of this approach to intermediary liability boils down to finding socially and economically efficient dividing lines that can broadly demarcate when liability should attach. For example, if Visa is liable as a co-conspirator in MindGeek’s allegedly illegal enterprise for providing a payment network that MindGeek uses by virtue of its relationship with yet other intermediaries (i.e., the banks that actually accept and process the credit-card payments), why isn’t the U.S. Post Office also liable for providing package-delivery services that allow MindGeek to operate? Or its maintenance contractor for cleaning and maintaining its offices?

Twitter implicitly engaged in this sort of analysis when it considered becoming an OnlyFans competitor. Despite having considerable resources—both algorithmic and human—Twitter’s internal team determined they could not “accurately detect child sexual exploitation and non-consensual nudity at scale.” As a result, they abandoned the project. Similarly, Tumblr tried to make many changes, including taking down CSAM hashtags, before finally giving up and removing all pornographic material in order to remain in the App Store for iOS. At root, these firms demonstrated the ability to weigh costs and benefits in ways entirely consistent with a reasonableness analysis. 

Thinking about the MindGeek situation again, it could also be the case that MindGeek did not behave reasonably. Some of MindGeek’s sites encouraged the upload of user-generated pornography. If MindGeek experienced the same limitations in detecting “good” and “bad” pornography (which is likely), it could be that the company behaved recklessly for many years, and only tightened its verification procedures once it was caught. If true, that is behavior that should not be protected by the law with a liability shield, as it is patently unreasonable.

Apple is sometimes derided as an unfair gatekeeper of speech through its App Store. But, ironically, Apple itself has made complex tradeoffs between data security and privacy—through the use of encryption, on the one hand, and scanning devices for CSAM, on the other. Prioritizing encryption over scanning devices (especially photos and messages) for CSAM is a choice that could allow more CSAM to proliferate. But the choice is, again, a difficult one: how much moderation is needed, and how do you balance such costs against other values important to users, such as privacy for the vast majority of nonoffending users?

As always, these issues are complex and involve tradeoffs. But it is obvious that more can and needs to be done by online intermediaries to remove CSAM.

But What Is ‘Reasonable’? And How Do We Get There?

The million-dollar legal question is what counts as “reasonable.” We are well aware that, particularly for online platforms that deal with millions of users a day, there is a great deal of surface area exposed to litigation over potentially illicit user-generated conduct. Thus, it is not the case, at least for the foreseeable future, that we need to throw open the gates to a full-blown common-law process for determining questions of intermediary liability. What is needed, instead, is a phased-in approach that gets courts in the business of parsing these hard questions and building up a body of principles that, on the one hand, encourage platforms to do more to control illicit content on their services and, on the other, discourage unmeritorious lawsuits by the plaintiffs’ bar.

One of our proposals for Section 230 reform is for a multistakeholder body, overseen by an expert agency like the Federal Trade Commission or National Institute of Standards and Technology, to create certified moderation policies. This would involve online intermediaries working together with a convening federal expert agency to develop a set of best practices for removing CSAM, including thinking through the cost-benefit analysis of more moderation—human or algorithmic—or even wholesale removal of nudity and pornographic content.

Compliance with these standards should, in most cases, operate to foreclose litigation against online service providers at an early stage. If such best practices are followed, a defendant could point to its moderation policies as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor similar to the one currently in place in Section 230.

In litigation, after a defendant answers a complaint with its certified moderation policies, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, akin to that for fraud, requiring particularity. And, periodically, the body overseeing this process would incorporate changes to the best-practices standards based on the cases brought before the courts.

Online service providers don’t need to be perfect in their content-moderation decisions, but they should behave reasonably. A properly designed duty-of-care standard should be flexible and account for a platform’s scale, the nature and size of its user base, and the costs of compliance, among other considerations. What is appropriate for YouTube, Facebook, or Twitter may not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform.

Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common-law negligence. Allowing courts to apply the flexible common-law duty of reasonable care would also enable jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Conclusion

Twitter and other online intermediaries continue to struggle with the best approach to removing CSAM, nonconsensual pornography, and a whole host of other illicit content. There are no easy answers, but there are strong ethical reasons, as well as legal and market pressures, to do more. Section 230 reform is just one part of a complete regulatory framework, but it is an important part of getting intermediary liability incentives right. A reasonableness approach that would hold online platforms accountable in a cost-beneficial way is likely to be a key part of a positive reform agenda for Section 230.

Banco Central do Brasil (BCB), Brazil’s central bank, launched a new real-time payment (RTP) system in November 2020 called Pix. Evangelists at the central bank hoped that Pix would offer a low-cost alternative to existing payments systems and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system.

A recent review of Pix, published by the Bank for International Settlements (BIS), claims that the payment system has achieved these goals and that it is a model for other jurisdictions. However, the BIS review seems to have been written through rose-tinted spectacles. This is perhaps not surprising, given that the lead author runs the division of the central bank that developed Pix. In a critique published this week, I suggest that, when seen in full color, Pix looks a lot less pretty.

Among other things, the BIS review misconstrues the economics of payment networks. By ignoring the two-sided nature of such networks, the authors erroneously claim that payment cards incur a net economic cost. In fact, evidence shows that payment cards generate net benefits; one study put their value-add to the Brazilian economy at 0.17% of GDP.

The report also obscures the costs of the Pix system and fails to explain that, whereas private payment systems must recover their full operational cost, Pix appears to benefit from both direct and indirect subsidies. The direct subsidies come from the BCB, which incurred substantial costs in developing and promoting Pix and, unlike other central banks such as the U.S. Federal Reserve, is not required to recover all operational costs. Indirect subsidies come from the banks and other payment-service providers (PSPs), many of which have been forced by the BCB to provide Pix to their clients, even though doing so cannibalizes their other payment systems, including interchange fees earned from payment cards. 

Moreover, the BIS review mischaracterizes the role of interchange fees, which are often used to encourage participation in the payment-card network. In the case of debit cards, this often includes covering some or all of the operational costs of bank accounts. The availability of “free” bank accounts with relatively low deposit requirements offers customers incentives to open and maintain accounts. 

While the report notes that Pix has “signed up” 67% of adult Brazilians, it fails to mention that most of these were automatically enrolled by their banks, the majority of which were required by the BCB to adopt Pix. It also fails to mention that 33% of adult Brazilians have not “signed up” to Pix, nor that a recent survey found that more than 20% of adult Brazilians remain unbanked or underbanked, nor that the main reason given for not having a bank account was the cost of such accounts. Moreover, by diverting payments away from debit cards, Pix has reduced interchange fees and thereby reduced the ability of banks and other PSPs to subsidize bank accounts, which might otherwise have increased financial inclusion.  

The BIS review falsely asserts that “Big Tech” payment networks are able to establish and maintain market power. In reality, tech firms operate in highly competitive markets and have little to no market power in payment networks. Nonetheless, the report uses this claim regarding Big Tech’s alleged market power to justify imposing restrictions on the WhatsApp payment system. The irony, of course, is that by moving to prohibit the WhatsApp payment service shortly before the rollout of Pix, the BCB unfairly inhibited competition, effectively giving Pix a monopoly on RTP with the full support of the government. 

In acting as both a supplier of a payment service and the regulator of payment-service providers, the BCB has a massive conflict of interest. Indeed, the BIS itself has recommended that, in cases where such conflicts might exist, it is good practice to ensure that the regulator is clearly separated from the supplier. Pix, by contrast, was developed and promoted by the same division of the central bank that regulates payments.

Finally, the BIS report also fails to address significant security issues associated with Pix, including a dramatic rise in the number of “lightning kidnappings” in which hostages were forced to send funds to Pix addresses. 

The U.S. economy survived the COVID-19 pandemic and associated government-imposed business shutdowns with a variety of innovations that facilitated online shopping, contactless payments, and reduced use and handling of cash, a known vector of disease transmission.

While many of these innovations were new, they would have been impossible but for their reliance on an established and ubiquitous technological infrastructure: the global credit- and debit-card payments system. Not only did consumers prefer to use plastic instead of cash, but the number of merchants going completely “cashless” also quadrupled in the first two months of the pandemic alone. From food delivery to online shopping, many small businesses were able to survive largely because of payment cards.

But there are costs to maintain the global payment-card network that processes billions of transactions daily, and those costs are higher for online payments, which present elevated fraud and security risks. As a result, while the boom in online shopping over this past year kept many retailers and service providers afloat, that hasn’t prevented them from grousing about their increased card-processing costs.

So it is that retailers are now lobbying Washington to impose new regulations on payment-card markets designed to force down the fees they pay for accepting debit and credit cards. These fees, called interchange fees, are charged on each transaction by the banks that issue debit cards, and they are part of a complex process that connects banks, card networks, merchants, and consumers.

Fig. 1: A basic illustration of the 3- and 4-party payment-processing networks that underlie the use of credit cards.

Regulation II—the Federal Reserve rule implementing a provision of 2010’s Dodd–Frank Wall Street Reform and Consumer Protection Act commonly known as the “Durbin amendment,” after its primary sponsor, Senate Majority Whip Richard Durbin (D-Ill.)—placed price controls on interchange fees for debit cards issued by larger banks and credit unions (those with more than $10 billion in assets). It also required all debit-card issuers to offer multiple networks for “routing” and processing card transactions. Merchants now want to expand these routing provisions to credit cards, as well. The consequences for consumers, especially low-income consumers, would be disastrous.

The price controls imposed by the Durbin amendment have led to a 52% decrease in the average per-transaction interchange fee, resulting in billions of dollars in revenue losses for covered depositories. But banks and credit unions have passed on these losses to consumers in the form of fewer free checking accounts, higher fees, and higher monthly minimums required to avoid those fees.

One empirical study found that the share of covered banks offering free checking accounts fell from 60% to 20%, that average monthly checking-account fees increased from $4.34 to $7.44, and that the minimum account balance required to avoid those fees increased by roughly 25%. Another study found that fees charged by covered institutions were 15% higher than they would have been absent the price regulation; those increases offset about 90% of the depositories’ lost revenue. Banks and credit unions also largely eliminated cash-back and other rewards on debit cards.
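To see the mechanism at work, consider some back-of-the-envelope arithmetic. In the sketch below, the pre-cap fee and the bank’s volumes are hypothetical stand-ins; only the 52% fee decrease and the $4.34-to-$7.44 monthly-fee change come from the studies cited above.

```python
# Back-of-the-envelope illustration of the Durbin-amendment mechanism.
# PRE_CAP_FEE, MONTHLY_TXNS, and ACCOUNTS are hypothetical; the 52% fee
# decrease and the $4.34 -> $7.44 monthly-fee change come from the
# studies cited in the text.

PRE_CAP_FEE = 0.44          # hypothetical average debit interchange fee ($)
FEE_DECREASE = 0.52         # 52% average per-transaction fee decrease
MONTHLY_TXNS = 20_000_000   # hypothetical covered bank's monthly debit volume
ACCOUNTS = 1_000_000        # hypothetical number of checking accounts

lost_interchange = PRE_CAP_FEE * FEE_DECREASE * MONTHLY_TXNS
fee_hike_revenue = (7.44 - 4.34) * ACCOUNTS  # higher monthly account fees

print(f"Lost interchange revenue:  ${lost_interchange:,.0f}/month")
print(f"Added account-fee revenue: ${fee_hike_revenue:,.0f}/month")
print(f"Share of loss recovered:   {fee_hike_revenue / lost_interchange:.0%}")
```

With these invented volumes, the higher account fees alone claw back roughly two-thirds of the lost interchange revenue; the roughly 90% offset found in the literature also reflects higher minimum balances and the elimination of debit rewards.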

In fact, those most harmed by the Durbin amendment have been low-income consumers. Middle-class families hardly noticed the higher minimum-balance requirements, or simply used their credit cards more often to offset the disappearance of debit-card rewards. Those with the smallest checking-account balances, however, suffered the most from the reduced availability of free banking and from higher monthly maintenance and other fees. Priced out of the banking system, as many as 1 million people might have lost bank accounts in the wake of the Durbin amendment, forcing them to turn to such alternatives as prepaid cards, payday lenders, and pawn shops to make ends meet. Lacking bank accounts, these needy families weren’t even able to easily access their much-needed government stimulus funds at the onset of the pandemic without paying fees to alternative financial-services providers.

In exchange for higher bank fees and reduced benefits, merchants promised lower prices at the pump and register. This has not been the case. Scholarship since implementation of the Federal Reserve’s rule shows that whatever benefits have been gained have gone to merchants, with little pass-through to consumers. For instance, one study found that covered banks’ interchange revenue dropped by 25%, but found little evidence of a corresponding drop in merchants’ prices.

Another study found that the benefits and costs to merchants have been unevenly distributed, with retailers who sell large-ticket items receiving a windfall, while those specializing in small-ticket items have often faced higher effective rates. Discounts previously offered to smaller merchants have been eliminated to offset reduced revenues from big-box stores. According to a 2014 Federal Reserve study, when acceptance fees increased, merchants hiked retail prices; but when fees were reduced, merchants pocketed the windfall.

Moreover, while the Durbin amendment’s proponents claimed it would only apply to big banks, the provisions that determine how transactions are routed on the payment networks apply to cards issued by credit unions and community banks, as well. As a result, smaller players have also seen average interchange fees beaten down, reducing this revenue stream even as they have been forced to cope with higher regulatory costs imposed by Dodd-Frank. Extending the Durbin amendment’s routing provisions to credit cards would further drive down interchange-fee revenue, creating the same negative spiral of higher consumer fees and reduced benefits that the original Durbin amendment spawned for debit cards.

More fundamentally, merchants believe it is their decision—not yours—as to which network will route your transaction. You may prefer Visa or Mastercard because of your confidence in their investments in security and anti-fraud detection, but later discover that the merchant has routed your transaction through a processor you’ve never heard of, simply because that network is cheaper for the merchant.

The resilience of the U.S. economy during this horrible viral contagion is due, in part, to American families’ ubiquitous access to credit and debit cards. That system proved its mettle this past year, seamlessly adapting to the sudden shift to electronic payments. Yet, in the wake of this American success story, politicians and regulators, egged on by powerful special interests, want to meddle with the system just so big-box retailers can transfer their costs onto American families and small banks. As the economy and public health recover, Congress and regulators should resist the impulse to impose new financial harm on working-class families.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”), which sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations on their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court’s dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence that the antitrust pendulum needed to swing. They are also instrumental because, conversely, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case through the adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding is cause for concern. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of haste. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more so when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe show unprecedented vitality in the digital sector. Venture-capital funding cruises at historical heights, fueling new-firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing, at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones under discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral-conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects the tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertising, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.
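A toy numerical model can make that visibility concrete. Everything in the sketch below is invented for illustration (the demand functions and parameters come from no study), but it shows the core multi-sided point: holding the total price fixed, shifting the price structure between cardholders and merchants changes participation on both sides.

```python
# Toy two-sided platform model. All functional forms and parameters are
# invented for illustration. Each side's participation depends on its own
# price and on the other side's participation; merchants here value
# cardholders more than cardholders value merchants (0.8 vs. 0.4),
# loosely mirroring why card networks load fees onto the merchant side.

def participation(p_c: float, p_m: float, iters: int = 200):
    """Iterate to fixed-point participation shares for cardholders (n_c)
    and merchants (n_m), each clamped to [0, 1]."""
    n_c = n_m = 0.5
    for _ in range(iters):
        n_c = min(max(0.3 + 0.4 * n_m - p_c, 0.0), 1.0)
        n_m = min(max(0.3 + 0.8 * n_c - p_m, 0.0), 1.0)
    return n_c, n_m

TOTAL_PRICE = 0.4
for p_c in (0.0, 0.2, 0.4):      # cardholders' share of the total price
    p_m = TOTAL_PRICE - p_c      # merchants pay the remainder
    n_c, n_m = participation(p_c, p_m)
    # Transactions require both a cardholder and an accepting merchant.
    print(f"p_c={p_c:.1f} p_m={p_m:.1f} -> "
          f"n_c={n_c:.2f} n_m={n_m:.2f} volume={n_c * n_m:.3f}")
```

With these parameters, transaction volume is highest when merchants bear the entire price: the structure of prices, not just their level, drives the outcome, which is why a regulator cannot cap one side’s fee without affecting every other group the platform connects.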

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have remained silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports never really consider the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though this issue is hard to evaluate, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

Now, as is clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hastening to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

James Van Dyke is President and Founder of Javelin Strategy and Research.

I feel that at least two important issues are being left out of the raging controversy over the cost of interchange. (At this point, my readers are probably wondering whether I’ll follow with a pro-merchant or pro-bank POV… but guess what: here comes one of each, to make my point that we’re being a bit simplistic in this debate!)

Point one: card fees replace other costly forms of payment, and this must be included in any policy discussion. Previous Javelin research covering several hundred merchants found that the total costs of handling checks and cash are often considered to be as high as those for cards. Cash comes with the risk of employee fraud, and it may cost merchants sales from consumers without ready access to cash. Every check is an incident of fraud waiting to happen; these antiquated IOU instruments are essentially dead or dying in every country other than the U.S., and they invite criminals to follow the simple methods documented by Frank Abagnale of “Catch Me If You Can” fame.

Point two: any discussion of the cost of interchange should include a review of who pays for fraud. If there is a disparity between merchants and banks in who bears the brunt of fraud (the recent study we did for LexisNexis found that merchants incur 90% of direct fraud costs), that disparity raises issues of incentives. Simply put, if you can pass a cost on to someone else, your incentive to minimize that cost may be reduced.

And now that I’ve said something to anger both merchants and bankers, I’ll eagerly await the responses…