Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry. 

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and are explicitly excluded from the paper’s definition of “killers,” for example: overall innovation and output do not suffer if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the combined platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too many. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and do not apply to tech, it is dubious whether the paper’s findings are relevant to the Furman Report’s subject at all. And given how few acquisitions are found to be “killers” in pharma even with all of these conditions present, it seems reasonable to assume that, even if the phenomenon does occur in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. If so, stricter enforcement against tech mergers would be correspondingly more likely to condemn procompetitive mergers in error.

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.

In all these high-profile cases the acquiring companies expanded the acquired services and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but that is a totally different argument from the scenarios described in the Cunningham, et al. paper, where development of a new drug is shut down by the acquirer, ostensibly to protect its existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow from the (presumed) existence of false negatives that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. By analogy, a well-run court system will sometimes fail to convict guilty defendants precisely because the cost of convicting an innocent person is so high; those false negatives are not evidence that the courts are underenforcing the law.

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it suggested that the review process could have been conducted differently, it also highlighted efficiencies that arose from each merger, and it did not conclude that any of them had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been anticompetitive, and apart from the passage quoted above it does not really attempt to justify its recommendations on merger control.

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would also require the potential scale (size) of the merged company to be considered.

This could provide a basis for blocking almost any acquisition by an incumbent firm on ‘scale’ grounds. After all, if a photo editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition, however unlikely, might not prevent the emergence of a new platform on a similar scale? It would also make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started at all).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or fewer. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm — and in many cases assumptions are the most one can call them — and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. We recently published a letter, co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its hearings on the state of antitrust law, to reject calls for radical upheaval that would, among other things, undermine the independence and neutrality of US antitrust law.

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolisation of the market, and not, for example, related to the huge changes that businesses like this are facing because of the Covid shutdown.

Both of these statements challenge the principle that the rule of law, including in antitrust, depends on political neutrality.

Contrary to the claims made by President Trump, Sen. Klobuchar, and some of those testifying before the Committee, our letter asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law is adequate for protecting competition in the modern economy, and has been built up through years of careful case-by-case scrutiny. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would throw away a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers and return us to a regime in which per se rules prohibited the use of economic analysis and fact-based defences of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase the transparency of the DoJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources for protecting workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider and are not necessarily endorsed by every signatory to the letter.

Some of the arguments in the letter are set out in greater detail in ICLE’s own submission to the Committee, which examines the nature of competition in modern digital markets and in traditional markets that have been transformed by the adoption of digital technologies.

The full letter is here.

The Wall Street Journal reports that Amazon employees have been using data from individual sellers to identify products to compete with via its own ‘private label’ (or own-brand) products, such as AmazonBasics, Presto!, and Pinzon.

It’s implausible that this is an antitrust problem, as some have suggested. It’s extremely common for retailers to sell their own private label products and to use data on how other products in their stores have sold to inform their development and marketing. Private label products account for about 14–17% of overall US retail sales, and for an estimated 19% of Walmart’s and Kroger’s sales and 29% of Costco’s sales of consumer packaged goods.

Amazon, meanwhile, accounts for 39% of US e-commerce spending, and about 6% of all US retail spending. Any antitrust-based argument against Amazon doing this should apply equally to Walmart, Kroger, and Costco. In other words, the case against Amazon proves too much. Alec Stapp has a good discussion of these and related facts here.

However, it is interesting to think about the underlying incentives facing Amazon here, and in particular why Amazon’s company policy is not to use individual seller data to develop products (rogue employees violating this policy notwithstanding). One possibility is that the rule is a way for Amazon to balance competing with some third parties against protecting others that it sees as valuable to its platform overall.

Amazon does use aggregated seller data to develop and market its products: if two or more merchants are selling a product, Amazon’s employees can see how popular it is. This might seem like a trivial distinction, but it may exist for good reason. It could be that sellers of unique products actually do have the bargaining power to demand that Amazon not use their data to compete with them, or the rule could exist for public relations reasons, although it’s not clear how successful that has been.

But another possibility is that it is a self-imposed restraint. Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because consumers are more likely to use Amazon if they expect to find a reliable product from a brand they trust. According to the Journal, private label products account for less than 1% of Amazon’s product sales, in contrast to the 19% of revenues ($54 billion) Amazon makes from third-party seller services, which include Marketplace commissions. Any analysis that ignores the fact that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient.

With “commodity” products (like, say, batteries and USB cables), where multiple sellers are offering very similar or identical versions of the same thing, private label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this doesn’t give it a significant advantage, since that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.

But to the extent that Amazon competes against innovative third-party sellers (typically manufacturers doing direct sales, as opposed to pure retailers simply re-selling others’ products), the prospect of having to compete with Amazon may diminish their incentive to develop new products and sell them on Amazon’s platform.

This is the strongest argument made against private label offerings in general. Where an innovator has been collecting above-normal profits, and those profits are what spurred the innovation in the first place, a private label product that comes along and copies the innovation effectively free rides on it and captures some of its return. That may get us less innovation than society—or a platform trying to host as many innovative products as possible—would like.

While the Journal conflates these two kinds of products, Amazon’s own policies may be tailored specifically to take account of the distinction, and maximise the total value of its marketplace to consumers.

This is nominally the focus of the Journal story: a car trunk organiser company with an (apparently) innovative product says that Amazon, by moving in with its own AmazonBasics version, competed away many of its sales. In this sort of situation, the free-rider problem described above might apply and future innovation could be discouraged. Why bother to invent things like this if you’re just going to have your invention ripped off?

Of course, many such innovations are protected by patents. But there may be valuable innovations that are not, and even patented innovations are not perfectly protected, given the costs of enforcement. A platform like Amazon, however, can adopt rules that fine-tune the protections offered by the legal system in an effort to increase the value of the platform for innovators and consumers alike.

And that may be why Amazon has its rule against using individual seller data to compete: to allow creators of new products to collect more rents from their inventions, with a promise that, unless and until their product is commodified by other means (as indicated by the product being available from multiple other sellers), Amazon won’t compete against such sellers using any special insights it might gain from those sellers’ use of Amazon’s Marketplace.

This doesn’t mean Amazon refuses to compete (or refuses to allow others to compete); it has other rules that sometimes determine that boundary, as when it enters into agreements with certain brands to permit sales of the brand on the platform only by sellers authorized by the brand owner. Rather, this rule is a more limited—but perhaps no less important—one that should entice innovators to use Amazon’s platform to sell their products without concern that doing so will create a special risk that Amazon can compete away their returns using information uniquely available to it. In effect, it’s a promise that innovators won’t lose more by choosing to sell on Amazon rather than through other retail channels.

Like other platforms, to maximise its profits Amazon needs to strike a balance between being an attractive place for third party merchants to sell their goods, and being attractive to consumers by offering as many inexpensive, innovative, and reliable products as possible. Striking that balance is challenging, but a rule that restrains the platform from using its unique position to expropriate value from innovative sellers helps to protect the income of genuinely innovative third parties, and induces them to sell products consumers want on Amazon, while still allowing Amazon (and third-party sellers) to compete with commodity products. 

The fact that Amazon has strong competition online and offline certainly acts as an important constraint here, too: if Amazon behaved too badly, third parties might not sell on it at all, and Amazon would have none of the seller data that is allegedly so valuable to it.

But even in a world where Amazon had a huge, sticky customer base that meant it was not an option to sell elsewhere—which the Journal article somewhat improbably implies—Amazon would still need third parties to innovate and sell things on its platform. 

What the Journal story really seems to demonstrate is the sort of genuine principal-agent problem that all large businesses face: the company as a whole needs to restrain its private label section in various respects, but its agents in the private label section want to break those rules to maximise their personal performance (in this case, by launching a successful new AmazonBasics product). It’s like a rogue trader at a bank who breaks the rules to make herself look good by, she hopes, getting good results.

This is just one of many rules that a platform like Amazon has in place to preserve the value of its platform. It’s probably not the most important one. But understanding why it exists may help us to understand why simple stories of platform predation don’t add up, and it helps to demonstrate the mechanisms that companies like Amazon use to maximise the total value of their platform, not just one part of it.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Sam Bowman (Director of Competition Policy, ICLE).]

No support package for workers and businesses during the coronavirus shutdown can be comprehensive. In the UK, for example, the government is offering to pay 80% of the wages of furloughed workers, but this will not apply to self-employed people or many gig economy workers, and so far it’s been hard to think of a way of giving them equivalent support. It’s likely that the bill going through Congress will have similar issues.

Whether or not solutions are found for these problems, it may be worth putting in place what you might call a ‘backstop’ policy that allows people to access money in case they cannot access it through the other policies that are being put into place. This doesn’t need to provide equivalent support to other packages, just to ensure that everyone has access to the money they need during the shutdown to pay their bills and rent, and cover other essential costs. The aim here is just to keep everyone afloat.

One mechanism for doing this might be to offer income-contingent loans to anyone currently resident in the country during the shutdown period. These are loans whose repayment is determined by the borrower’s later income; they are how students in the UK and Australia pay for university.

In the UK, for example, under the current student loan repayment terms, once a student has graduated, their earnings above a certain income threshold (currently £25,716/year) are taxed at 9% to repay the loan. So, if I earn £30,000/year and have a loan to repay, I pay an additional £385.56/year to repay the loan (9% of the £4,284 I’m earning above the income threshold); if I earn £40,000/year, I pay an additional £1,285.56/year. The loan incurs an annual interest rate equal to an annual measure of inflation plus 3%. Once you have paid off the loan, no more repayments are taken, and any amount still unpaid thirty years after the loan was first taken out is written off.
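
For readers who want the rule written out, here is a minimal sketch of that repayment calculation in Python. It uses only the threshold and rate quoted above; the function name and the example incomes are mine, purely for illustration.

```python
# Sketch of the repayment rule described above: 9% of gross income above the
# threshold, using the figures quoted in the text (not an official calculator).

REPAYMENT_THRESHOLD = 25_716  # £/year, the threshold quoted above
REPAYMENT_RATE = 0.09         # 9% of income above the threshold

def annual_repayment(income: float) -> float:
    """Annual student loan repayment (£) for a given gross annual income (£)."""
    return max(0.0, income - REPAYMENT_THRESHOLD) * REPAYMENT_RATE

for income in (20_000, 30_000, 40_000):
    print(f"£{income:,}/year -> £{annual_repayment(income):,.2f}/year repaid")

# £20,000/year -> £0.00/year repaid
# £30,000/year -> £385.56/year repaid
# £40,000/year -> £1,285.56/year repaid
```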

In practice, these terms mean that there is a significant subsidy to university students, most of whom never pay off the full amount. Under a less generous repayment scheme that was in place until recently, with a lower income threshold for repayment, out of every £1 borrowed by students the long-run cost to the government was 43.3p. This is regarded by many as a feature of the system rather than a bug, because of the belief that university education has positive externalities, and because this approach pools some of the risk associated with pursuing a graduate-level career (the risk of ending up with a low-paid job despite having spent a lot on your education, for example).

For loans available to the wider public, a different set of repayment criteria could apply. We could allow anyone who has filed a W-2 or 1099 tax statement in the past eighteen months (or filed a self-assessment tax return in the UK) to borrow up to something around 20% of median national annual income, to be paid back via an extra few percentage points on their federal income tax or, in the UK, National Insurance contributions over the following ten years, with the rate returning to normal after they have paid off the loan. Some other provision may have to be made for people approaching retirement.

With a low, inflation-indexed interest rate, this would allow people who need funds to access them, but make it mostly pointless for anyone who did not need to borrow. 

If, like student tuition fees, the loans were written off after a certain period, low earners would probably never pay back the entirety of the ‘loan’ – as a one-off transfer (i.e., one that does not distort work or savings incentives for recipients) to low-paid people, this is probably not a bad thing. Most people, though, would pay the loan back as and when they were able to. For self-employed people in particular, it could be a valuable source of liquidity during an unexpected period in which they cannot work. Overall, it would function as a cash transfer to lower earners, and a liquidity injection for everyone else who takes advantage of the scheme.
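
To make the mechanics concrete, below is a rough sketch of how such a loan might amortise for earners at different income levels. Every parameter in it (the £6,000 loan, the 3-percentage-point surcharge, the zero real interest rate, and the ten-year write-off) is a hypothetical choice made for this illustration, not a figure from any actual proposal.

```python
# Illustrative amortisation of a hypothetical income-contingent 'backstop' loan.
# All parameters here are assumptions for the sake of the example.

def repay(loan: float, annual_income: float,
          surcharge: float = 0.03, write_off_years: int = 10) -> tuple[int, float]:
    """Return (years of repayments made, balance written off) for a constant income."""
    balance = loan
    for year in range(1, write_off_years + 1):
        payment = min(balance, annual_income * surcharge)  # extra tax until repaid
        balance -= payment
        if balance <= 0:
            return year, 0.0
    return write_off_years, balance  # whatever remains is written off

for income in (15_000, 30_000, 60_000):
    years, written_off = repay(loan=6_000, annual_income=income)
    print(f"Income £{income:,}: repayments for {years} years, £{written_off:,.2f} written off")

# Income £15,000: repayments for 10 years, £1,500.00 written off
# Income £30,000: repayments for 7 years, £0.00 written off
# Income £60,000: repayments for 4 years, £0.00 written off
```

As those numbers suggest, higher earners clear the loan within a few years, while the lowest earners have part of the balance written off, which is the distributional effect described above.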

This would have advantages over giving money to every US or UK citizen, as some have proposed, because most of the money given out would be repaid, so the net burden on taxpayers, and the deadweight losses created by the additional tax needed to pay for it, would be smaller. It would also eliminate the need for means-testing, relying on self-selection instead.

The biggest obstacle to rolling something like this out may be administrative. However, if the government committed to setting up such a scheme, banks and credit card companies might be willing to step in in the short run to issue short-term loans, in the knowledge that people would be able to repay them once the government scheme was set up. To facilitate this, the government could guarantee the loans made by banks and credit card companies now, then allow people to opt into the income-contingent loans later, so there would be no need for immediate legislation.

Speed is extremely important in helping people plug the gaps in their finances. As a complement to the government’s other plans, income-contingent loans to groups like self-employed people may be a useful way of catching people who would otherwise fall through the cracks.