
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Fellow of Law & Economics, International Center for Law & Economics).]

Republican Senator Josh Hawley infamously argued that Big Tech is overrated. In his words:

My biggest critique of big tech is: what big innovation have they really given us? What is it now that in the last 15, 20 years that people who say they are the brightest minds in the country have given this country? What are their great innovations?

To Senator Hawley these questions seemed rhetorical. Big Tech’s innovations were trivial gadgets: “autoplay” and “snap streaks”, to quote him once more.

But, as any Monty Python connoisseur will tell you, rhetorical questions have a way of being … not so rhetorical. In one of Python’s most famous jokes, members of the “People’s Front of Judea” ask “what have the Romans ever done for us”? To their own surprise, the answer turns out to be a great deal.

This post is the first in a series examining some of the many ways in which Big Tech is making Coronavirus-related lockdowns and social distancing more bearable, and how Big Tech is enabling our economies to continue functioning (albeit at a severely reduced pace) throughout the outbreak. 

Although Big Tech’s contributions are just a small part of a much wider battle, they suggest that the world is drastically better situated to deal with COVID-19 than it would have been twenty years ago – and this is in no small part thanks to Big Tech’s numerous innovations.

Of course, some will say that the world would be even better equipped to handle COVID-19, if Big Tech had only been subject to more (or less) regulation. Whether these critiques are correct, or not, they are not the point of this post. For many, like Senator Hawley, it is apparently undeniable that tech does more harm than good. But, as this post suggests, that is surely not the case. And before we do decide whether and how we want to regulate it in the future, we should be particularly mindful of what aspects of “Big Tech” seem particularly suited to dealing with the current crisis, and ensure that we don’t adopt regulations that thoughtlessly undermine these.

1. Priceless information 

One of the most important ways in which Big Tech firms have supported international attempts to fight COVID-19 has been their role as information intermediaries.

As the title of a New York Times article put it:

When Facebook Is More Trustworthy Than the President: Social media companies are delivering reliable information in the coronavirus crisis. Why can’t they do that all the time?

The author is at least correct on the first part. Big Tech has become a cornucopia of reliable information about the virus:

  • Big Tech firms are partnering with the White House and other agencies to analyze massive COVID-19 datasets in order to help discover novel answers to questions about transmission, medical care, and other interventions. This partnership is possible thanks to the massive investments in AI infrastructure that the leading tech firms have made. 
  • Google Scholar has partnered with renowned medical journals (as well as public authorities) to guide citizens towards cutting edge scholarship relating to COVID-19. This is a transformative resource in a world of lockdowns and overburdened healthcare providers.
  • Google has added a number of features to its main search engine – such as a “Coronavirus Knowledge Panel” and SOS alerts – in order to help users deal with the spread of the virus.
  • On Twitter, information and insights about COVID-19 compete in the market for ideas. Numerous news outlets have published lists of recommended people to follow (Fortune, Forbes). 

    Furthermore – to curb some of the unwanted effects of an unrestrained market for ideas – Twitter (and most other digital platforms) links to the websites of public authorities when users search for COVID-related hashtags.
  • This flow of information is a two-way street: Twitter, Facebook and Reddit, among others, enable citizens and experts to weigh in on the right policy approach to COVID-19. 

    Though the results are sometimes far from perfect, these exchanges may prove invaluable in critical times where usual methods of policy-making (such as hearings and conferences) are mostly off the table.
  • Perhaps most importantly, the Internet is a precious source of knowledge about how to deal with an emerging virus, as well as life under lockdown. We often take for granted how much of our lives benefit from extreme specialization. These exchanges are severely restricted under lockdown conditions. Luckily, with the internet and modern search engines (pioneered by Google), most of the world’s information is but a click away.

    For example, Facebook Groups have been employed by users of the social media platform in order to better coordinate necessary activity among community members — like giving blood — while still engaging in social distancing.

In short, search engines and social networks have been beacons of information regarding COVID-19. Their mostly bottom-up approach to knowledge generation (i.e. popular topics emerge organically) is essential in a world of extreme uncertainty. This has ultimately enabled these players to stay ahead of the curve in bringing valuable information to citizens around the world.

2. Social interactions

This is probably the most obvious way in which Big Tech is making life under lockdown more bearable for everyone. 

  • In Italy, WhatsApp messages and calls jumped by 20% following the outbreak of COVID-19. And Microsoft claims that the use of Skype jumped by 100%.
  • Younger users are turning to social networks, like TikTok, to deal with the harsh realities of the pandemic.
  • Strangers are using Facebook groups to support each other through difficult times.
  • And institutions, like the WHO, are piggybacking on this popularity to further raise awareness about COVID-19 via social media. 
  • In South Africa, health authorities even created a WhatsApp contact to answer users’ questions about the virus.
  • Most importantly, social media is a godsend for senior citizens and anyone else who may have to live in almost total isolation for the foreseeable future. For instance, nursing homes are putting communications apps, like Skype and WhatsApp, in the hands of their patients, to keep up their morale (here and here).

And with the economic effects of COVID-19 starting to gather speed, users will more than ever be grateful to receive these services free of charge. Sharing data – often very limited amounts – with a platform is an insignificant price to pay in times of economic hardship. 

3. Working & Learning

It will also be impossible to effectively fight COVID-19 if we cannot keep the economy afloat. Stock markets have already plunged by record amounts. Surely, these losses would be unfathomably worse if many of us were not lucky enough to be able to work from the safety of our own homes. And for those individuals who are unable to work from home, their own exposure is dramatically reduced thanks to the significant proportion of the population that can stay out of public spaces.

Once again, we largely have Big Tech to thank for this. 

  • Downloads of Microsoft Teams and Zoom are surging on both Google and Apple’s app stores. This is hardly surprising. With much of the workforce staying at home, these video-conference applications have become essential. The increased load generated by people working online might even have caused Microsoft Teams to crash in Europe.
  • According to Microsoft, the number of Microsoft Teams meetings increased by 500 percent in China.
  • Sensing that the current crisis may last for a while, some firms have also started to conduct job interviews online; popular apps for doing so include Skype, Zoom and WhatsApp.
  • Slack has also seen a surge in usage, as firms set themselves up to work remotely. It has started offering free training, to help firms move online.
  • Along similar lines, Google recently announced that its G Suite of office applications – which enables users to share and work on documents online – had passed 2 billion users.
  • Some tech firms (including Google, Microsoft and Zoom) have gone a step further and started giving away some of their enterprise productivity software, in order to help businesses move their workflows online.

And Big Tech is also helping universities, schools and parents to continue providing coursework and lectures to their students/children.

  • Zoom and Microsoft Teams have been popular choices for online learning. To facilitate the transition to online learning, Zoom has notably lifted time limits relating to the free version of its app (for schools in the most affected areas).
  • Even in the US, where the virus outbreak is currently smaller than in Europe, thousands of students are already being taught online.
  • Much of the online learning being conducted for primary school children is being done with affordable Chromebooks. And some of these Chromebooks are distributed to underserved schools through grant programs administered by Google.
  • Moreover, at the time of writing, most of the best selling books on Amazon.com are pre-school learning books.

Finally, the advent of online storage services, such as Dropbox and Google Drive, has largely alleviated the need for physical copies of files. In turn, this enables employees to remotely access all the files they need to stay productive. While this may be convenient under normal circumstances, it becomes critical when retrieving a binder in the office is no longer an option.

4. So what has Big Tech ever done for us?

With millions of families around the world currently under forced lockdown, it is becoming increasingly evident that Big Tech’s innovations are anything but trivial. Innovations that seemed like convenient tools only a couple of days ago are now becoming essential parts of our daily lives (or, at least, we are finally realizing how powerful they truly are).

The fight against COVID-19 will be hard. We can at least be thankful that we have Big Tech by our side. Paraphrasing the Monty Python crew: 

Q: What has Big Tech ever done for us? 

A: Abundant, free, and easily accessible information. Precious social interactions. Online working and learning.

Q: But apart from information, social interactions, and online working (and learning); what has Big Tech ever done for us?

For the answer to this question, I invite you to stay tuned for the next post in this series.

Big Ink vs. Bigger Tech

Ramsi Woodcock —  30 December 2019

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Ramsi Woodcock, Assistant Professor, College of Law, and Assistant Professor, Department of Management at Gatton College of Business & Economics, University of Kentucky.

When in 2011 Paul Krugman attacked the press for bending over backwards to give equal billing to conservative experts on social security, even though the conservatives were plainly wrong, I celebrated. Social security isn’t the biggest part of the government’s budget, and calls to privatize it in order to save the country from bankruptcy were blatant fear mongering. Why should the press report those calls with a neutrality that could mislead readers into thinking the position reasonable?

Journalists’ ethic of balanced reporting looked, at the time, like gross negligence at best, and deceit at worst. But lost in the pathos of the moment was the rationale behind that ethic, which is not so much to ensure that the truth gets into print as to prevent the press from making policy. For if journalists do not practice balance, then they ultimately decide the angle to take.

And journalists, like the rest of us, will choose their own.

The dark underbelly of the engaged journalism unleashed by progressives like Krugman has nowhere been more starkly exposed than in the unfolding assault of journalists, operating as a special interest, on Google, Facebook, and Amazon, three companies that writers believe have decimated their earnings over the past decade.

In story after story, journalists have manufactured an antitrust movement aimed at breaking up these companies, even though virtually no expert in antitrust law or economics, on either the right or the left, can find an antitrust case against them, and virtually no expert would place any of these three companies at the top of the genuinely long list of monopolies in America that are due for an antitrust reckoning.

Bitter ledes

Headlines alone tell the story. We have: “What Happens After Amazon’s Domination Is Complete? Its Bookstore Offers Clues”; “Be Afraid, Jeff Bezos, Be Very Afraid”; “How Should Big Tech Be Reined In? Here Are 4 Prominent Ideas”;  “The Case Against Google”; and “Powerful Coalition Pushes Back on Anti-Tech Fervor.”

My favorite is: “It’s Time to Break Up Facebook.” Unlike the others, it belongs to an Op-Ed, so a bias is appropriate. Not appropriate, however, is the howler, contained in the article’s body, that “a host of legal scholars like Lina Khan, Barry Lynn and Ganesh Sitaraman are plotting a way forward” toward breakup. Lina Khan has never held an academic appointment. Barry Lynn does not even have a law degree. And Ganesh Sitaraman’s academic specialty is constitutional law, not antitrust. But editors let it through anyway.

As this unguarded moment shows, the press has treated these and other members of a small network of activists and legal scholars who operate on antitrust’s fringes as representative of scholarly sentiment regarding antitrust action. The only real antitrust scholar among them is Tim Wu, who, when you look closely at his public statements, has actually gone no further than to call for Facebook to unwind its acquisitions of Instagram and WhatsApp.

In more sober moments, the press has acknowledged that the law does not support antitrust attacks on the tech giants. But instead of helping readers to understand why, the press presents this as a failure of the law. “To Take Down Big Tech,” read one headline in The New York Times, “They First Need to Reinvent the Law.” I have documented further instances of unbalanced reporting here.

This is not to say that we don’t need more antitrust in America. Herbert Hovenkamp, whom the New York Times once recognized as “the dean of American antitrust law,” but has since downgraded to “an antitrust expert” after he came out against the breakup movement, has advocated stronger monopsony enforcement across labor markets. Einer Elhauge at Harvard is pushing to prevent index funds from inadvertently generating oligopolies in markets ranging from airlines to pharmacies. NYU economist Thomas Philippon has called for deconcentration of banking. Yale’s Fiona Morton has pointed to rising markups across the economy as a sign of lax antitrust enforcement. Jonathan Baker has argued with great sophistication for more antitrust enforcement in general.

But no serious antitrust scholar has traced America’s concentration problem to the tech giants.

Advertising monopolies old and new

So why does the press have an axe to grind with the tech giants? The answer lies in the creative destruction wrought by Amazon on the publishing industry, and Google and Facebook upon the newspaper industry.

Newspapers were probably the most durable monopolies of the 20th century, so lucrative that Warren Buffett famously picked them as his preferred example of businesses with “moats” around them. But that wasn’t because readers were willing to pay top dollar for newspapers’ reporting. Instead, that was because, incongruously for organizations dedicated to exposing propaganda of all forms on their front pages, newspapers have long striven to fill every other available inch of newsprint with that particular kind of corporate propaganda known as commercial advertising.

It was a lucrative arrangement. Newspapers exhibit powerful network effects, meaning that the more people read a paper the more advertisers want to advertise in it. As a result, many American cities came to have but one major newspaper monopolizing the local advertising market.

One such local paper, the Lorain Journal of Lorain, Ohio, sparked a case that has since become part of the standard antitrust curriculum in law schools. The paper tried to leverage its monopoly to destroy a local radio station that was competing for its advertising business. The Supreme Court affirmed liability for monopolization.

In the event, neither radio nor television ultimately undermined newspapers’ advertising monopolies. But the internet is different. Radio, television, and newspaper advertising can coexist, because they can target only groups, and often not the same ones, minimizing competition between them. The internet, by contrast, reaches individuals, making it strictly superior to group-based advertising. The internet also lets at least some firms target virtually all individuals in the country, allowing those firms to compete with all comers.

You might think that newspapers, which quickly became an important web destination, were perfectly positioned to exploit the new functionality. But being a destination turned out to be a problem. Consumers reveal far more valuable information about themselves to web gateways, like search and social media, than to particular destinations, like newspaper websites. But consumer data is the key to targeted advertising.

That gave Google and Facebook a competitive advantage, and because these companies also enjoy network effects—search and social media get better the more people use them—they inherited the newspapers’ old advertising monopolies.

That was a catastrophe for journalists, whose earnings and employment prospects plummeted. It was also a catastrophe for the public, because newspapers have a tradition of plowing their monopoly profits into investigative journalism that protects democracy, whereas Google and Facebook have instead invested their profits in new technologies like self-driving cars and cryptocurrencies.

The catastrophe of countervailing power

Amazon has found itself in journalists’ crosshairs for disrupting another industry that feeds writers: publishing. Book distribution was Amazon’s first big market, and Amazon won it, driving most brick and mortar booksellers to bankruptcy. Publishing, long dominated by a few big houses that used their power to extract high wholesale prices from booksellers, passing some of the resulting profit on to authors as royalties, now faced a distribution industry even more concentrated and powerful than publishing itself. The Department of Justice stamped out a desperate attempt by publishers to cartelize in response, and profits, and author royalties, have continued to fall.

Journalists, of course, are writers, and the disruption of publishing, taken together with the disruption of news, has left journalists with the impression that they have nowhere to turn to escape the new economy.

The abuse of antitrust

Except antitrust.

Unschooled in the fine points of antitrust policy, journalists find it obvious that the Armageddon in newspapers and publishing is a problem of monopoly and that antitrust enforcers should do something about it.

Only it isn’t and they shouldn’t. The courts have gone to great lengths over the past 130 years to distinguish between doing harm to competition, which is prohibited by the antitrust laws, and doing harm to competitors, which is not.

Disrupting markets by introducing new technologies that make products better is no antitrust violation, even if doing so does drive legacy firms into bankruptcy, and throws their employees out of work and into the streets. Because disruption is really the only thing capitalism has going for it. Disruption is the mechanism by which market economies generate technological advances and improve living standards in the long run. The antitrust laws are not there to preserve old monopolies and oligopolies such as those long enjoyed by newspapers and publishers.

In fact, by tearing down barriers to market entry, the antitrust laws strive to do the opposite: to speed the destruction and replacement of legacy monopolies with new and more innovative ones.

That’s why the entire antitrust establishment has stayed on the sidelines regarding the tech fight. It’s hard to think of three companies that have more obviously risen to prominence over the past generation by disrupting markets using superior technologies than Amazon, Google, and Facebook. It may be possible to find an anticompetitive practice here or there—I certainly have—but no serious antitrust scholar thinks the heart of these firms’ continued dominance lies anywhere other than in their technical savvy. The nuclear option of breaking up these firms just makes no sense.

Indeed, the disruption inflicted by these firms on newspapers and publishing is a measure of the extent to which these firms have improved book distribution and advertising, just as the vast disruption created by the industrial revolution was a symptom of the extraordinary technological advances of that period. Few people, and not even Karl Marx, thought that the solution to those disruptions lay with Ned Ludd. The solution to the disruption wrought by Google, Amazon, and Facebook today similarly does not lie in using the antitrust laws to smash the machines.

Governments eventually learned to address the disruption created by the original industrial revolution not by breaking up the big firms that brought that revolution about, but by using tax and transfer, and rate regulation, to ensure that the winners share their gains with the losers. However the press’s campaign turns out, rate regulation, not antitrust, is ultimately the approach that government will take to Amazon, Google, and Facebook if these companies continue to grow in power. Because we don’t have to decide between social justice and technological advance. We can have both. And voters will demand it.

The anti-progress wing of the progressive movement

Alas, smashing the machines is precisely what journalists and their supporters are demanding in calling for the breakup of Amazon, Google, and Facebook. Zephyr Teachout, for example, recently told an audience at Columbia Law School that she would ban targeted advertising except for newspapers. That would restore newspapers’ old advertising monopolies, but also make targeted advertising less effective, for the same reason that Google and Facebook are the preferred choice of advertisers today. (Of course, making advertising more effective might not be a good thing. More on this below.)

This contempt for technological advance has been coupled with a broader anti-intellectualism, best captured by an extraordinary remark made by Barry Lynn, director of the pro-breakup Open Markets Institute, and sometime advocate for the Authors Guild. The Times quotes him saying that because the antitrust laws once contained a presumption against mergers to market shares in excess of 25%, all policymakers have to do to get antitrust right is “be able to count to four. We don’t need economists to help us count to four.”

But size really is not a good measure of monopoly power. Ask Nokia, which controlled more than half the market for cell phones in 2007, on the eve of Apple’s introduction of the iPhone, but saw its share fall almost to zero by 2012. Or Walmart, the nation’s largest retailer and a monopolist in many smaller retail markets, which nevertheless saw its stock fall after Amazon announced one-day shipping.

Journalists themselves acknowledge that size does not always translate into power when they wring their hands about the Amazon-driven financial troubles of large retailers like Macy’s. Determining whether a market lacks competition really does require more than counting the number of big firms in the market.

I keep waiting for a devastating critique of arguments that Amazon operates in highly competitive markets to emerge from the big tech breakup movement. But that’s impossible for a movement that rejects economics as a corporate plot. Indeed, even an economist as pro-antitrust as Thomas Philippon, who advocates a return to antitrust’s mid-20th century golden age of massive breakups of firms like Alcoa and AT&T, affirms in a new book that American retail is actually a bright spot in an otherwise concentrated economy.

But you won’t find journalists highlighting that. The headline of a Times column promoting Philippon’s book? “Big Business Is Overcharging You $5,000 a Year.” I tend to agree. But given all the anti-tech fervor in the press, Philippon’s chapter on why the tech giants are probably not an antitrust problem ought to get a mention somewhere in the column. It doesn’t.

John Maynard Keynes famously observed that “though no one will believe it—economics is a technical and difficult subject.” So too antitrust. A failure to appreciate the field’s technical difficulty is manifest also in Democratic presidential candidate Elizabeth Warren’s antitrust proposals, which were heavily influenced by breakup advocates.

Warren has argued that no large firm should be able to compete on its own platforms, not seeming to realize that doing business means competing on your own platforms. To show up to work in the morning in your own office space is to compete on a platform, your office, from which you exclude competitors. The rule that large firms (defined by Warren as those with more than $25 billion in revenues) cannot compete on their own platforms would just make doing large amounts of business illegal, a result that Warren no doubt does not desire.

The power of the press

The press’s campaign against Amazon, Google, and Facebook is working. Because while they may not be as well financed as Amazon, Google, or Facebook, writers can offer their friends something more valuable than money: publicity.

That appears to have induced a slew of politicians, including both Senator Warren on the left and Senator Josh Hawley on the right, to pander to breakup advocates. The House antitrust investigation into the tech giants, led by a congressman who is simultaneously championing legislation advocated by the News Media Alliance, a newspaper trade group, to give newspapers an exemption from the antitrust laws, may also have similar roots. So too the investigations announced by dozens of elected state attorneys general.

The investigations recently opened by the FTC and Department of Justice may signal no more than a desire not to look idle while so many others act. Which is why the press has the power to turn fiction into reality. Moreover, under the current Administration, the Department of Justice has already undertaken two suspiciously partisan antitrust investigations, and President Trump has made clear his hatred for the liberal bastions that are Amazon, Google and Facebook. The fact that the press has made antitrust action against the tech giants a progressive cause provides convenient cover for the President to take down some enemies.

The future of the news

Rate regulation of Amazon, Google, or Facebook is the likely long-term resolution of concerns about these firms’ power. But that won’t bring back newspapers, which henceforth will always play the loom to Google and Facebook’s textile mills, at least in the advertising market.

Journalists and their defenders, like Teachout, have been pushing to restore newspapers’ old monopolies by government fiat. No doubt that would make existing newspapers, and their staffs, very happy. But what is good for Big News is not necessarily good for journalism in the long run.

The silver lining to the disruption of newspapers’ old advertising monopolies is that it has created an opportunity for newspapers to wean themselves off a funding source that has always made little sense for organizations dedicated to helping Americans make informed, independent decisions, free of the manipulation of others.

For advertising has always had a manipulative function, alongside its function of disseminating product information to consumers. And, as I have argued elsewhere, now that the vast amounts of product information available for free on the internet have made advertising obsolete as a source of product information, manipulation is now advertising’s only real remaining function.

Manipulation causes consumers to buy products they don’t really want, giving firms that advertise a competitive advantage that they don’t deserve. That makes for an antitrust problem, this time with real consequences not just for competitors, but also for technological advance, as manipulative advertising drives dollars away from superior products toward advertised products, and away from investment in innovation and toward investment in consumer seduction.

The solution is to ban all advertising, targeted or not, rather than to give newspapers an advertising monopoly. And to give journalism the state subsidies that, like all public goods, from defense to highways, are journalism’s genuine due. The BBC provides a model of how that can be done without fear of government influence.

Indeed, Teachout’s proposed newspaper advertising monopoly is itself just a government subsidy, but a subsidy extracted through an advertising medium that harms consumers. Direct government subsidization achieves the same result, without the collateral consumer harm.

The press’s brazen advocacy of antitrust action against the tech giants, pursued without making clear how much the press itself stands to gain from that action and despite the utter absence of any expert support for this approach, represents an abdication by the press of its responsibility to create an informed citizenry that is every bit as profound as the press’s lapses on social security a decade ago.

I’m glad we still have social security. But I’m also starting to miss balanced journalism.

1/3/2020: Editor’s note – this post was edited for clarification and minor copy edits.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Geoffrey A. Manne, president and founder of the International Center for Law & Economics, and Alec Stapp, Research Fellow at the International Center for Law & Economics.

Source: The Economist

Is there a relationship between concentrated economic power and political power? Do big firms have success influencing politicians and regulators to a degree that smaller firms — or even coalitions of small firms — could only dream of? That seems to be the narrative that some activists, journalists, and scholars are pushing of late. And, to be fair, it makes some intuitive sense (before you look at the data). The biggest firms have the most resources — how could they not have an advantage in the political arena?

The argument that corporate power leads to political power faces at least four significant challenges, however. First, the little empirical research there is does not support the claim. Second, there is almost no relationship between market capitalization (a proxy for economic power) and lobbying expenditures (an admittedly weak proxy for political power). Third, the absolute level of spending on lobbying is surprisingly low in the US given the potential benefits from rent-seeking (this is known as the Tullock paradox). Lastly, the proposed remedy for this supposed problem is to make antitrust more political — an intervention that is likely to make the problem worse rather than better (assuming there is a problem to begin with).

The claims that political power follows economic power

The claim that large firms or industry concentration causes political power (and thus that under-enforcement of antitrust laws is a key threat to our democratic system of government) is often repeated, and accepted as a matter of faith. Take, for example, Robert Reich’s March 2019 Senate testimony on “Does America Have a Monopoly Problem?”:

These massive corporations also possess substantial political clout. That’s one reason they’re consolidating: They don’t just seek economic power; they also seek political power.

Antitrust laws were supposed to stop what’s been going on.

* * *

[S]uch large size and gigantic capitalization translate into political power. They allow vast sums to be spent on lobbying, political campaigns, and public persuasion. (emphasis added)

Similarly, in an August 2019 article for The Guardian, law professor Ganesh Sitaraman argued there is a tight relationship between economic power and political power:

[R]eformers recognized that concentrated economic power — in any form — was a threat to freedom and democracy. Concentrated economic power not only allowed for localized oppression, especially of workers in their daily lives, it also made it more likely that big corporations and wealthy people wouldn’t be subject to the rule of law or democratic controls. Reformers’ answer to the concentration of economic power was threefold: break up economic power, rein it in through regulation, and tax it.

It was the reformers of the Gilded Age and Progressive Era who invented America’s antitrust laws — from the Sherman Antitrust Act of 1890 to the Clayton Act and Federal Trade Commission Acts of the early 20th century. Whether it was Republican trust-buster Teddy Roosevelt or liberal supreme court justice Louis Brandeis, courageous leaders in this era understood that when companies grow too powerful they threatened not just the economy but democratic government as well. Break-ups were a way to prevent the agglomeration of economic power in the first place, and promote an economic democracy, not just a political democracy. (emphasis added)

Luigi Zingales made a similar argument in his 2017 paper “Towards a Political Theory of the Firm”:

[T]he interaction of concentrated corporate power and politics is a threat to the functioning of the free market economy and to the economic prosperity it can generate, and a threat to democracy as well. (emphasis added)

The assumption that economic power leads to political power is not a new one. Not only, as Zingales points out, have political thinkers since Adam Smith asserted versions of the same, but more modern social scientists have continued the claims with varying (but always indeterminate) degrees of quantification. Zingales quotes Adolf Berle and Gardiner Means’ 1932 book, The Modern Corporation and Private Property, for example:

The rise of the modern corporation has brought a concentration of economic power which can compete on equal terms with the modern state — economic power versus political power, each strong in its own field. 

Russell Pittman (an economist at the DOJ Antitrust Division) argued in 1988 that rent-seeking activities would be undertaken only by firms in highly concentrated industries because:

if the industry in question is unconcentrated, then the firm may decide that the level of benefits accruing to the industry will be unaffected by its own level of contributions, so that the benefits may be enjoyed without incurrence of the costs. Such a calculation may be made by other firms in the industry, of course, with the result that a free-rider problem prevents firms individually from making political contributions, even if it is in their collective interest to do so.

For the most part, these claims are almost entirely theoretical, and their support anecdotal. Reich, for example, supports his claim with two thin anecdotes from which he draws a firm (but, in fact, unsupported) conclusion: 

To take one example, although the European Union filed fined [sic] Google a record $2.7 billion for forcing search engine users into its own shopping platforms, American antitrust authorities have not moved against the company.

Why not?… We can’t be sure why the FTC chose not to pursue Google. After all, section 5 of the Federal Trade Commission Act of 1914 gives the Commission broad authority to prevent unfair acts or practices. One distinct possibility concerns Google’s political power. It has one of the biggest lobbying powerhouses in Washington, and the firm gives generously to Democrats as well as Republicans.

A clearer example of an abuse of power was revealed last November when the New York Times reported that Facebook executives withheld evidence of Russian activity on their platform far longer than previously disclosed.

Even more disturbing, Facebook employed a political opposition research firm to discredit critics. How long will it be before Facebook uses its own data and platform against critics? Or before potential critics are silenced even by the possibility? As the Times’s investigation made clear, economic power cannot be separated from political power. (emphasis added)

The conclusion — that “economic power cannot be separated from political power” — simply does not follow from the alleged evidence. 

The relationship between economic power and political power is extremely weak

Few of these assertions of the relationship between economic and political power are backed by empirical evidence. Pittman’s 1988 paper is empirical (as is his previous 1977 paper looking at the relationship between industry concentration and contributions to Nixon’s re-election campaign), but it is also in direct contradiction to several other empirical studies (Zardkoohi (1985); Munger (1988); Esty and Caves (1983)) that find no correlation between concentration and political influence; Pittman’s 1988 paper is indeed a response to those papers, in part. 

In fact, as one study (Grier, Munger & Roberts (1991)) summarizes the evidence:

[O]f ten empirical investigations by six different authors/teams…, relatively few of the studies find a positive, significant relation between contributions/level of political activity and concentration, though a variety of measures of both are used…. 

There is little to recommend most of these studies as conclusive one way or the other on the question of interest. Each one suffers from a sample selection or estimation problem that renders its results suspect. (emphasis added)

And, as they point out, there is good reason to question the underlying theory of a direct correlation between concentration and political influence:

[L]egislation or regulation favorable to an industry is from the perspective of a given firm a public good, and therefore subject to Olson’s collective action problem. Concentrated industries should suffer less from this difficulty, since their sparse numbers make bargaining cheaper…. [But at the same time,] concentration itself may affect demand, suggesting that the predicted correlation between concentration and political activity may be ambiguous, or even negative. 

* * *

The only conclusion that seems possible is that the question of the correct relation between the structure of an industry and its observed level of political activity cannot be resolved theoretically. While it may be true that firms in a concentrated industry can more cheaply solve the collective action problem that inheres in political action, they are also less likely to need to do so than their more competitive brethren…. As is so often the case, the interesting question is empirical: who is right? (emphasis added)

The results of Grier, Munger & Roberts (1991)’s own empirical study are ambiguous at best (and relate only to political participation, not success, and thus not actual political power):

[A]re concentrated industries more or less likely to be politically active? Numerous previous studies have addressed this topic, but their methods are not comparable and their results are flatly contradictory. 

On the side of predicting a positive correlation between concentration and political activity is the theory that Olson’s “free rider” problem has more bite the larger the number of participants and the smaller their respective individual benefits. Opposing this view is the claim that it is precisely because such industries are concentrated that they have less need for government intervention. They can act on their own to garner the benefits of cartelization that less concentrated industries can secure only through political activity. 

Our results indicate that both sides are right, over some range of concentration. The relation between political activity and concentration is a polynomial of degree 2, rising and then falling, achieving a peak at a four-firm concentration ratio slightly below 0.5. (emphasis added)
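That finding, that activity rises and then falls with concentration, amounts to a simple quadratic relationship. As a minimal illustrative sketch (the coefficients below are hypothetical, chosen only so that the curve peaks near a four-firm concentration ratio of 0.5, as the study reports; they are not the paper’s actual estimates):

```python
# Stylized version of the Grier, Munger & Roberts degree-2 relation.
# b1 and b2 are hypothetical illustrations, picked so the parabola's
# vertex falls at CR4 = -b1 / (2 * b2) = 0.5.

def political_activity(cr4, b1=2.0, b2=-2.0):
    """Quadratic in the four-firm concentration ratio (CR4)."""
    return b1 * cr4 + b2 * cr4 ** 2

peak_cr4 = -2.0 / (2 * -2.0)  # vertex of the parabola: 0.5
```

On these hypothetical numbers, activity at CR4 = 0.3 or CR4 = 0.8 is lower than at the 0.5 peak, mirroring the “rising and then falling” pattern the authors describe.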

Despite all of this, Zingales (like others) explicitly claims that there is a clear and direct relationship between economic power and political power:

In the last three decades in the United States, the power of corporations to shape the rules of the game has become stronger… [because] the size and market share of companies has increased, which reduces the competition across conflicting interests in the same sector and makes corporations more powerful vis-à-vis consumers’ interest.

But a quick look at the empirical data continues to call this assertion into serious question. Indeed, if we look at the lobbying expenditures of the top 50 companies in the US by market capitalization, we see an extremely weak (at best) relationship between firm size and political power (as proxied by lobbying expenditures):

Of course, once again, this says little about the effectiveness of efforts to exercise political power, which could, in theory, correlate with market power but not expenditures. Yet the evidence on this suggests that, while concentration “increases both [political] activity and success…, [n]either firm size nor industry size has a robust influence on political activity or success.” (emphasis added). Of course there are enormous and well-known problems with measuring industry concentration, and it’s not clear that even this attribute is well-correlated with political activity or success. (Interestingly for the argument that firms in more concentrated industries realize higher profits from lax antitrust, and thus have more money to spend on political influence, even concentration in the Esty and Caves study is not correlated with political expenditures.)

Indeed, a couple of examples show the wide range of lobbying expenditures for a given firm size. Costco, which currently has a market cap of $130 billion, has spent only $210,000 on lobbying so far in 2019. By contrast, Amgen, which has a $144 billion market cap, has spent $8.54 million, or more than 40 times as much. As shown in the chart above, this variance is the norm. 
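The weakness of the relationship is easy to check with a simple correlation. In the hedged sketch below, the Costco and Amgen figures come from the text; the other three firms are entirely hypothetical placeholders, so the resulting coefficient is illustrative only:

```python
from math import sqrt

# Market cap in $ billions; 2019 lobbying spend in $ millions.
# First two entries are Costco and Amgen (figures from the text);
# the remaining three firms are hypothetical.
market_cap_bn = [130, 144, 100, 250, 180]
lobbying_mn = [0.21, 8.54, 3.0, 1.0, 6.0]

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(market_cap_bn, lobbying_mn)
ratio = 8.54 / 0.21  # Amgen outspends the similarly sized Costco ~40x
```

With numbers like these, r comes out small: two firms of nearly identical size differ in lobbying spend by a factor of roughly 40, which swamps any size effect.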

However, discussing the relative differences between these companies is less important than pointing out the absolute levels of expenditure. Spending eight and a half million dollars per year would not be prohibitive for literally thousands of firms in the US. If access is this cheap, what’s going on here?

Why is there so little money in US politics?

The Tullock paradox asks: if the return to rent-seeking is so high — which it plausibly is, given that the government spends trillions of dollars each year — why is so little money spent on influencing policymakers?

Considering the value of public policies at stake and the reputed influence of campaign contributors in policymaking, Gordon Tullock (1972) asked, why is there so little money in U.S. politics? In 1972, when Tullock raised this question, campaign spending was about $200 million. Assuming a reasonable rate of return, such an investment could have yielded at most $250-300 million over time, a sum dwarfed by the hundreds of billions of dollars worth of public expenditures and regulatory costs supposedly at stake.
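The back-of-the-envelope arithmetic in that passage can be made explicit. A sketch using the 1972 figures quoted above (the $200 billion figure for the stakes is an assumed stand-in for “hundreds of billions”):

```python
campaign_spend = 200e6    # total 1972 campaign spending: ~$200 million
plausible_return = 300e6  # upper bound on yield at a "reasonable rate"
stakes = 200e9            # assumed stand-in for "hundreds of billions"

# Even the generous upper-bound return is a tiny fraction of the
# stakes -- the puzzle Tullock posed.
return_share = plausible_return / stakes  # 0.0015, i.e. 0.15%
```

If influence really were for sale at these prices, we would expect spending to be bid up toward the value of the stakes; that it is not is the paradox.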

A recent article by Scott Alexander updated the numbers for 2019 and compared the total to the $12 billion US almond industry:

[A]ll donations to all candidates, all lobbying, all think tanks, all advocacy organizations, the Washington Post, Vox, Mic, Mashable, Gawker, and Tumblr, combined, are still worth a little bit less than the almond industry.

Maybe it’s because spending money on donations, lobbying, think tanks, journalism and advocacy is ineffective on net (i.e., spending by one group is counterbalanced by spending by another group) and businesses know it?

In his paper on elections, Ansolabehere focuses on the corporate perspective. He argues that money neither makes a candidate much more likely to win, nor buys much influence with a candidate who does win. Corporations know this, which is why they don’t bother spending more. (emphasis added)

To his credit, Zingales acknowledges this issue:

To the extent that US corporations are exercising political influence, it seems that they are choosing less-visible but perhaps more effective ways. In fact, since Gordon Tullock’s (1972) famous article, it has been a puzzle in political science why there is so little money in politics (as discussed in this journal by Ansolabehere, de Figueiredo, and Snyder 2003).

So, what are these “less-visible but perhaps more effective” ways? Unfortunately, the evidence in support of this claim is anecdotal and unconvincing. As noted above, Reich offers only speculation and extremely weak anecdotal assertions. Meanwhile, Zingales tells the story of Robert (mistakenly identified in the paper as “Richard”) Rubin pushing through repeal of Glass-Steagall to benefit Citigroup, then getting hired for $15 million a year when he left the government. Assuming the implication is actually true, is that amount really beyond the reach of all but the largest companies? How many banks with an interest in the repeal of Glass-Steagall were really unlikely at the time to be able to credibly offer future compensation because they would be out of business? Very few, and no doubt some of the biggest and most powerful were arguably at greater risk of bankruptcy than some of the smaller banks.

Maybe only big companies have an interest in doing this kind of thing because they have more to lose? But in concentrated industries they also have more to lose by conferring the benefit on their competitors. And it’s hard to make the repeal or passage of a law, say, apply only to you and not everyone else in the industry. Maybe they collude? Perhaps, but is there any evidence of this? Zingales offers only pure speculation here, as well. For example, why was the US Google investigation dropped but not the EU one? Clearly because of White House visits, says Zingales. OK — but how much do these visits cost firms? If that’s the source of political power, it surely doesn’t require monopoly profits to obtain it. And it’s virtually impossible that direct relationships of this kind are beyond the reach of coalitions of smaller firms, or even small firms, full stop.  

In any case, the political power explanation turns mostly on doling out favors in exchange for individuals’ payoffs — which just aren’t that expensive. And it’s doubtful that the size of a firm correlates with the quality of its one-on-one influence brokering, except to the extent that causation might run the other way — which would be an indictment not of size but of politics. Of course, in the Hobbesian world of political influence brokering, as in the Hobbesian world of pre-political society, size alone is not determinative so long as alliances can be made or outcomes turn on things other than size (e.g., weapons in the pre-political world; family connections in the world of political influence).

The Noerr–Pennington doctrine is highly relevant here as well. In Noerr, the Court ruled that “no violation of the [Sherman] Act can be predicated upon mere attempts to influence the passage or enforcement of laws” and “[j]oint efforts to influence public officials do not violate the antitrust laws even though intended to eliminate competition.” This would seem to explain, among other things, the existence of trade associations and other entities used by coalitions of small (and large) firms to influence the policymaking process.

If what matters for influence peddling is ultimately individual relationships and lobbying power, why aren’t the biggest firms in the world the lobbying firms and consultant shops? Why is Rubin selling out for $15 million a year if the benefit to Citigroup is in the billions? And, if concentration is the culprit, why isn’t it plausibly also the solution? It isn’t only the state that keeps the power of big companies in check; it’s other big companies, too. What Henry G. Manne said in his testimony on the Industrial Reorganization Act of 1973 remains true today: 

There is simply no correlation between the concentration ratio in an industry, or the size of its firms, and the effectiveness of the industry in the halls of Government.

In addition to the data presented earlier, this analysis would be incomplete if it did not mention the role of advocacy groups in influencing outcomes, the importance and size of large foundations, the role of unions, and the role of individual relationships.

Maybe voters matter more than money?

The National Rifle Association spends very little on direct lobbying efforts (less than $10 million over the most recent two-year cycle). The organization’s total annual budget is around $400 million. In the grand scheme of things, these are not overwhelming resources. But the NRA is widely regarded as one of the most powerful political groups in the country, particularly within the Republican Party. How could this be? In short, maybe it’s not Sturm Ruger, Remington Outdoor, and Smith & Wesson — the three largest gun manufacturers in the US — that influence gun regulations; maybe it’s the highly motivated voters who like to buy guns. 

The NRA has 5.5 million members, many of whom vote in primaries with gun rights as one of their top issues — if not the top issue. And with low turnout in primaries — only 8.7% of all registered voters participated in 2018 Republican primaries — a candidate seeking the Republican nomination all but has to secure an endorsement from the NRA. On this issue at least, the deciding factor is the intensity of voter preferences, not the magnitude of campaign donations from rent-seeking corporations.

The NRA is not the only counterexample to arguments like those from Zingales. Auto dealers are a constituency that is powerful not necessarily due to its raw size but through its dispersed nature. At the state level, almost every political district has an auto dealership (and the owners are some of the wealthiest and best-connected individuals in the area). It’s no surprise then that most states ban the direct sale of cars from manufacturers (i.e., you have to go through a dealer). This results in higher prices for consumers and lower output for manufacturers. But the auto dealership industry is not highly concentrated at the national level. The dealers don’t need to spend millions of dollars lobbying federal policymakers for special protections; they can do it on the local level — on a state-by-state basis — for much less money (and without merging into consolidated national chains).

Another, more recent, case highlights the factors besides money that may affect political decisions. President Trump has been highly critical of Jeff Bezos and the Washington Post (which Bezos owns) since the beginning of his administration because he views the newspaper as a political enemy. In October, Microsoft beat out Amazon for a $10 billion contract to provide cloud infrastructure for the Department of Defense (DoD). Now, Amazon is suing the government, claiming that Trump improperly influenced the competitive bidding process and cost the company a fair shot at the contract. This case is a good example of how money may not be determinative at the margin, and also how multiple “monopolies” may have conflicting incentives and we don’t know how they net out.

Politicizing antitrust will only make this problem worse

At the FTC’s “Hearings on Competition and Consumer Protection in the 21st Century,” Barry Lynn of the Open Markets Institute advocated using antitrust to counter the political power of economically powerful firms:

[T]he main practical goal of antimonopoly is to extend checks and balances into the political economy. The foremost goal is not and must never be efficiency. Markets are made, they do not exist in any platonic ether. The making of markets is a political and moral act.

In other words, the goal of breaking up economic power is not to increase economic benefits but to decrease political influence. 

But as the author of one of the empirical analyses of the relationship between economic and political power notes, the asserted “solution” to the unsupported “problem” of excess political influence by economically powerful firms — more and easier antitrust enforcement — may actually make the alleged problem worse:

Economic rents may be obtained through the process of market competition or be obtained by resorting to governmental protection. Rational firms choose the least costly alternative. Collusion to obtain governmental protection will be less costly, the higher the concentration, ceteris paribus. However, high concentration in itself is neither necessary nor sufficient to induce governmental protection.

The result that rent-seeking activity is triggered when firms are affected by government regulation has a clear implication: to reduce rent-seeking waste, governmental interference in the market place needs to be attenuated. Pittman’s suggested approach, however, is “to maintain a vigorous antitrust policy” (p. 181). In fact, a more strict antitrust policy may exacerbate rent-seeking. For example, the firms which will be affected by a vigorous application of antitrust laws would have incentive to seek moderation (or rents) from Congress or from the enforcement officials.

Rent-seeking by smaller firms could both become more prevalent and, paradoxically, ultimately lead to increased concentration. And imbuing antitrust with an ill-defined set of vague political objectives (as many proponents of these arguments desire) would also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing both the ability and the incentive to do so. 

And if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? With an expanded basis for increased enforcement, the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might find that we end up with even more concentration because the exceptions could subsume the rules. All of which of course highlights the fundamental, underlying irony of claims that we need to diminish the economic content of antitrust in order to reduce the political power of private firms: If you make antitrust more political, you’ll get less democratic, more politically determined, results.

Antitrust populists have a long list of complaints about competition policy, including: laws aren’t broad enough or tough enough, enforcers are lax, and judges tend to favor defendants over plaintiffs or government agencies. The populist push got a bump with the New York Times coverage of Lina Khan’s “Amazon’s Antitrust Paradox” in which she advocated breaking up Amazon and applying public utility regulation to platforms. Khan’s ideas were picked up by Sen. Elizabeth Warren, who has a plan for similar public utility regulation and promised to unwind earlier acquisitions by Amazon (Whole Foods and Zappos), Facebook (WhatsApp and Instagram), and Google (Waze, Nest, and DoubleClick).

Khan, Warren, and the other Break Up Big Tech populists don’t clearly articulate how consumers, suppliers — or anyone for that matter — would be better off with their mandated spinoffs. The Khan/Warren plan, however, requires a unique alignment of many factors: Warren must win the White House, Democrats must control both houses of Congress, and judges must substantially shift their thinking. It’s like turning a supertanker on a dime in the middle of a storm. Instead of publishing manifestos and engaging in antitrust hashtag hipsterism, maybe — just maybe — the populists can do something.

The populists seem to have three main grievances:

  • Small firms cannot enter the market or cannot thrive once they enter;
  • Suppliers, including workers, are getting squeezed; and
  • Speculation that someday firms will wake up, realize they have a monopoly, and begin charging noncompetitive prices to consumers.

Each of these grievances can be, and already has been, addressed by antitrust and competition litigation, in many cases through private antitrust suits. For example:

In the US, private actions are available for a wide range of alleged anticompetitive conduct, including coordinated conduct (e.g., price-fixing), single-firm conduct (e.g., predatory pricing), and mergers that would substantially lessen competition. 

If the antitrust populists are so confident that concentration is rising and firms are behaving anticompetitively and consumers/suppliers/workers are being harmed, then why don’t they organize an antitrust lawsuit against the worst of the worst violators? If anticompetitive activity is so obvious and so pervasive, finding compelling cases should be easy.

For example, earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. Why not put Sussman’s theory to the test by building an antitrust case around it? The discovery process would unleash a treasure trove of cost data and probably more than a few “hot docs.”

Khan argues:

While predatory pricing technically remains illegal, it is extremely difficult to win predatory pricing claims because courts now require proof that the alleged predator would be able to raise prices and recoup its losses. 

However, in her criticism of the court in the Apple e-books litigation, she lays out a clear rationale for courts to revise their thinking on predatory pricing [emphasis added]:

Judge Cote, who presided over the district court trial, refrained from affirming the government’s conclusion. Still, the government’s argument illustrates the dominant framework that courts and enforcers use to analyze predation—and how it falls short. Specifically, the government erred by analyzing the profitability of Amazon’s e-book business in the aggregate and by characterizing the conduct as “loss leading” rather than potentially predatory pricing. These missteps suggest a failure to appreciate two critical aspects of Amazon’s practices: (1) how steep discounting by a firm on a platform-based product creates a higher risk that the firm will generate monopoly power than discounting on non-platform goods and (2) the multiple ways Amazon could recoup losses in ways other than raising the price of the same e-books that it discounted.

Why not put Khan’s cross-subsidy theory to the test by building an antitrust case around it? Surely there’d be a document explaining how the firm expects to recoup its losses. Or, maybe not. Maybe by the firm’s accounting, it’s not losing money on the discounted products. Without evidence, it’s just speculation.

In fairness, one can argue that recent court decisions have made pursuing private antitrust litigation more difficult. For example, the Supreme Court’s decision in Twombly requires an antitrust plaintiff to show more than mere speculation based on circumstantial evidence in order to move forward to discovery. Decisions in matters such as Ashcroft v. Iqbal have made it more difficult for plaintiffs to maintain antitrust claims. Wal-Mart v. Dukes and Comcast Corp. v. Behrend subject antitrust class actions to more rigorous analysis. And in Ohio v. Amex, the Court ruled that antitrust plaintiffs can’t meet their burden of proof by showing only some effect on some part of a two-sided market.

At the same time, Jeld-Wen indicates that third-party plaintiffs can be awarded damages and obtain divestitures, even after mergers clear. In Jeld-Wen, a competitor filed suit to challenge the consummated Jeld-Wen/Craftmaster merger four years after the DOJ approved the merger without conditions. The challenge was lengthy, but successful, and a district court ordered damages and the divestiture of one of the combined firm’s manufacturing facilities six years after the merger closed.

Despite the possible challenges of pursuing a private antitrust suit, Daniel Crane’s review of US federal court workload statistics concludes that the incidence of private antitrust enforcement in the United States has been relatively stable since the mid-1980s — in the range of 600 to 900 new private antitrust filings a year. He also finds that resolution by trial has been relatively stable, at an average of less than 1 percent a year. Thus, it’s not clear that recent decisions have erected insurmountable barriers to antitrust plaintiffs.

In the US, third parties may fund private antitrust litigation and plaintiffs’ attorneys are allowed to work under a contingency fee arrangement, subject to court approval. A compelling case could be funded by deep-pocketed supporters of the populists’ agenda, big tech haters, or even investors. Perhaps the most well-known example is Peter Thiel’s bankrolling of Hulk Hogan’s takedown of Gawker. Before that, the savings and loan crisis led to a number of forced mergers which were later challenged in court, with the costs partially funded by the issuance of litigation tracking warrants.

The antitrust populist ranks are chock-a-block with economists, policy wonks, and go-getter attorneys. If they are so confident in their claims of rising concentration, bad behavior, and harm to consumers, suppliers, and workers, then they should put those ideas to the test with some slam dunk litigation. The fact that they haven’t suggests they may not have a case.

The Economists' Hour

John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” 

This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of The Economists’ Hour is that, in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning. 

Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.  

Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.” 

Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s. 

Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails a cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.

In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.

First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.

The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.

In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.

Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars concluding that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence that Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday down the line it can raise prices after it has driven all of its retail competition out of the market.

Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions”: acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that reduces competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,

“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”

This analysis suggests that a case-by-case review is necessary where antitrust plaintiffs can show evidence that a merger is likely to harm consumers. But shifting the burden of proof to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.
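The elasticity point from the study quoted above can be made concrete with a toy welfare calculation. All numbers below are hypothetical, chosen only to illustrate the tradeoff, and the model is a deliberate simplification of the authors' framework:

```python
# Toy welfare model of the "killer acquisition" tradeoff (all numbers hypothetical).
# The prospect of being acquired raises entrepreneurs' expected payoff, which can
# raise the rate of ex-ante innovation; but some acquired projects are shut down
# ex post, which is the efficiency loss the authors describe.

def total_welfare(allow_acquisitions, elasticity):
    base_rate = 1.0           # innovation rate with no acquisition channel
    value_per_project = 100   # social value of a project that reaches the market
    kill_share = 0.2          # share of projects acquired and then shelved
    if allow_acquisitions:
        rate = base_rate * (1 + elasticity)   # acquisition prospect spurs more entry
        surviving = rate * (1 - kill_share)   # but some projects are "killed"
    else:
        rate = base_rate
        surviving = rate
    return surviving * value_per_project

baseline = total_welfare(False, 0)
# Whether allowing acquisitions raises welfare turns on the innovation response:
print(total_welfare(True, elasticity=0.1) < baseline)   # True: killing effect dominates
print(total_welfare(True, elasticity=0.5) > baseline)   # True: innovation effect dominates
```

With a weak innovation response (elasticity 0.1), shelved projects outweigh the extra entry; with a strong response (0.5), welfare rises despite the killed projects. This is exactly why a blanket presumption against such acquisitions is hard to justify in the abstract.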

Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he points to the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to the United States’.

In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data. 

As explained in a recent blog post on Truth on the Market by ICLE’s chief economist Eric Fruits: 

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration. 

In fact, one recent study, “Are legacy airline mergers pro- or anti-competitive? Evidence from recent U.S. airline mergers,” takes it a step further. Data from legacy U.S. airline mergers appear to show that they have resulted in pro-consumer benefits once quality-adjusted fares are taken into account:

Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger… 

One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.

In other words, neither part of Appelbaum’s proposition (that Europe has cheaper fares, and that concentration has led to worse outcomes for consumers in the United States) appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.

Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While broadband prices are lower on average in Europe, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, U.S. vs. European Broadband Deployment: What Do the Data Say?, found:

U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.

Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th place among 29 countries (most of them European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries studied, if data usage is included (Model 3), and to 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).

Country Model 1 Model 2 Model 3 Model 4
Price Rank Price Rank Price Rank Price Rank
Australia $78.30 28 $82.81 27 $102.63 26 $84.45 23
Austria $48.04 17 $60.59 15 $73.17 11 $74.02 17
Belgium $46.82 16 $66.62 21 $75.29 13 $81.09 22
Canada $69.66 27 $74.99 25 $92.73 24 $76.57 19
Chile $33.42 8 $73.60 23 $83.81 20 $88.97 25
Czech Republic $26.83 3 $49.18 6 $69.91 9 $60.49 6
Denmark $43.46 14 $52.27 8 $69.37 8 $63.85 8
Estonia $30.65 6 $56.91 12 $81.68 19 $69.06 12
Finland $35.00 9 $37.95 1 $57.49 2 $51.61 1
France $30.12 5 $44.04 4 $61.96 4 $54.25 3
Germany $36.00 12 $53.62 10 $75.09 12 $66.06 11
Greece $35.38 10 $64.51 19 $80.72 17 $78.66 21
Iceland $65.78 25 $73.96 24 $94.85 25 $90.39 26
Ireland $56.79 22 $62.37 16 $76.46 14 $64.83 9
Italy $29.62 4 $48.00 5 $68.80 7 $59.00 5
Japan $40.12 13 $53.58 9 $81.47 18 $72.12 15
Latvia $20.29 1 $42.78 3 $63.05 5 $52.20 2
Luxembourg $56.32 21 $54.32 11 $76.83 15 $72.51 16
Mexico $35.58 11 $91.29 29 $120.40 29 $109.64 29
Netherlands $44.39 15 $63.89 18 $89.51 21 $77.88 20
New Zealand $59.51 24 $81.42 26 $90.55 22 $76.25 18
Norway $88.41 29 $71.77 22 $103.98 27 $96.95 27
Portugal $30.82 7 $58.27 13 $72.83 10 $71.15 14
South Korea $25.45 2 $42.07 2 $52.01 1 $56.28 4
Spain $54.95 20 $87.69 28 $115.51 28 $106.53 28
Sweden $52.48 19 $52.16 7 $61.08 3 $70.41 13
Switzerland $66.88 26 $65.01 20 $91.15 23 $84.46 24
United Kingdom $50.77 18 $63.75 17 $79.88 16 $65.44 10
United States $58.00 23 $59.84 14 $64.75 6 $62.94 7
Average $46.55 $61.70 $80.24 $73.73

Model 1: Unadjusted for demographics and content quality

Model 2: Adjusted for demographics but not content quality

Model 3: Adjusted for demographics and data usage

Model 4: Adjusted for demographics and content quality

Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:

The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing. 

In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE. 

Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition. 

In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.

Conclusion

At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway.  For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors. 

So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There’s no escaping mental models to understand the world. It is just a question of whether we are willing to change our mind if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”

For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.

A spate of recent newspaper investigations and commentary has focused on Apple allegedly discriminating against rivals in the App Store. The underlying assumption is that Apple, as a vertically integrated entity that both operates a platform for third-party apps and makes its own apps, is acting nefariously whenever it “discriminates” against rival apps through prioritization, enters popular app markets, or charges a “tax” or “surcharge” on rival apps.

For most people, the word discrimination has a pejorative connotation of animus based upon prejudice: racism, sexism, homophobia. One of the definitions you will find in the dictionary reflects this. But another definition is a lot less charged: the act of making or perceiving a difference. (This is what people mean when they say that a person has a discriminating palate, or a discriminating taste in music, for example.)

In economics, discrimination can be a positive attribute. For instance, effective price discrimination can result in wealthier consumers paying a higher price than less well off consumers for the same product or service, and it can ensure that products and services are in fact available for less-wealthy consumers in the first place. That would seem to be a socially desirable outcome (although under some circumstances, perfect price discrimination can be socially undesirable). 
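The intuition can be shown with a toy numbers example (all values hypothetical): suppose two equal-sized consumer groups value the same service differently, and the seller must choose between one uniform price and group-specific prices:

```python
# Toy illustration of price discrimination across two consumer groups.
# All numbers are hypothetical and chosen only to make the welfare point.
wtp_rich, wtp_poor = 10.0, 4.0   # willingness to pay per consumer in each group
n_rich, n_poor = 100, 100        # number of consumers in each group
cost = 3.0                       # marginal cost of serving one consumer

def profit(price):
    """Profit under a single uniform price: only consumers with WTP >= price buy."""
    buyers = (n_rich if price <= wtp_rich else 0) + (n_poor if price <= wtp_poor else 0)
    return buyers * (price - cost)

# Best uniform price: charge the high price and serve only the wealthy group,
# or the low price and serve everyone at a thin margin.
best_uniform = max(profit(wtp_rich), profit(wtp_poor))

# Under price discrimination, each group pays its own WTP and both are served.
discriminating = n_rich * (wtp_rich - cost) + n_poor * (wtp_poor - cost)

print(best_uniform)     # 700.0 -> the seller serves only the wealthier group
print(discriminating)   # 800.0 -> both groups are served; output is higher
```

At the profit-maximizing uniform price ($10), the less wealthy group is priced out entirely; with discrimination, the seller finds it profitable to serve them at $4, which is the "products available for less-wealthy consumers" effect described above.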

Antitrust law rightly condemns conduct only when it harms competition, not simply when it harms a competitor. This is because it is competition that enhances consumer welfare, not the presence or absence of a competitor — or, indeed, the profitability of competitors. The difficult task for antitrust enforcers is to determine when a vertically integrated firm with “market power” in an upstream market is able to effectively discriminate against rivals in a downstream market in a way that harms consumers.

Even assuming the claims of critics are true, alleged discrimination by Apple against competitor apps in the App Store may harm those competitors, but it doesn’t necessarily harm either competition or consumer welfare.

The three potential antitrust issues facing Apple can be summarized as:

  • prioritization of Apple’s own apps in App Store search results;
  • Apple’s entry into markets occupied by popular third-party apps; and
  • the commission (the so-called “tax” or “surcharge”) Apple charges on rival apps’ sales.

There is nothing new here economically. All three issues are analogous to claims against other tech companies. But, as I detail below, the evidence to establish any of these claims at best represents harm to competitors, and fails to establish any harm to the competitive process or to consumer welfare.

Prioritization

Antitrust enforcers have rejected similar prioritization claims against Google. For instance, rivals like Microsoft and Yelp have funded attacks against Google, arguing the search engine is harming competition by prioritizing its own services in its product search results over competitors. As ICLE and affiliated scholars have pointed out, though, there is nothing inherently harmful to consumers about such prioritization. There are also numerous benefits in platforms directly answering queries, even if it ends up directing users to platform-owned products or services.

As Geoffrey Manne has observed:

there is good reason to believe that Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to vigorously compete and to decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content to partially displace the original “ten blue links” design of its search results page and offer its own answers to users’ queries in its stead. 

Here, the antitrust case against Apple for prioritization is similarly flawed. For example, as noted in a recent article in the WSJ, users often use the App Store search in order to find apps they already have installed:

“Apple customers have a very strong connection to our products and many of them use search as a way to find and open their apps,” Apple said in a statement. “This customer usage is the reason Apple has strong rankings in search, and it’s the same reason Uber, Microsoft and so many others often have high rankings as well.” 

If a substantial portion of searches within the App Store are for apps already on the iPhone, then showing the Apple app near the top of the search results could easily be consumer welfare-enhancing. 

Apple is also, in theory, leaving money on the table by prioritizing its own (already pre-loaded) apps over third-party apps. If its algorithm promotes its own free apps over third-party apps that would earn it a 30% commission, the prioritization is costing Apple revenue, not conferring a “benefit” on it. Apple is ultimately in the business of selling hardware. Losing iPhone or iPad customers by prioritizing apps consumers want less would not be a winning business strategy.

Further, it stands to reason that those who use an iPhone may have a preference for Apple apps. Such consumers would be naturally better served by seeing Apple’s apps prioritized over third-party developer apps. And if consumers do not prefer Apple’s apps, rival apps are merely seconds of scrolling away.

Moreover, all of the above assumes that Apple is engaging in sufficiently pervasive discrimination through prioritization to have a major impact on the app ecosystem. But substantial evidence exists that the universe of searches for which Apple’s algorithm prioritizes Apple apps is small. For instance, most searches are for branded apps already known by the searcher:

Keywords: how many are brands?

  • Top 500: 58.4%
  • Top 400: 60.75%
  • Top 300: 68.33%
  • Top 200: 80.5%
  • Top 100: 86%
  • Top 50: 90%
  • Top 25: 92%
  • Top 10: 100%

This is corroborated by data from the NYT’s own study, which suggests Apple listed its own apps first for only roughly 1% of the keywords queried.

Whatever the precise extent of prioritization, any claims of harm are undermined by the reality that almost 99% of App Store searches don’t list Apple apps first.

The fact is, very few keyword searches are even allegedly affected by prioritization. And the algorithm is often adjusting to searches for apps already pre-loaded on the device. Under these circumstances, it is very difficult to conclude consumers are being harmed by prioritization in search results of the App Store.

Entry

The issue of Apple building apps to compete with popular apps in its marketplace is similar to complaints about Amazon creating its own brands to compete with what is sold by third parties on its platform. For instance, as reported multiple times in the Washington Post:

Clue, a popular app that women use to track their periods, recently rocketed to the top of the App Store charts. But the app’s future is now in jeopardy as Apple incorporates period and fertility tracking features into its own free Health app, which comes preinstalled on every device. Clue makes money by selling subscriptions and services in its free app. 

However, there is nothing inherently anticompetitive about retailers selling their own brands. If anything, entry into the market is normally procompetitive. As Randy Picker recently noted with respect to similar claims against Amazon: 

The heart of this dynamic isn’t new. Sears started its catalogue business in 1888 and then started using the Craftsman and Kenmore brands as in-house brands in 1927. Sears was acquiring inventory from third parties and obviously knew exactly which ones were selling well and presumably made decisions about which markets to enter and which to stay out of based on that information. Walmart, the nation’s largest retailer, has a number of well-known private brands and firms negotiating with Walmart know full well that Walmart can enter their markets, subject of course to otherwise applicable restraints on entry such as intellectual property laws… I think that is possible to tease out advantages that a platform has regarding inventory experimentation. It can outsource some of those costs to third parties, though sophisticated third parties should understand where they can and cannot have a sustainable advantage given Amazon’s ability to move to build-or-bought first-party inventory. We have entire bodies of law— copyright, patent, trademark and more—that limit the ability of competitors to appropriate works, inventions and symbols. Those legal systems draw very carefully considered lines regarding permitted and forbidden uses. And antitrust law generally favors entry into markets and doesn’t look to create barriers that block firms, large or small, from entering new markets.

If anything, Apple is in an even better position than Amazon. Apple invests revenue in app development, not because the apps themselves generate revenue, but because it wants people to use the hardware, i.e. the iPhones, iPads, and Apple Watches. The reason Apple created an App Store in the first place is because this allows Apple to make more money from selling devices. In order to promote security on those devices, Apple institutes rules for the App Store, but it ultimately decides whether to create its own apps and provide access to other apps based upon its desire to maximize the value of the device. If Apple chooses to create free apps in order to improve iOS for users and sell more hardware, it is not a harm to competition.

Apple’s ability to enter into popular app markets should not be constrained unless it can be shown that by giving consumers another choice, consumers are harmed. As noted above, most searches in the App Store are for branded apps to begin with. If consumers already know what they want in an app, it hardly seems harmful for Apple to offer — and promote — its own, additional version as well. 

In the case of Clue, if Apple creates a free health app, it may hurt sales for Clue. But it doesn’t hurt consumers who want the functionality and would prefer to get it from Apple for free. This sort of product evolution is not harming competition, but enhancing it. And, it must be noted, Apple doesn’t exclude Clue from its devices. If, indeed, Clue offers a better product, or one that some users prefer, they remain able to find it and use it.

The so-called App Store “Tax”

The argument that Apple has an unfair competitive advantage over rival apps which have to pay commissions to Apple to be on the App Store (a “tax” or “surcharge”) has similarly produced no evidence of harm to consumers. 

Apple invested a lot into building the iPhone and the App Store. This infrastructure has created an incredibly lucrative marketplace for app developers to exploit. And, lest we forget a point fundamental to our legal system, Apple’s App Store is its property.

The WSJ and NYT stories give the impression that Apple uses its commissions on third-party apps to reduce competition for its own apps. However, this is inconsistent with how Apple charges its commission.

For instance, Apple doesn’t charge commissions on free apps, which make up 84% of the App Store. Apple also doesn’t charge commissions for apps that are free to download but supported by advertising, including hugely popular apps like Yelp, Buzzfeed, Instagram, Pinterest, Twitter, and Facebook. Even “reader” apps, where users purchase or subscribe to content outside the app but use the app to access that content, are not subject to commissions; examples include Spotify, Netflix, Amazon Kindle, and Audible. Apps for “physical goods and services” (like Amazon, Airbnb, Lyft, Target, and Uber) are also free to download and not subject to commissions. The classes of apps which are subject to a 30% commission are:

  • paid apps (like many games);
  • free apps with in-app purchases (other games, and services like Skype and TikTok);
  • free apps with digital subscriptions (Pandora, Hulu), which pay a 30% commission in the first year and 15% in subsequent years; and
  • cross-platform apps (Dropbox, Hulu, and Minecraft), which allow digital goods and services to be purchased in-app; Apple collects a commission on in-app sales, but not on sales made on other platforms.

Despite protestations to the contrary, these costs are hardly unreasonable: third party apps receive the benefit not only of being in Apple’s App Store (without which they wouldn’t have any opportunity to earn revenue from sales on Apple’s platform), but also of the features and other investments Apple continues to pour into its platform — investments that make the ecosystem better for consumers and app developers alike. There is enormous value to the platform Apple has invested in, and a great deal of it is willingly shared with developers and consumers.  It does not make it anticompetitive to ask those who use the platform to pay for it. 

In fact, these benefits are probably even more important for smaller developers than for bigger ones, who can invest in the necessary back end to reach consumers without the App Store (like Netflix, Spotify, and Amazon Kindle). For apps without brand reputation (and giant marketing budgets), the ability for consumers to trust that downloading the app will not lead to the installation of malware (as often occurs when downloading from the web) is surely essential to small developers’ ability to compete. The App Store offers this.

Despite the claims made in Spotify’s complaint against Apple, Apple doesn’t have a duty to deal with app developers. Indeed, Apple could theoretically fill the App Store with only apps that it developed itself, like Apple Music. Instead, Apple has opted for a platform business model, which entails the creation of a new outlet for others’ innovation and offerings. This is pro-consumer in that it created an entire marketplace that consumers probably didn’t even know they wanted — and certainly had no means to obtain — until it existed. Spotify, which out-competed iTunes to the point that Apple had to go back to the drawing board and create Apple Music, cannot realistically complain that Apple’s entry into music streaming is harmful to competition. Rather, it is precisely what vigorous competition looks like: the creation of more product innovation, lower prices, and arguably (at least for some) higher quality.

Interestingly, Spotify is not even subject to the App Store commission. Instead, Spotify offers a work-around to iPhone users to obtain its premium version without ads on iOS. What Spotify actually desires is the ability to sell premium subscriptions to Apple device users without paying anything above the de minimis up-front cost to Apple for the creation and maintenance of the App Store. It is unclear how many potential Spotify users are affected by the inability to directly buy the ad-free version since Spotify discontinued offering it within the App Store. But, whatever the potential harm to Spotify itself, there’s little reason to think consumers or competition bear any of it. 

Conclusion

There is no evidence that Apple’s alleged “discrimination” against rival apps harms consumers. Indeed, the opposite would seem to be the case. The regulatory discrimination against successful tech platforms like Apple and the App Store is far more harmful to consumers.

Why Data Is Not the New Oil

Alec Stapp —  8 October 2019

“Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing it can be a large fixed cost). Under perfect competition, the market clearing price equals the marginal cost of production (hence data is traded for free services while oil still requires cold, hard cash).
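The pricing logic above can be sketched numerically. The oil figure is the Saudi cost cited in the text; the fixed-cost and volume numbers are purely illustrative assumptions, chosen to show why even a large fixed cost adds little per unit at scale.

```python
# Stylized sketch: under perfect competition, price is bid down to marginal cost,
# so near-zero-marginal-cost data trades for "free" while oil commands a price.
# (Oil figure from the text; fixed cost and volume are hypothetical.)
def competitive_price(marginal_cost: float) -> float:
    """Long-run competitive price equals marginal cost."""
    return marginal_cost

def average_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    """Per-unit cost: fixed cost spread over volume, plus marginal cost."""
    return fixed_cost / units + marginal_cost

oil_price = competitive_price(5.49)   # positive marginal cost => positive price
data_price = competitive_price(0.0)   # ~zero marginal cost => ~zero money price

# A hypothetical $10M collection system spread over 100M records adds only
# ten cents per record, shrinking toward zero as scale grows.
per_record = average_cost(10_000_000, 0.0, 100_000_000)
print(oil_price, data_price, per_record)
```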

5. Oil is a search good; data is an experience good

Oil is a search good, meaning its value can be assessed prior to purchasing. By contrast, data tends to be an experience good because companies don’t know how much a new dataset is worth until it has been combined with pre-existing datasets and deployed using algorithms (from which value is derived). This is one reason why purpose limitation rules can have unintended consequences. If firms are unable to predict what data they will need in order to develop new products, then restricting what data they’re allowed to collect is per se anti-innovation.

6. Oil has constant returns to scale; data has rapidly diminishing returns

As an energy input into a mechanical process, oil has relatively constant returns to scale (e.g., when oil is used as the fuel source to power a machine). When data is used as an input for an algorithm, it shows rapidly diminishing returns, as the charts collected in a presentation by Google’s Hal Varian demonstrate. The initial training data is hugely valuable for increasing an algorithm’s accuracy. But as you increase the dataset by a fixed amount each time, the improvements steadily decline (because new data is helpful only insofar as it is differentiated from the existing dataset).
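The diminishing-returns pattern can be illustrated with a stylized learning curve. The functional form below (accuracy approaching a ceiling like `a_max - c/sqrt(n)`) is an assumption for illustration, not Varian’s actual data, but it reproduces the qualitative shape: each fixed-size addition of data buys less accuracy than the one before.

```python
import math

# Stylized learning curve (assumed functional form, for illustration only):
# accuracy rises toward a ceiling a_max as training examples n grow.
def accuracy(n: int, a_max: float = 0.95, c: float = 3.0) -> float:
    return a_max - c / math.sqrt(n)

# Add a fixed 10,000 examples each round and record the marginal gain.
gains = [accuracy(n + 10_000) - accuracy(n) for n in range(10_000, 60_000, 10_000)]
print([round(g, 4) for g in gains])  # each fixed-size increment helps less than the last
```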

7. Oil is valuable; data is worthless

The features detailed above — rivalrousness, fungibility, marginal cost, returns to scale — all lead to perhaps the most important distinction between oil and data: The average barrel of oil is valuable (currently $56.49) and the average dataset is worthless (on the open market). As Will Rinehart showed, putting a price on data is a difficult task. But when data brokers and other intermediaries in the digital economy do try to value data, the prices are almost uniformly low. The Financial Times had the most detailed numbers on what personal data is sold for in the market:

  • “General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”
  • “A person who is shopping for a car, a financial product or a vacation is more valuable to companies eager to pitch those goods. Auto buyers, for instance, are worth about $0.0021 a pop, or $2.11 per 1,000 people.”
  • “Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”
  • “For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”
  • “The company estimates that the value of a relatively high Klout score adds up to more than $3 in word-of-mouth marketing value.”
  • “[T]he sum total for most individuals often is less than a dollar.”

Data is a specific asset, meaning it has “a significantly higher value within a particular transacting relationship than outside the relationship.” We only think data is so valuable because tech companies are so valuable. In reality, it is the combination of high-skilled labor, large capital expenditures, and cutting-edge technologies (e.g., machine learning) that makes those companies so valuable. Yes, data is an important component of these production functions. But to claim that data is responsible for all the value created by these businesses, as Lanier does in his NYT op-ed, is farcical (and reminiscent of the labor theory of value). 

Conclusion

People who analogize data to oil or gold may merely be trying to convey that data is as valuable in the 21st century as those commodities were in the 20th century (though, as argued above, that is a dubious proposition). If the comparison stopped there, it would be relatively harmless. But there is a real risk that policymakers might take the analogy literally and regulate data in the same way they regulate commodities. As this article shows, data has many unique properties that are simply incompatible with 20th-century modes of regulation.

A better — though imperfect — analogy, as author Bernard Marr suggests, would be renewable energy. The sources of renewable energy are all around us — solar, wind, hydroelectric — and there is more available than we could ever use. We just need the right incentives and technology to capture it. The same is true for data. We leave our digital fingerprints everywhere — we just need to dust for them.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest-margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization, with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But the facts may not fit the theory.

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurants strategy was a failure, it seems odd that the company would continue acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged buyout era added financial pressure. Many restaurant groups were filing for bankruptcy and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

And if David finds out the data beneath his profile, you’ll start to be able to connect the dots in various ways with Facebook and Cambridge Analytica and Trump and Brexit and all these loosely-connected entities. Because you get to see inside the beast, you get to see inside the system.

This excerpt from the beginning of Netflix’s The Great Hack shows the goal of the documentary: to provide one easy explanation for Brexit and the election of Trump, two of the most surprising electoral outcomes in recent history.

Unfortunately, in attempting to tell a simple narrative, the documentary obscures more than it reveals about what actually happened in the Facebook-Cambridge Analytica data scandal. In the process, the film wildly overstates the significance of the scandal in either the 2016 US presidential election or the 2016 UK referendum on leaving the EU.

In this article, I will review the background of the case and show seven things the documentary gets wrong about the Facebook-Cambridge Analytica data scandal.

Background

In 2013, researchers published a paper showing that you could predict some personality traits — openness and extraversion — from an individual’s Facebook Likes. Cambridge Analytica wanted to use Facebook data to create a “psychographic” profile — i.e., personality type — of each voter and then micro-target them with political messages tailored to their personality type, ultimately with the hope of persuading them to vote for Cambridge Analytica’s client (or at least to not vote for the opposing candidate).

In this case, the psychographic profile is the person’s Big Five (or OCEAN) personality traits, which research has shown are relatively stable throughout our lives:

  1. Openness to new experiences
  2. Conscientiousness
  3. Extroversion
  4. Agreeableness
  5. Neuroticism

But how to get the Facebook data to create these profiles? A researcher at Cambridge University, Alex Kogan, created an app called thisismydigitallife, a short quiz for determining your personality type. Between 250,000 and 270,000 people were paid a small amount of money to take this quiz. 

Those who took the quiz shared some of their own Facebook data as well as their friends’ data (so long as the friends’ privacy settings allowed third-party app developers to access their data). 

This process captured data on “at least 30 million identifiable U.S. consumers”, according to the FTC. For context, even if we assume all 30 million were registered voters, that means the data could be used to create profiles for less than 20 percent of the relevant population. And though some may disagree with Facebook’s policy for sharing user data with third-party developers, collecting data in this manner was in compliance with Facebook’s terms of service at the time.

What crossed the line was what happened next. Kogan then sold that data to Cambridge Analytica, without the consent of the affected Facebook users and in express violation of Facebook’s prohibition on selling Facebook data between third and fourth parties. 

Upon learning of the sale, Facebook directed Alex Kogan and Cambridge Analytica to delete the data. But the social media company failed to notify users that their data had been misused or confirm via an independent audit that the data was actually deleted.

1. Cambridge Analytica was selling snake oil (no, you are not easily manipulated)

There’s a line in The Great Hack that sums up the opinion of the filmmakers and the subjects in their story: “There’s 2.1 billion people, each with their own reality. And once everybody has their own reality, it’s relatively easy to manipulate them.” According to the latest research from political science, this is completely bogus (and it’s the same marketing puffery that Cambridge Analytica would pitch to prospective clients).

The best evidence in this area comes from Joshua Kalla and David E. Broockman in a 2018 study published by American Political Science Review:

We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero. First, a systematic meta-analysis of 49 field experiments estimates an average effect of zero in general elections. Second, we present nine original field experiments that increase the statistical evidence in the literature about the persuasive effects of personal contact 10-fold. These experiments’ average effect is also zero.

In other words, a meta-analysis covering 49 high-quality field experiments found that in US general elections, advertising has zero effect on the outcome. (However, there is evidence “campaigns are able to have meaningful persuasive effects in primary and ballot measure campaigns, when partisan cues are not present.”)

But the relevant conclusion for the Cambridge Analytica scandal remains the same: in highly visible elections with a polarized electorate, it simply isn’t that easy to persuade voters to change their minds.

2. Micro-targeting political messages is overrated — people prefer general messages on shared beliefs

But maybe Cambridge Analytica’s micro-targeting strategy would result in above-average effects? The literature provides reason for skepticism here as well. Another paper by Eitan D. Hersh and Brian F. Schaffner in The Journal of Politics found that voters “rarely prefer targeted pandering to general messages” and “seem to prefer being solicited based on broad principles and collective beliefs.” It’s political tribalism all the way down. 

A field experiment with 56,000 Wisconsin voters in the 2008 US presidential election found that “persuasive appeals possibly reduced candidate support and almost certainly did not increase it,” suggesting that  “contact by a political campaign can engender a backlash.”

3. Big Five personality traits are not very useful for predicting political orientation

Or maybe there’s something special about targeting political messages based on a person’s Big Five personality traits? Again, there is little reason to believe this is the case. As Kris-Stella Trump notes in an article for The Washington Post:

The ‘Big 5’ personality traits … only predict about 5 percent of the variation in individuals’ political orientations. Even accurate personality data would only add very little useful information to a data set that includes people’s partisanship — which is what most campaigns already work with.

The best evidence we have on the importance of personality traits on decision-making comes from the marketing literature (n.b., it’s likely easier to influence consumer decisions than political decisions in today’s increasingly polarized electorate). Here too the evidence is weak:

In this successful study, researchers targeted personality-based ads to more than 1.5 million people; the result was only about 100 more purchases of beauty products than if they had advertised without targeting.

More to the point, the Facebook data obtained by Cambridge Analytica couldn’t even accomplish the simple task of matching Facebook Likes to the Big Five personality traits. Here’s Cambridge University researcher Alex Kogan in Michael Lewis’s podcast episode about the scandal: 

We started asking the question of like, well, how often are we right? And so there’s five personality dimensions? And we said like, okay, for what percentage of people do we get all five personality categories correct? We found it was like 1%.
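Kogan’s 1% figure is roughly what independence would predict. As a back-of-the-envelope check (the 40% per-trait accuracy and the independence assumption are purely illustrative, not Kogan’s actual model), getting all five traits right at once compounds multiplicatively:

```python
# Hypothetical check: if each of the five traits is predicted correctly with
# probability p, and predictions are independent, all five are right p**5 of
# the time. Even a respectable-sounding 40% per-trait accuracy compounds to ~1%.
p = 0.4                     # assumed per-trait accuracy, for illustration only
all_five_correct = p ** 5
print(round(all_five_correct, 4))  # 0.0102 -- about 1%, consistent with Kogan's figure
```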

Eitan Hersh, an associate professor of political science at Tufts University, summed it up best: “Every claim about psychographics etc made by or about [Cambridge Analytica] is BS.”

4. If Cambridge Analytica’s “weapons-grade communications techniques” were so powerful, then Ted Cruz would be president

The Great Hack:

Ted Cruz went from the lowest rated candidate in the primaries to being the last man standing before Trump got the nomination… Everyone said Ted Cruz had this amazing ground game, and now we know who came up with all of it. Joining me now, Alexander Nix, CEO of Cambridge Analytica, the company behind it all.

Reporting by Nicholas Confessore and Danny Hakim at The New York Times directly contradicts this framing on Cambridge Analytica’s role in the 2016 Republican presidential primary:

Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates.

Most significantly, the Cruz campaign stopped using Cambridge Analytica’s services in February 2016 due to disappointing results, as Kenneth P. Vogel and Darren Samuelsohn reported in Politico in June of that year:

Cruz’s data operation, which was seen as the class of the GOP primary field, was disappointed in Cambridge Analytica’s services and stopped using them before the Nevada GOP caucuses in late February, according to a former staffer for the Texas Republican.

“There’s this idea that there’s a magic sauce of personality targeting that can overcome any issue, and the fact is that’s just not the case,” said the former staffer, adding that Cambridge “doesn’t have a level of understanding or experience that allows them to target American voters.”

Vogel later tweeted that most firms hired Cambridge Analytica “because it was seen as a prerequisite for receiving $$$ from the MERCERS.” So it seems campaigns hired Cambridge Analytica not for its “weapons-grade communications techniques” but for the firm’s connections to billionaire Robert Mercer.

5. The Trump campaign phased out Cambridge Analytica data in favor of RNC data for the general election

Just as the Cruz campaign became disillusioned after working with Cambridge Analytica during the primary, so too did the Trump campaign during the general election, as Major Garrett reported for CBS News:

The crucial decision was made in late September or early October when Mr. Trump’s son-in-law Jared Kushner and Brad Parscale, Mr. Trump’s digital guru on the 2016 campaign, decided to utilize just the RNC data for the general election and used nothing from that point from Cambridge Analytica or any other data vendor. The Trump campaign had tested the RNC data, and it proved to be vastly more accurate than Cambridge Analytica’s, and when it was clear the RNC would be a willing partner, Mr. Trump’s campaign was able to rely solely on the RNC.

And of the little work Cambridge Analytica did complete for the Trump campaign, none involved “psychographics,” The New York Times reported:

Mr. Bannon at one point agreed to expand the company’s role, according to the aides, authorizing Cambridge to oversee a $5 million purchase of television ads. But after some of them appeared on cable channels in Washington, D.C. — hardly an election battleground — Cambridge’s involvement in television targeting ended.

Trump aides … said Cambridge had played a relatively modest role, providing personnel who worked alongside other analytics vendors on some early digital advertising and using conventional micro-targeting techniques. Later in the campaign, Cambridge also helped set up Mr. Trump’s polling operation and build turnout models used to guide the candidate’s spending and travel schedule. None of those efforts involved psychographics.

6. There is no evidence that Facebook data was used in the Brexit referendum

Last year, the UK’s data protection authority fined Facebook £500,000 — the maximum penalty allowed under the law — for violations related to the Cambridge Analytica data scandal. The fine was astonishing considering that the investigation of Cambridge Analytica’s licensed data derived from Facebook “found no evidence that UK citizens were among them,” according to the BBC. This detail demolishes the second central claim of The Great Hack, that data fraudulently acquired from Facebook users enabled Cambridge Analytica to manipulate the British people into voting for Brexit. On this basis, Facebook is currently appealing the fine.

7. The Great Hack wasn’t a “hack” at all

The title of the film is an odd choice given the facts of the case, as detailed in the background section of this article. A “hack” is generally understood as an unauthorized breach of a computer system or network by a malicious actor. People think of a genius black hat programmer who overcomes a company’s cybersecurity defenses to profit off stolen data. Alex Kogan, the Cambridge University researcher who acquired the Facebook data for Cambridge Analytica, was nothing of the sort. 

As Gus Hurwitz noted in an article last year, Kogan entered into a contract with Facebook and asked users for their permission to acquire their data by using the thisismydigitallife personality app. Arguably, if there was a breach of trust, it was when the app users chose to share their friends’ data, too. The editorial choice to call this a “hack” instead of “data collection” or “data scraping” is of a piece with the rest of the film; when given a choice between accuracy and sensationalism, the directors generally chose the latter.

Why does this narrative persist despite the facts of the case?

The takeaway from the documentary is that Cambridge Analytica hacked Facebook and subsequently undermined two democratic processes: the Brexit referendum and the 2016 US presidential election. The reason this narrative has stuck in the public consciousness is that it serves everyone’s self-interest (except, of course, Facebook’s).

It lets voters off the hook for what seem, to many, to be drastic mistakes (i.e., electing a reality TV star president and undoing the European project). If we were all manipulated into making the “wrong” decision, then the consequences can’t be our fault! 

This narrative also serves Cambridge Analytica, to a point. For a time, the political consultant liked being able to tell prospective clients that it was the mastermind behind two stunning political upsets. Lastly, journalists like the story because they compete with Facebook in the advertising market and view the tech giant as an existential threat.

There is no evidence for the film’s implicit assumption that, but for Cambridge Analytica’s use of Facebook data to target voters, Trump wouldn’t have been elected and the UK wouldn’t have voted to leave the EU. Despite its tone and ominous presentation style, The Great Hack fails to muster any support for its extreme claims. The truth is much more mundane: the Facebook-Cambridge Analytica data scandal was neither a “hack” nor was it “great” in historical importance.

The documentary ends with a question:

But the hardest part in all of this is that these wreckage sites and crippling divisions begin with the manipulation of one individual. Then another. And another. So, I can’t help but ask myself: Can I be manipulated? Can you?

No — but the directors of The Great Hack tried their best to do so.

[This post is the seventh in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Alec Stapp, Research Fellow at the International Center for Law & Economics]

Should we break up Microsoft? 

In all the talk of breaking up “Big Tech,” no one seems to mention the biggest tech company of them all. Microsoft’s market cap is currently higher than those of Apple, Google, Amazon, and Facebook. If big is bad, then, at the moment, Microsoft is the worst.

Apart from size, antitrust activists also claim that the structure and behavior of the Big Four — Facebook, Google, Apple, and Amazon — is why they deserve to be broken up. But they never include Microsoft, which is curious given that most of their critiques also apply to the largest tech giant:

  1. Microsoft is big (current market cap exceeds $1 trillion)
  2. Microsoft is dominant in narrowly-defined markets (e.g., desktop operating systems)
  3. Microsoft is simultaneously operating and competing on a platform (i.e., the Microsoft Store)
  4. Microsoft is a conglomerate capable of leveraging dominance from one market into another (e.g., Windows, Office 365, Azure)
  5. Microsoft has its own “kill zone” for startups (196 acquisitions since 1994)
  6. Microsoft operates a search engine that preferences its own content over third-party content (i.e., Bing)
  7. Microsoft operates a platform that moderates user-generated content (i.e., LinkedIn)

To be clear, this is not to say that an antitrust case against Microsoft is as strong as the case against the others. Rather, it is to say that the cases against the Big Four on these dimensions are as weak as the case against Microsoft, as I will show below.

Big is bad

Tim Wu published a book last year arguing for more vigorous antitrust enforcement — including against Big Tech — called “The Curse of Bigness.” As you can tell by the title, he argues, in essence, for a return to the bygone era of “big is bad” presumptions. In his book, Wu mentions “Microsoft” 29 times, but only in the context of its 1990s antitrust case. On the other hand, Wu has explicitly called for antitrust investigations of Amazon, Facebook, and Google. It’s unclear why big should be considered bad when it comes to the latter group but not when it comes to Microsoft. Maybe bigness isn’t actually a curse, after all.

As the saying goes in antitrust, “Big is not bad; big behaving badly is bad.” This aphorism arose to counter erroneous reasoning during the era of structure-conduct-performance when big was presumed to mean bad. Thanks to an improved theoretical and empirical understanding of the nature of the competitive process, there is now a consensus that firms can grow large either via superior efficiency or by engaging in anticompetitive behavior. Size alone does not tell us how a firm grew big — so it is not a relevant metric.

Dominance in narrowly-defined markets

Critics of Google say it has a monopoly on search and critics of Facebook say it has a monopoly on social networking. Microsoft is similarly dominant in at least a few narrowly-defined markets, including desktop operating systems (Windows has a 78% market share globally): 

Source: StatCounter

Microsoft is also dominant in the “professional networking platform” market after its acquisition of LinkedIn in 2016. And the legacy tech giant is still the clear leader in the “paid productivity software” market. (Microsoft’s Office 365 revenue is roughly 10x Google’s G Suite revenue).

The problem here is obvious: these are overly narrow market definitions for conducting an antitrust analysis. Is it true that Facebook’s platforms are the only services that can connect you with your friends? Should we really restrict the productivity market to paid-only options (as the EU similarly did in its Android decision) when so many free options are available? These questions are laughable. Proper market definition requires asking whether a hypothetical monopolist could profitably impose a small but significant and non-transitory increase in price (SSNIP). If not (which is likely the case in the narrow markets above), then we should employ a broader market definition in each case.
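The SSNIP logic can be made concrete with a back-of-the-envelope critical loss calculation: how many sales can the hypothetical monopolist lose before a 5% price increase stops being profitable? The margin and switching figures below are purely illustrative assumptions, not data about any of the markets discussed here.

```python
# Critical loss analysis: a rough numerical companion to the SSNIP test.
# All figures are illustrative assumptions, not real market data.

def critical_loss(ssnip: float, margin: float) -> float:
    """Fraction of unit sales the hypothetical monopolist can afford to
    lose before a price increase of `ssnip` becomes unprofitable."""
    return ssnip / (ssnip + margin)

ssnip = 0.05        # the standard 5% price increase
margin = 0.40       # assumed contribution margin in the candidate market
actual_loss = 0.25  # assumed share of buyers who would switch to free substitutes

cl = critical_loss(ssnip, margin)
print(f"critical loss: {cl:.1%}, predicted loss: {actual_loss:.1%}")
if actual_loss > cl:
    print("price rise unprofitable -> the candidate market is drawn too narrowly")
else:
    print("price rise profitable -> the candidate market may be properly defined")
```

If buyers would defect to free alternatives at anything like the rate assumed here, the hypothetical monopolist’s price increase fails, and the “paid-only” market definition fails with it.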

Simultaneously operating and competing on a platform

Elizabeth Warren likes to say that if you own a platform, then you shouldn’t both be an umpire and have a team in the game. Let’s put aside the problems with that flawed analogy for now. What she means is that you shouldn’t both run the platform and sell products, services, or apps on that platform (because it’s inherently unfair to the other sellers). 

Warren’s solution to this “problem” would be to create a regulated class of businesses called “platform utilities”: “companies with an annual global revenue of $25 billion or more and that offer to the public an online marketplace, an exchange, or a platform for connecting third parties.” Microsoft’s revenue last quarter alone was $32.5 billion, so it easily clears the $25 billion annual threshold. And Windows obviously qualifies as “a platform for connecting third parties.”

Just as in mobile operating systems, desktop operating systems are compatible with third-party applications. These third-party apps can be free (e.g., iTunes) or paid (e.g., Adobe Photoshop). Of course, Microsoft also makes apps for Windows (e.g., Word, PowerPoint, Excel, etc.). But the more you think about the technical details, the blurrier the line between the operating system and applications becomes. Is the browser an add-on to the OS or a part of it (as Microsoft Edge appears to be)? The most deeply-embedded applications in an OS are simply called “features.”

Even though Warren hasn’t explicitly said that her plan would cover Microsoft, it almost certainly would. She initially left Apple out of the Medium post announcing her policy, only to tell a journalist later that the iPhone maker would also be prohibited from producing its own apps. What Warren fails to acknowledge is that trying to police the line between a first-party platform and third-party applications would be a nightmare for companies and regulators alike, likely leading to less innovation and higher prices for consumers (as they attempt to rebuild their previous bundles).

Leveraging dominance from one market into another

The core critique in Lina Khan’s “Amazon’s Antitrust Paradox” is that the very structure of Amazon itself is what leads to its anticompetitive behavior. Khan argues (in spite of the data) that Amazon uses profits in some lines of business to subsidize predatory pricing in other lines of businesses. Furthermore, she claims that Amazon uses data from its Amazon Web Services unit to spy on competitors and snuff them out before they become a threat.

Of course, this is similar to the theory of harm in Microsoft’s 1990s antitrust case: that the desktop giant was leveraging its monopoly in the operating system market into the browser market. Why don’t we hear the same concern today about Microsoft? As with Amazon and Google, you could uncharitably describe Microsoft as extending its tentacles into as many sectors of the economy as possible, from operating systems and cloud computing to gaming and professional networking — and the Big Four compete in many of these same markets.

What these potential antitrust harms leave out are the clear consumer benefits from bundling and vertical integration. Microsoft’s relationships with customers in one market might make it the most efficient vendor in related — but separate — markets. It is unsurprising, for example, that Windows customers would also frequently be Office customers. Furthermore, the zero marginal cost nature of software makes it an ideal product for bundling, which redounds to the benefit of consumers.

The “kill zone” for startups

In a recent article for The New York Times, Tim Wu and Stuart A. Thompson criticize Facebook and Google for the number of acquisitions they have made. They point out that “Google has acquired at least 270 companies over nearly two decades” and “Facebook has acquired at least 92 companies since 2007”, arguing that allowing such a large number of acquisitions to occur is conclusive evidence of regulatory failure.

Microsoft has made 196 acquisitions since 1994, but it receives no mention in the NYT article (or in most of the discussion around supposed “kill zones”). Yet acquisitions by Microsoft, Facebook, or Google are, in general, not problematic. They provide a crucial channel for liquidity in the venture capital and startup communities (the other channel being IPOs). According to the latest data from Orrick and Crunchbase, between 2010 and 2018 there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion.

By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. Nor would making it harder for startups to be acquired lead to more venture capital investment (and thus more IPOs); recent research by Gordon M. Phillips and Alexei Zhdanov suggests the opposite. The researchers show that “the passage of a pro-takeover law in a country is associated with more subsequent VC deals in that country, while the enactment of a business combination antitakeover law in the U.S. has a negative effect on subsequent VC investment.”
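The scale difference between the two exit channels is worth spelling out. Plugging in the Orrick/Crunchbase and Ritter figures cited above:

```python
# Startup exit liquidity, 2010-2018, using the figures cited above.
acq_count, acq_value = 21_844, 1.193e12   # tech startup acquisitions (Orrick/Crunchbase)
ipo_count, ipo_value = 331, 649.6e9       # tech IPOs (Jay R. Ritter)

avg_acq = acq_value / acq_count           # roughly $55M per acquisition
avg_ipo = ipo_value / ipo_count           # roughly $2B per IPO

print(f"acquisitions: {acq_count:,} deals totaling ${acq_value / 1e12:.3f}T")
print(f"IPOs: {ipo_count:,} listings totaling ${ipo_value / 1e9:.1f}B")
print(f"acquisitions outnumber IPOs roughly {acq_count / ipo_count:.0f} to 1")
```

Acquisitions outnumber IPOs by about 66 to 1 in this period, which is why choking off the acquisition channel would matter so much to early-stage investors.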

As investor and serial entrepreneur Leonard Speiser said recently, “If the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.” 

Search engine bias

Google is often accused of biasing its search results to favor its own products and services. The argument goes that if we broke them up, a thousand search engines would bloom and competition among them would lead to less-biased search results. While it is a very difficult — if not impossible — empirical question to determine what a “neutral” search engine would return, one attempt by Josh Wright found that “own-content bias is actually an infrequent phenomenon, and Google references its own content more favorably than other search engines far less frequently than does Bing.” 

The report goes on to note that “Google references own content in its first results position when no other engine does in just 6.7% of queries; Bing does so over twice as often (14.3%).” Arguably, users of a particular search engine might be more interested in seeing content from that company because they have a preexisting relationship. But regardless of how we interpret these results, it’s clear this is not a frequent phenomenon.

So why is Microsoft being left out of the antitrust debate now?

One potential reason Google, Facebook, and Amazon have been singled out for criticism of practices that seem common in the tech industry (and are often pro-consumer) may be the prevailing business model in the journalism industry. Google and Facebook are by far the largest competitors in the digital advertising market, and Amazon is expected to be the third-largest player by next year, according to eMarketer. As Ramsi Woodcock pointed out, news publications are also competing for those same advertising dollars, the type of conflict of interest that would usually warrant disclosure if, say, a journalist held stock in a company they were covering.

Or perhaps Microsoft has successfully avoided receiving the same level of antitrust scrutiny as the Big Four because it is neither primarily consumer-facing like Apple or Amazon nor does it operate a platform with a significant amount of political speech via user-generated content (UGC) like Facebook or Google (YouTube). Yes, Microsoft moderates content on LinkedIn, but the public does not get outraged when deplatforming merely prevents someone from spamming their colleagues with requests “to add you to my professional network.”

Microsoft’s core business is in the enterprise market, which allows it to sidestep the current debates about the supposed censorship of conservatives or unfair platform competition. To be clear, consumer-facing companies and platforms with user-generated content do not uniquely merit antitrust scrutiny. On the contrary, the benefits these platforms deliver to consumers are manifest. If this theory about why Microsoft has escaped scrutiny is correct, it means the public discussion of Big Tech and antitrust has thus far been driven by perception, not substance.


[This post is the fifth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by William Rinehart, Director of Technology and Innovation Policy at American Action Forum.]

Back in May, the New York Times published an op-ed by Chris Hughes, one of the founders of Facebook, in which he called for the breakup of his former firm. Hughes joins a growing chorus, including Senator Warren, Roger McNamee, and others who have called for the breakup of “Big Tech” companies. If Business Insider’s polling is correct, this chorus has been quite effective: nearly 40 percent of Americans now support breaking up Facebook.

Hughes’ position is perhaps understandable given his other advocacy activities. But it is worth bearing in mind that he was likely never particularly familiar with, or involved in, Facebook’s technical backend, business development, or sales. Rather, he was important in setting up the company’s public relations and feedback mechanisms. This matters because the technical and organizational challenges of breaking up Big Tech are enormous and underappreciated.

The Technics of Structural Remedies

As I explained at AAF last year,

Any trust-busting action would also require breaking up the company’s technology stack — a general name for the suite of technologies powering web sites. For example, Facebook developed its technology stack in-house to address the unique problems facing Facebook’s vast troves of data. Facebook created BigPipe to dynamically serve pages faster, Haystack to store billions of photos efficiently, Unicorn for searching the social graph, TAO for storing graph information, Peregrine for querying, and MysteryMachine to help with end-to-end performance analysis. The company also invested billions in data centers to quickly deliver video, and it split the cost of an undersea cable with Microsoft to speed up information travel. Where do you cut these technologies when splitting up the company?

That list, however, leaves out the company’s backend AI platform, known as Horizon. As Christopher Mims reported in the Wall Street Journal, Facebook put serious resources into creating Horizon, and the investment has paid off: about a fourth of the company’s engineers were using the platform in 2017, even though only 30 percent of them were experts in it. The system, as Joaquin Candela explained, is powerful because it was built to be “a very modular layered cake where you can plug in at any level you want.” As Mims was careful to explain, the platform was designed not to be “domain-specific”; it is highly modular. In other words, Horizon was meant to be useful across a range of complex problems and different domains. If WhatsApp and Instagram were separated from Facebook, who gets that asset? Does Facebook retain the core tech and then have to sell it at a regulated rate?

Lessons from Attempts to Manage Competition in the Tobacco Industry 

For all of the talk about breaking up Facebook and other tech companies, few really grasp just how lackluster this remedy has been in the past. The classic case to study isn’t AT&T or Standard Oil, but the American Tobacco Company.

The American Tobacco Company came about after a series of mergers in 1890 orchestrated by J.B. Duke. Then, between 1907 and 1911, the federal government filed and eventually won an antitrust lawsuit, which dissolved the trust into three companies. 

Duke was unique for his time because he worked to merge all of the previously independent companies into a single coherent firm. The organization that stood trial in 1907 was a modern company, organized around a functional structure. A single purchasing department managed all of the leaf purchasing. Tobacco processing plants were dedicated to specific products without any concern for their previous ownership. The American Tobacco Company was rational in a way few other companies were at the time.

These divisions were pulled apart over eight months. Factories, distribution and storage facilities, back offices and name brands were all separated by government fiat. It was a difficult task. As historian Allan M. Brandt details in “The Cigarette Century,”

It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.

So how did consumers fare after the breakup? Most research suggests that the breakup didn’t substantially change the markets where American Tobacco was involved. Real cigarette prices for consumers were stable, suggesting there wasn’t price competition. The three companies coming out of the suit earned the same profit from 1912 to 1949 as the original American Tobacco Company Trust earned in its heyday from 1898 to 1908. As for the upstream suppliers, the price paid to tobacco farmers didn’t change either. The breakup was a bust.  

The difficulties in breaking up American Tobacco stand in contrast to the methods employed with Standard Oil and AT&T. For them, the split was made along geographic lines. Standard Oil was broken into 34 regional companies: Standard Oil of New Jersey became Exxon, while Standard Oil of California changed its name to Chevron. In the same way, AT&T was broken up into the Regional Bell Operating Companies. Facebook has no such geographic lines.

The Lessons of the Past Applied to Facebook

Facebook combines elements of the two primary firm structures and is thus considered a “matrix form” company. While the American Tobacco Company employed a functional organization, the most common form of company organization today is the divisional form. This method of firm rationalization separates the company’s operational functions by product in order to optimize efficiencies. Under a divisional structure, each product is essentially a company unto itself: engineering, finance, sales, and customer service are all unified within one division, which sits separate from other divisions within the company. Like countless other tech companies, Facebook merges elements of the two forms. It relies upon flexible teams to solve problems that tend to cross the normal divisional and functional bounds. Communication and coordination are prioritized among teams, and Facebook invests heavily to ensure cross-company collaboration.

Advocates assume that undoing the WhatsApp and Instagram mergers will be easy, but there aren’t clean divisional lines within the company. Indeed, Facebook has been working for some time on a vast reengineering of its backend that, when completed later this year or in early 2020, will effectively merge all of the companies into one ecosystem. Attempting to dismember this ecosystem would almost certainly be disastrous: not just a legal nightmare, but a technical and organizational nightmare as well.

Much like with American Tobacco, any attempt to split WhatsApp and Instagram off from Facebook would probably fall flat, because government officials would have to create three regulated firms, each with essentially duplicative structures. As a result, the quality of services offered to consumers would likely be inferior to those available from the integrated firm. In other words, this would be a net loss to consumers.

Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.

Arguing that her proposal is “akin to a pollution tax,” Ms. Kim writes:

A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air.  But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.

It is true that negative externalities—costs imposed on people who are “external” to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.   

With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let’s consider each.

To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim’s view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?

Microsoft’s desire to employ high-skilled workers, and those employees’ desire to live near their work, conflicts with incumbent renters’ desire for low rent and lower-paid homebuyers’ desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower-paid homebuyers would be worse off.

But incumbent renters’ and lower-paid homebuyers’ insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower-paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?

As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.

A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.

When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory, etc.), it compares its cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called “marginal cost” from the additional activity is less than or equal to the “marginal benefit” it will receive, it will engage in the activity; otherwise, it won’t.  

When the business bears all the costs and benefits of its actions, this outcome is efficient. The costs of the inputs used in production are determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflective of consumers’ subjective valuation of the business’s additional product) in excess of its added cost (reflective of the value its inputs could create if deployed toward their next-best use), then producing more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is “allocatively efficient,” meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.

Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others.  Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.

Pigou’s idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
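The widget arithmetic can be spelled out directly, using the numbers from the example above ($5.00 price, $4.50 private cost, $1.00 external cost):

```python
# The widget example: private incentives vs. social efficiency,
# before and after a Pigouvian tax equal to the external cost.
price = 5.00          # marginal benefit of one more widget (its sale price)
private_cost = 4.50   # producer's own marginal cost
external_cost = 1.00  # pollution cost borne by the neighbors

social_cost = private_cost + external_cost    # $5.50: the true marginal cost

produces_untaxed = price >= private_cost      # True: privately profitable
socially_efficient = price >= social_cost     # False: $5.00 < $5.50

tax = external_cost                           # Pigouvian tax internalizes the harm
produces_taxed = price >= private_cost + tax  # False: incentives now aligned

print(produces_untaxed, socially_efficient, produces_taxed)
```

Without the tax, the producer makes a widget that destroys $0.50 of net value; with a tax equal to the external cost, his private calculation matches the social one and the widget goes unmade.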

When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies’ hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a “pecuniary” externality, a special sort of externality that does not generate inefficiency.

A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that’s the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There’s no inefficiency from the mere fact that buyers pay more.  Their loss is perfectly offset by a gain to sellers, and—here’s the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That’s allocatively efficient.
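The offsetting nature of a pecuniary externality can be shown with a toy rent calculation. The figures are invented for illustration, and quantity is held fixed to isolate the pure price effect:

```python
# A pure price increase transfers surplus; it does not destroy it.
old_rent, new_rent = 1500.0, 1800.0   # hypothetical monthly rents, before and after
units = 10_000                        # rental units, held fixed for illustration

renters_extra_cost = (new_rent - old_rent) * units       # what tenants lose per month
landlords_extra_revenue = (new_rent - old_rent) * units  # what owners gain per month

net_change_in_surplus = landlords_extra_revenue - renters_extra_cost
print(net_change_in_surplus)  # 0.0: a transfer, not a deadweight loss
```

Every dollar renters lose shows up as a dollar of landlord gain, which is precisely why the allocative-inefficiency rationale for a Pigouvian tax does not apply here.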

Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.

In the end, Ms. Kim’s pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.

For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.