Archives For bankruptcy

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Noah Phillips[1] (Commissioner of the U.S. Federal Trade Commission).]   

Never let a crisis go to waste, or so they say. In the past two weeks, some of the same people who sought to stop mergers and acquisitions during the bull market took the opportunity of the COVID-19 pandemic and the new bear market to call to ban M&A. On Friday, April 24th, Rep. David Cicilline proposed that a merger ban be included in the next COVID-19-related congressional legislative package.[2] By Monday, Senator Elizabeth Warren and Rep. Alexandria Ocasio-Cortez, warning of “predatory” M&A and private equity “vultures”, teamed up on a similar proposal.[3]

I’m all for stopping anticompetitive M&A that we cannot resolve. In the past few months alone, the Federal Trade Commission has been quite busy, suing to stop transactions in the hospital, e-cigarette, coal, body-worn camera, razor, and gene sequencing industries, and forcing deals to stop in the pharmaceutical, medical staffing, and consumer products spaces. But is a blanket ban, unprecedented in our nation’s history, warranted, now? 

The theory that the pandemic requires the government to shut down M&A goes something like this: the antitrust agencies are overwhelmed and cannot do the job of reviewing mergers under the Hart-Scott-Rodino (HSR) Act, which gives the U.S. antitrust agencies advance notice of certain transactions and 30 days to decide whether to seek more information about them.[4] That state of affairs will, in turn, invite a rush of companies looking to merge with minimal oversight, exacerbating the problem by flooding the premerger notification office (PNO) with new filings. Another version holds, along similar lines, that the precipitous decline in the market will trigger a merger “wave” in which “dominant corporations” and “private equity vultures” gobble up defenseless small businesses. Net result: anticompetitive transactions go unnoticed and unchallenged. That’s the theory, at least as it has been explained to me. The facts are different.

First, while the restrictions related to COVID-19 require serious adjustments at the antitrust agencies just as they do at workplaces across the country (we’re working from home, dealing with remote technology, and handling kids just like the rest), merger review continues. Since we started teleworking, the FTC has, among other things, challenged Altria’s $12.8 billion investment in JUUL’s e-cigarette business and resolved competitive concerns with GE’s sale of its biopharmaceutical business to Danaher and Ossur’s acquisition of a competing prosthetic limbs manufacturer, College Park. With our colleagues at the Antitrust Division of the Department of Justice, we announced a new e-filing system for HSR filings and temporarily suspended granting early termination. We sought voluntary extensions from companies. But, in less than two weeks, we were able to resume early termination—back to “new normal”, at least. I anticipate there may be additional challenges; and the FTC will assess constraints in real-time to deal with further disruptions. But we have not sacrificed the thoroughness of our investigations; and we will not.

Second, there is no evidence of a merger “wave”, or that the PNO is overwhelmed with HSR filings. To the contrary, according to Bloomberg, monthly M&A volume hit rock bottom in April – the lowest since 2004. As of last week, the PNO estimates a nearly 60% reduction in HSR reported transactions during the past month, compared to the historical average. Press reports indicate that M&A activity is down dramatically because of the crisis. Xerox recently announced it was suspending its hostile bid for Hewlett-Packard ($30 billion); private equity firm Sycamore Partners announced it is walking away from its takeover of Victoria’s Secret ($525 million); and Boeing announced it is backing out of its merger with Embraer ($4.2 billion) — just a few examples of companies, large corporations and private equity firms alike, stopping M&A on their own. (The market is funny like that.)

Slowed M&A during a global pandemic and economic crisis is exactly what you would expect. The financial uncertainty facing companies lowers shareholder and board confidence to dive into a new acquisition or sale. Financing is harder to secure. Due diligence is postponed. Management meetings are cancelled. Agreeing on price is another big challenge. The volatility in stock prices makes valuation difficult, and lessens the value of equity used to acquire. Cash is needed elsewhere, such as paying workers and keeping operations running. Lack of access to factories and other assets as a result of travel restrictions and stay-at-home orders similarly makes valuation harder. Management can’t even get in a room to negotiate and hammer out the deal because of social distancing (driving a hard bargain on Zoom may not be the same).

Experience bears out those expectations. Consider our last bear market, the financial crisis that took place over a decade ago. Publicly available FTC data show the number of HSR reported transactions dropped off a cliff. During fiscal year 2009, the height of the crisis, HSR reported transactions were down nearly 70% compared to just two years earlier, in fiscal year 2007. Not surprising.

Source: https://www.ftc.gov/site-information/open-government/data-sets

Nor should it be surprising that the current crisis, with all its uncertainty and novelty, appears itself to be slowing down M&A.

So, the antitrust agencies are continuing merger review, and adjusting quickly to the new normal. M&A activity is down, dramatically, on its own. That makes the pandemic an odd excuse to stop M&A. Maybe the concern wasn’t really about the pandemic in the first place? The difference in perspective may depend on one’s general view of the value of M&A. If you think mergers are mostly (or all) bad, and you discount the importance of the market for corporate control, the cost to stopping them all is low. If you don’t, the cost is high.[5]

As a general matter, decades of research and experience tell us that the vast majority of mergers are either pro-competitive or competitively neutral.[6] But M&A, even dramatically reduced, also has an important role to play in a moment of economic adjustment. It helps allocate assets in an efficient manner, for example giving those with the wherewithal to operate resources (think companies, or plants) an opportunity that others may be unable to utilize. Consumers benefit if a merger leads to the delivery of products or services that one company could not efficiently provide on its own, and from the innovation and lower prices that better management and integration can provide. Workers benefit, too, as they remain employed by going concerns.[7] It serves no good, including for competition, to let companies that might live, die.[8]

M&A is not the only way in which market forces can help. The antitrust agencies have always recognized pro-competitive benefits to collaboration between competitors during times of crisis.  In 2005, after hurricanes Katrina and Rita, we implemented an expedited five-day review of joint projects between competitors aimed at relief and construction. In 2017, after hurricanes Harvey and Irma, we advised that hospitals could combine resources to meet the health care needs of affected communities and companies could combine distribution networks to ensure goods and services were available. Most recently, in response to the current COVID-19 emergency, we announced an expedited review process for joint ventures. Collaboration can be concerning, so we’re reviewing; but it can also help.

Our nation is going through an unprecedented national crisis, with a horrible economic component that is putting tens of millions out of work and causing a great deal of suffering. Now is a time of great uncertainty, tragedy, and loss; but also of continued hope and solidarity. While merger review is not the top-of-mind issue for many—and it shouldn’t be—American consumers stand to gain from pro-competitive mergers, during and after the current crisis. Those benefits would be wiped out with a draconian ‘no mergers’ policy during the COVID-19 emergency. Might there be anticompetitive merger activity? Of course, which is why FTC staff are working hard to vet potentially anticompetitive mergers and prevent harm to consumers. Let’s let them keep doing their jobs.


[1] The views expressed in this blog post are my own and do not necessarily reflect the views of the Federal Trade Commission or any other commissioner. An abbreviated version of this essay was previously published in the New York Times’ DealBook newsletter. Noah Phillips, The case against banning mergers, N.Y. Times, Apr. 27, 2020, available at https://www.nytimes.com/2020/04/27/business/dealbook/small-business-ppp-loans.html.

[2] The proposal would allow transactions only if a company is already in bankruptcy or is otherwise about to fail.

[3] The “Pandemic Anti-Monopoly Act” proposes a merger moratorium on (1) firms with over $100 million in revenue or market capitalization of over $100 million; (2) PE firms and hedge funds (or entities that are majority-owned by them); (3) businesses that have an exclusive patent on products related to the crisis, such as personal protective equipment; and (4) all HSR reportable transactions.

[4] Hart-Scott-Rodino Antitrust Improvements Act of 1976, 15 U.S.C. § 18a. The antitrust agencies can challenge transactions after they happen, but they are easier to stop beforehand; and Congress designed HSR to give us an opportunity to do so.

[5] Whatever your view, the point is that the COVID-19 crisis doesn’t make sense as a justification for banning M&A. If ban proponents oppose M&A generally, they should come out and say that. And they should level with the public about just how much they propose to ban. The specifics of the proposals are beyond the scope of this essay, but it’s worth noting that the “large companies [gobbling] up . . . small businesses” of which Sen. Warren warns include any firm with $100 million in annual revenue and anyone making a transaction reportable under HSR. $100 million seems like a lot of money to many of us, but the Ohio State University National Center for the Middle Market defines a mid-sized company as having annual revenues between $10 million and $1 billion. Many if not most of the transactions that would be banned look nothing like the kind of acquisitions ban proponents are describing.

[6] As far back as the 1980s, the Horizontal Merger Guidelines reflected this idea, stating: “While challenging competitively harmful mergers, the Department [of Justice Antitrust Division] seeks to avoid unnecessary interference with the larger universe of mergers that are either competitively beneficial or neutral.” Horizontal Merger Guidelines (1982); see also Hovenkamp, Appraising Merger Efficiencies, 24 Geo. Mason L. Rev. 703, 704 (2017) (“we tolerate most mergers because of a background, highly generalized belief that most—or at least many—do produce cost savings or improvements in products, services, or distribution”); Andrade, Mitchell & Stafford, New Evidence and Perspectives on Mergers, 15 J. ECON. PERSPECTIVES 103, 117 (2001) (“We are inclined to defend the traditional view that mergers improve efficiency and that the gains to shareholders at merger announcement accurately reflect improved expectations of future cash flow performance.”).

[7] Jointly with our colleagues at the Antitrust Division of the Department of Justice, we issued a statement last week affirming our commitment to enforcing the antitrust laws against those who seek to exploit the pandemic to engage in anticompetitive conduct in labor markets.

[8] The legal test to make such a showing for an anti-competitive transaction is high. Known as the “failing firm defense”, it is available only to firms that can demonstrate their fundamental inability to compete effectively in the future. The Horizontal Merger Guidelines set forth three elements to establish the defense: (1) the allegedly failing firm would be unable to meet its financial obligations in the near future; (2) it would not be able to reorganize successfully under Chapter 11; and (3) it has made unsuccessful good-faith efforts to elicit reasonable alternative offers that would keep its tangible and intangible assets in the relevant market and pose a less severe danger to competition than the actual merger. Horizontal Merger Guidelines § 11; see also Citizen Publ’g v. United States, 394 U.S. 131, 137-38 (1969). The proponent of the failing firm defense bears the burden to prove each element, and failure to prove a single element is fatal. In re Otto Bock, FTC No. 171-0231, Docket No. 9378 Commission Opinion (Nov. 2019) at 43; see also Citizen Publ’g, 394 U.S. at 138-39.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramsi Woodcock (Assistant Professor of Law, University of Kentucky; Assistant Professor of Management, Gatton College of Business and Economics).]

Specialists know that the antitrust courses taught in law schools and economics departments have an alter ego in business curricula: the course on business strategy. The two courses cover the same material, but from opposite perspectives. Antitrust courses teach how to end monopolies; strategy courses teach how to construct and maintain them.

Strategy students go off and run businesses, and antitrust students go off and make government policy. That is probably the proper arrangement if the policy the antimonopolists make is domestic. We want the domestic economy to run efficiently, and so we want domestic policymakers to think about monopoly—and its allocative inefficiencies—as something to be discouraged.

The coronavirus, and the shortages it has caused, have shown us that putting the antimonopolists in charge of international policy is, by contrast, a very big mistake.

That is because we do not yet have a world government. America’s position, in relation to the rest of the world, is therefore more akin to that of a business navigating a free market than it is to a government seeking to promote efficient interactions among the firms that it governs. To flourish, America must engage in international trade with a view to creating and maintaining monopoly positions for itself, rather than eschewing them in the interest of realizing efficiencies in the global economy. Which is to say: we need strategists, not antimonopolists.

For the global economy is not America, and there is no guarantee that competitive efficiencies will redound to America’s benefit rather than to her competitors’. Absent a world government, other countries will pursue monopoly regardless of what America does, and unless America acts strategically to build and maintain economic power, America will eventually occupy a position of commercial weakness, with all of the consequences for national security that implies.

When Antimonopolists Make Trade Policy

The free traders who have run American economic policy for more than a generation are antimonopolists playing on a bigger stage. Like their counterparts in domestic policy, they are loyal in the first instance only to the efficiency of the market, not to any particular trader. They are content to establish rules of competitive trading—the antitrust laws in the domestic context, the World Trade Organization in the international context—and then to let the chips fall where they may, even if that means allowing present or future adversaries to, through legitimate means, build up competitive advantages that the United States is unable to overcome.

Strategy is consistent with competition when markets are filled with traders of atomic size, for then no amount of strategy can deliver a competitive advantage to any trader. But global markets, more even than domestic markets, are filled with traders of macroscopic size. Strategy then requires that each trader seek to gain and maintain advantages, undermining competition. The only way antimonopolists could induce the trading behemoth that is America to behave competitively, and to let the chips fall where they may, was to convince America voluntarily to give up strategy, to sacrifice self-interest on the altar of efficient markets.

And so they did.

Thus when the question arose whether to permit American corporations to move their manufacturing operations overseas, or to permit foreign companies to leverage their efficiencies to dominate a domestic industry and ensure that 90% of domestic supply would be imported from overseas, the answer the antimonopolists gave was: “yes.” Because it is efficient. Labor abroad is cheaper than labor at home, and transportation costs low, so efficiency requires that production move overseas, and our own resources be reallocated to more competitive uses.

This is the impeccable logic of static efficiency, of general equilibrium models allocating resources optimally. But it is instructive to recall that the men who perfected this model were not trying to describe a free market, much less international trade. They were trying to create a model that a central planner could use to allocate resources to a state’s subjects. What mattered to them in building the model was the good of the whole, not any particular part. And yet it is to a particular part of the global whole that the United States government is dedicated.

The Strategic Trader

Students of strategy would have taken a very different approach to international trade. Strategy teaches that markets are dynamic, and that businesses must make decisions based not only on the market signals that exist today, but on those that can be made to exist in the future. For the successful strategist, unlike the antimonopolist, identifying a product for which consumers are willing to pay the costs of production is not alone enough to justify bringing the product to market. The strategist must be able to secure a source of supply, or a distribution channel, that competitors cannot easily duplicate, before the strategist will enter.

Why? Because without an advantage in supply, or distribution, competitors will duplicate the product, compete away any markups, and leave the strategist no better off than if he had never undertaken the project at all. Indeed, he may be left bankrupt, if he has sunk costs that competition prevents him from recovering. Unlike the economist, the strategist is interested in survival, because he is a partisan of a part of the market—himself—not the market entire. The strategist understands that survival requires power, and all power rests, to a greater or lesser degree, on monopoly.

The strategist is not therefore a free trader in the international arena, at least not as a matter of principle. The strategist understands that trading from a position of strength can enrich, and trading from a position of weakness can impoverish. And to occupy that position of strength, America must, like any monopolist, control supply. Moreover, in the constantly innovating markets that characterize industrial economies, markets in which innovation emerges from learning by doing, control over physical supply translates into control over the supply of inventions itself.

The strategist does not permit domestic corporations to offshore manufacturing in any market in which the strategist wishes to participate, because that is unsafe: foreign countries could use control over that supply to extract rents from America, to drive domestic firms to bankruptcy, and to gain control over the supply of inventions.

And, as the new trade theorists belatedly discovered, offshoring prevents the development of the dense, geographically-contiguous, supply networks that confer power over whole product categories, such as the electronics hub in Zhengzhou, where iPhone-maker Foxconn is located.

Or the pharmaceutical hub in Hubei.

Coronavirus and the Failure of Free Trade

Today, America is unprepared for the coming wave of coronavirus cases because the antimonopolists running our trade policy do not understand the importance of controlling supply. There is a shortage of masks because China makes half of the world’s masks, and the Chinese have cut off supply, the state having barred even non-Chinese companies that offshored mask production from shipping home masks for which American customers have paid. Not only that, but in January China bought up most of the world’s existing supply of masks, with free-trade-obsessed governments standing idly by as the clock ticked down to their own domestic outbreaks.

New York State, which lies at the epicenter of the crisis, has agreed to pay five times the market price for foreign supply. That’s not because the cost of making masks has risen, but because sellers are rationing with price. Which is to say: using their control over supply to beggar the state. Moreover, domestic mask makers report that they cannot ramp up production because of a lack of supply of raw materials, some of which are actually made in Wuhan, China. That’s the kind of problem that does not arise when restrictions on offshoring allow manufacturing hubs to develop domestically.

But a shortage of masks is just the beginning. Once a vaccine is developed, the race will be on to manufacture it, and America controls less than 30% of the manufacturing facilities that supply pharmaceuticals to American markets. Indeed, just about the only virus-relevant industries in which we do not have a real capacity shortage today are food and toilet paper, panic buying notwithstanding. Because, fortunately for us, antimonopolists could not find a way to offshore California and Oregon. If they could have, they surely would have, since both agriculture and timber are labor-intensive industries.

President Trump’s failed attempt to buy a German drug company working on a coronavirus vaccine shows just how damaging free market ideology has been to national security: as Trump should have anticipated given his resistance to the antimonopolists’ approach to trade, the German government nipped the deal in the bud. When an economic agent has market power, the agent can pick its prices, or refuse to sell at all. Only in general equilibrium fantasy is everything for sale, and at a competitive price to boot.

The trouble is: American policymakers, perhaps more than those in any other part of the world, continue to act as though that fantasy were real.

Failures Left and Right

America’s coronavirus predicament is rich with intellectual irony.

Progressives resist free trade ideology, largely out of concern for the effects of trade on American workers. But they seem not to have realized that in doing so they are actually embracing strategy, at least for the benefit of labor.

Yet progressives simultaneously reject the approach to industrial organization economics that underpins strategic thinking in business: Joseph Schumpeter’s theory of creative destruction, which holds that strategic behavior by firms seeking to achieve and maintain monopolies is ultimately good for society, because it leads to a technological arms race as firms strive to improve supply, distribution, and indeed product quality, in ways that competitors cannot reproduce.

Even if progressives choose to reject Schumpeter’s argument that strategy makes society better off—a proposition that is particularly suspect at the international level, where the availability of tanks ensures that the creative destruction is not always creative—they have much to learn from his focus on the economics of survival.

By the same token, conservatives embrace Schumpeter in arguing for less antitrust enforcement in domestic markets, all the while advocating free trade at the international level and savaging governments for using dumping and tariffs—which is to say, the tools of monopoly—to strengthen their trading positions. It is deeply peculiar to watch the coronavirus expose conservative economists as pie-in-the-sky internationalists. And yet as the global market for coronavirus necessities seizes up, the ideology that urged us to dispense with producing these goods ourselves, out of faith that we might always somehow rely on the support of the rest of the world, provided through the medium of markets, looks pathetically naive.

The cynic might say that inconsistency has snuck up on both progressives and conservatives because each remains too sympathetic to a different domestic constituency.

Dodging a Bullet

America is lucky that a mere virus exposed the bankruptcy of free trade ideology. Because war could have done that instead. It is difficult to imagine how a country that cannot make medical masks—much less a MacBook—would be able to respond effectively to a sustained military attack from one of the many nations that are closing the technological gap long enjoyed by the United States.

The lesson of the coronavirus is: strategy, not antitrust.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Mark Jamison (Director and Gunter Professor, Public Utility Research Center, University of Florida, and Visiting Scholar with the American Enterprise Institute).]

The economic impacts of the coronavirus pandemic, and of the government responses to it, are significant and could be staggering, especially for small businesses. Goldman Sachs estimates a potential 24% drop in US GDP for the second quarter of 2020 and a 4% decline for the year. Its small business survey found that a little over half of small businesses might last for less than three months in this economic downturn. Small business employs nearly 60 million people in the US. How many will be out of work this year is anyone’s guess, but the number will be large.

What should small businesses do? First, focus on staying in business, because their customers and employees need them to be healthy when the economy begins to recover. That will certainly mean slowing business activity, decreasing payroll to manage losses, and managing liquidity.

Second, look for opportunities in the present crisis. Consumers are slowing their spending, but they will spend for things they still need and need now. And there will be new demand for things they didn’t need much before, like more transportation of food, support for health needs, and crisis management. Which business sectors will recover first? Those whose downturns represented delayed demand, such as postponed repairs and business travel, rather than evaporated demand, such as luxury items.

Third, they can watch for and take advantage of government support programs. Many programs simply provide low-cost loans, which do not solve the small-business problem of customers not buying: Borrowing money to meet payroll for idle workers simply delays business closure and makes bankruptcy more likely. But some grants and tax breaks are under discussion (see below).

Fourth, they can renegotiate loans and contracts. One of the mistakes lenders made in the past was holding stressed borrowers’ feet to the fire, which only led to more, and more costly, loan defaults. At least some lenders have learned. So lenders and some suppliers might be willing to receive some payments rather than none.

What should government do? Unfortunately, Washington seems to think that so-called stimulus spending is the cure for any economic downturn. This isn’t true. I’ll explain why below, but let me first get to what is more productive. 

The major problem is that customers are unable to buy and businesses are unable to produce because of the responses to the coronavirus. Sometimes transactions are impossible, but there are times when buying and selling are simply made more costly by the pandemic and the government responses. So government support for the economy should address these problems directly.

For buyers, government officials should recognize that buying is hard and costly for them. So policies should include improving their abilities to buy during this time. Sales tax holidays, especially on healthcare, food, and transportation would be helpful. 

Waivers of postal fees would make e-commerce cheaper. And temporary support for fixed costs, such as mortgages, would free money for other things. Tax breaks for the gig economy would lower service costs and provide new employment opportunities. And tax credits for durables like home improvements would lower costs of social distancing.

But the better opportunities for government impact are on the business side because small business affects both the supply of services and the incomes of consumers.

For small business policy, my American Enterprise Institute colleagues Glenn Hubbard and Michael Strain have done the most thoughtful work that I have seen. They note that the problems for small businesses are that they do not have enough business activity to meet payroll and other bills. This means that “(t)he goal should be to replace a large portion of the revenue (not just the payroll expenses) those businesses would have generated in the absence of being shut down due to the coronavirus.” 

They suggest policies to replace 80 percent of the small business revenue loss. How? By providing grants in the form of government-backed commercial loans that are forgiven if the business continues and maintains payroll, subject to workers being allowed to quit if they find better opportunities. 
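The core of the Hubbard–Strain design reduces to simple arithmetic: size the loan at 80 percent of lost revenue, then forgive it if the firm stays open and keeps its workers. A minimal illustrative sketch of that logic (the function names, figures, and the simplified forgiveness condition are hypothetical, not drawn from the details of their proposal):

```python
# Illustrative sketch of the Hubbard-Strain idea: a government-backed loan
# sized to replace 80% of lost revenue, forgiven if the business survives
# and maintains payroll. All names and numbers here are hypothetical.

def revenue_replacement_loan(baseline_revenue: float,
                             current_revenue: float,
                             replacement_rate: float = 0.80) -> float:
    """Loan sized to replace a share of lost revenue (never negative)."""
    loss = max(baseline_revenue - current_revenue, 0.0)
    return replacement_rate * loss

def is_forgiven(maintained_payroll: bool, still_operating: bool) -> bool:
    """The loan converts to a grant only if the firm keeps operating
    and keeps its workers on payroll."""
    return maintained_payroll and still_operating

# A shop that normally earns $50,000/month but now earns $10,000
loan = revenue_replacement_loan(50_000, 10_000)
print(loan)  # 32000.0 -- 80% of the $40,000 shortfall
```

The point of replacing revenue rather than just payroll is visible in the sketch: the loan covers rent, suppliers, and other fixed obligations in addition to wages, which is what keeps the doors open.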

What else might work? Tax breaks that lower business costs. These can be breaks in payroll taxes, marginal income tax rates, equipment purchases, permitting, etc., including tax holidays. Carrying back current business losses would trigger tax refunds that improve business finances.

One of the least useful ideas for small businesses is interest-free loans. These might be great for large businesses that are largely managing their financial positions. But such loans fail to address the basic small business problem of keeping the doors open when customers aren’t buying.

Finally, why doesn’t traditional stimulus work, even in other times of economic downturn? Traditional spending-based stimulus assumes that the economic problem is that people want to build things, but not buy them. That’s not a very good assumption, especially today, when the problems are the higher cost of buying, or perhaps the impossibility of buying with social distancing, and the higher costs of doing business. Keeping businesses in business is the key to supporting the economy.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Brent Skorup (Senior Research Fellow, Mercatus Center, George Mason University).]

One of the most visible economic effects of the COVID-19 spread is the decrease in airline customers. Alec Stapp alerted me to the recent outrage over “ghost flights,” where airlines fly nearly empty planes to maintain their “slots.” 

The airline industry is unfortunately in economic freefall as governments prohibit and travelers pull back on air travel. When the health and industry crises pass, lawmakers will have an opportunity to evaluate the mistakes of the past when it comes to airport congestion and airspace design.

This issue of ghost flights pops up occasionally and offers a lesson in the problems with government rationing of public resources. In this case, the public resource is airport slots: designated times, say, 15 or 30 minutes, during which a plane may take off or land at an airport. (Last week US and EU regulators temporarily waived the use-it-or-lose-it rule for slots to mitigate the embarrassing cost and environmental damage caused by forcing airlines to fly empty planes.)

The slots at major hubs at peak times of day are extremely scarce: there are only so many hours in a day. Today, slot assignments are administratively rationed in a way that favors large, incumbent airlines. As the Wall Street Journal summarized last year,

For decades, airlines have largely divided runway access between themselves at twice-yearly meetings run by the IATA (an airline trade group).

Airport slots are property. They’re valuable. They can be defined, partitioned, leased, put up as collateral, and, in the US, they can be sold and transferred within or between airports.

You just can’t call slots property. Many lawmakers, regulators, and airline representatives refuse to acknowledge the obvious. Stating that slots are valuable public property would make clear the anticompetitive waste that the 40-year slot assignment experiment generates. 

Like many government programs, slot rationing in the US began decades ago as a temporary response to congestion at New York airports. Slots are currently used to ration access at LGA, JFK, and DCA. And while these airports are not formally slot-controlled, the FAA also rations access at four other busy airports: ORD, EWR, LAX, and SFO.

Fortunately, cracks are starting to form. In 2008, at the tail end of the Bush administration, the FAA proposed to auction some slots in New York City’s three airports. The plan was delayed by litigation from incumbent airlines and an adverse finding from the GAO. With a change in administration, the Obama FAA rescinded the plan in 2009.

Before the Obama FAA rescission, the mask slipped a bit in the GAO’s criticism of the slot auction plan:

FAA’s argument that slots are property proves too much—it suggests that the agency has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

Gulp.

Though the GAO helped scuttle the plan, the damage had been done. The idea has now entered public policy discourse: giving away valuable public property is precisely what’s going on.

The implicit was made explicit in 2011 when, despite spiking the Bush FAA plan, the Obama FAA auctioned two dozen high-value slots. (The reversal, and the lack of controversy over it, is puzzling to me.) Delta and US Airways wanted to swap some 160 slots at New York and DC airports. As a condition of the mega-swap, the Obama FAA required them to divest 24 slots at those popular airports, which the agency auctioned to new entrants. Seven low-fare airlines bid in the auction, and JetBlue and WestJet won the divested slots, paying about $90 million combined.

The older fictions are rapidly eroding. There is an active secondary market in slots in some nations, and when prices are released it becomes clear that the legacy rationing amounts to set-asides of public property for insiders. In 2016, for instance, it leaked that an airline paid £58 million for a pair of take-off and landing slots at Heathrow. Other slot sales are in the tens of millions of dollars.

The 2011 FAA auctions and the loosening of rules globally around slot sales signal that the competitive benefits of slot markets are too obvious to ignore. Competition from new entry drives down airfares and increases the number of flights.

For instance, a few months ago researchers used a booking app to scour 50 trillion flight itineraries to see new entrants’ effect on airline ticket prices between 2017 and 2019. As the Wall Street Journal reported, the entry of a low-fare carrier reduced ticket prices by 17% on average. The bigger effect was on output: new entry led to a 30% year-over-year increase in flights.

It’s becoming harder to justify the legacy view, which allows incumbent airlines to dominate slot allocations via international conferences and national regulations that require “grandfathered” slot usage. In a separate article last year, the Wall Street Journal reported that airlines are reluctantly ceding more power to airports in the assignment of slots. This is another signal in the long-running tug-of-war between airports and airlines. Airports generally want to open slots to new competitors; incumbent airlines do not.

The reason for the change of heart? The Journal says,

Airlines and airports reached the deal in part because of concerns governments should start to sell slots.

Gulp. Ghost flights are a government failure: a rational response by airlines to governments withholding the benefits of property from them. The slot rationing system encourages flying uneconomical flights, smaller planes, and excess carbon emissions. The COVID-19 crisis allowed the public a glimpse of the dysfunctional system. It won’t be easy, but aviation regulators worldwide need to assess slots policy and airspace access before the administrative rationing system spreads to the emerging urban air mobility and drone delivery markets.

Since the LabMD decision, in which the Eleventh Circuit Court of Appeals told the FTC that its orders were unconstitutionally vague, the FTC has been put on notice that it needs to reconsider how it develops and substantiates its claims in data security enforcement actions brought under Section 5. 

Thus, on January 6, the FTC announced on its blog that it will have “New and improved FTC data security orders: Better guidance for companies, better protection for consumers.” However, the changes the Commission highlights address only a small part of what we have previously criticized when it comes to their “common law” of data security (see here and here).

While the new orders do list more specific requirements to help explain what the FTC believes constitutes a “comprehensive data security program,” there is still no legal analysis in either the orders or the complaints that would give companies fair notice of what the law requires. Furthermore, nothing about the underlying FTC process has changed, which means there is still enormous pressure for companies to settle rather than litigate the contours of what “reasonable” data security practices look like. Thus, despite the Commission’s optimism, the recent orders and complaints do little to nothing to remedy the problems that plague the Commission’s data security enforcement program.

The changes

In his blog post, the director of the Bureau of Consumer Protection at the FTC describes how new orders in data security enforcement actions are more specific, with one of the main goals being more guidance to businesses trying to follow the law.

Since the early 2000s, our data security orders had contained fairly standard language. For example, these orders typically required a company to implement a comprehensive information security program subject to a biennial outside assessment. As part of the FTC’s Hearings on Competition and Consumer Protection in the 21st Century, we held a hearing in December 2018 that specifically considered how we might improve our data security orders. We were also mindful of the 11th Circuit’s 2018 LabMD decision, which struck down an FTC data security order as unenforceably vague.

Based on this learning, in 2019 the FTC made significant improvements to its data security orders. These improvements are reflected in seven orders announced this year against an array of diverse companies: ClixSense (pay-to-click survey company), i-Dressup (online games for kids), DealerBuilt (car dealer software provider), D-Link (Internet-connected routers and cameras), Equifax (credit bureau), Retina-X (monitoring app), and Infotrax (service provider for multilevel marketers)…

[T]he orders are more specific. They continue to require that the company implement a comprehensive, process-based data security program, and they require the company to implement specific safeguards to address the problems alleged in the complaint. Examples have included yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption. These requirements not only make the FTC’s expectations clearer to companies, but also improve order enforceability.

Why the FTC’s data security enforcement regime fails to provide fair notice or develop law (and is not like the common law)

While these changes are long overdue, they are just one step toward much-needed process reform at the FTC in how it prosecutes cases under its unfairness authority, particularly in the realm of data security. It’s helpful to understand exactly why the historical failures of the FTC’s process are problematic in order to understand why the changes it is undertaking are insufficient.

For instance, Geoffrey Manne and I previously highlighted the various ways the FTC’s data security consent order regime fails in comparison with the common law:

In Lord Mansfield’s characterization, “the common law ‘does not consist of particular cases, but of general principles, which are illustrated and explained by those cases.’” Further, the common law is evolutionary in nature, with the outcome of each particular case depending substantially on the precedent laid down in previous cases. The common law thus emerges through the accretion of marginal glosses on general rules, dictated by new circumstances. 

The common law arguably leads to legal rules with at least two substantial benefits—efficiency and predictability or certainty. The repeated adjudication of inefficient or otherwise suboptimal rules results in a system that generally offers marginal improvements to the law. The incentives of parties bringing cases generally means “hard cases,” and thus judicial decisions that have to define both what facts and circumstances violate the law and what facts and circumstances don’t. Thus, a benefit of a “real” common law evolution is that it produces a body of law and analysis that actors can use to determine what conduct they can undertake without risk of liability and what they cannot. 

In the abstract, of course, the FTC’s data security process is neither evolutionary in nature nor does it produce such well-defined rules. Rather, it is a succession of wholly independent cases, without any precedent, narrow in scope, and binding only on the parties to each particular case. Moreover it is generally devoid of analysis of the causal link between conduct and liability and entirely devoid of analysis of which facts do not lead to liability. Like all regulation it tends to be static; the FTC is, after all, an enforcement agency, charged with enforcing the strictures of specific and little-changing pieces of legislation and regulation. For better or worse, much of the FTC’s data security adjudication adheres unerringly to the terms of the regulations it enforces with vanishingly little in the way of gloss or evolution. As such (and, we believe, for worse), the FTC’s process in data security cases tends to reject the ever-evolving “local knowledge” of individual actors and substitutes instead the inherently limited legislative and regulatory pronouncements of the past. 

By contrast, real common law, as a result of its case-by-case, bottom-up process, adapts to changing attributes of society over time, largely absent the knowledge and rent-seeking problems of legislatures or administrative agencies. The mechanism of constant litigation of inefficient rules allows the common law to retain a generally efficient character unmatched by legislation, regulation, or even administrative enforcement. 

Because the common law process depends on the issues selected for litigation and the effects of the decisions resulting from that litigation, both the process by which disputes come to the decision-makers’ attention, as well as (to a lesser extent, because errors will be corrected over time) the incentives and ability of the decision-maker to render welfare-enhancing decisions, determine the value of the common law process. These are decidedly problematic at the FTC.

In our analysis, we found the FTC’s process to be wanting compared to the institution of the common law. The incentives of the administrative complaint process place relatively greater pressure on companies to settle data security actions brought by the FTC than they face from private litigants. This is because the FTC can use its investigatory powers as a public enforcer to bypass the normal discovery process to which private litigants are subject, and over which independent judges have authority.

In a private court action, plaintiffs can’t engage in discovery unless their complaint survives a motion to dismiss from the defendant. Discovery costs remain a major driver of settlements, so this important judicial review is necessary to make sure there is actually a harm present before putting those costs on defendants. 

Furthermore, the FTC can also bring cases in a Part III adjudicatory process which starts in front of an administrative law judge (ALJ) but is then appealable to the FTC itself. Former Commissioner Joshua Wright noted in 2013 that “in the past nearly twenty years… after the administrative decision was appealed to the Commission, the Commission ruled in favor of FTC staff. In other words, in 100 percent of cases where the ALJ ruled in favor of the FTC, the Commission affirmed; and in 100 percent of the cases in which the ALJ ruled against the FTC, the Commission reversed.” In other words, the FTC nearly always rules in favor of itself on appeal if the ALJ finds there is no case, as it did in LabMD. The combination of investigation costs before any complaint at all and the high likelihood of losing through several stages of litigation makes simply agreeing to a consent decree the intelligent business decision.

The results of this asymmetrical process show the FTC has not really been building a common law. In all but two cases (Wyndham and LabMD), the companies targeted for investigation by the FTC on data security enforcement have settled. We also noted how the FTC’s data security orders tended to be nearly identical from case to case, reflecting the standards of the FTC’s Safeguards Rule. Since the orders imposed nearly identical—and, as LabMD found, vague—remedies in each case, it cannot be said there was a common law developing over time.

What LabMD addressed and what it didn’t

In its decision, the Eleventh Circuit sidestepped the fundamental substantive problems with the FTC’s data security practice (arguments we have made in both our scholarship and our LabMD amicus brief) concerning notice and substantial injury. Instead, the court decided to assume the FTC had proven its case and focused exclusively on the remedy.

We will assume arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data-security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.

What the Eleventh Circuit did address, though, was that the remedies the FTC had been routinely applying to businesses through its data enforcement actions lacked the necessary specificity in order to be enforceable through injunctions or cease and desist orders.

In the case at hand, the cease and desist order contains no prohibitions. It does not instruct LabMD to stop committing a specific act or practice. Rather, it commands LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness. This command is unenforceable. Its unenforceability is made clear if we imagine what would take place if the Commission sought the order’s enforcement…

The Commission moves the district court for an order requiring LabMD to show cause why it should not be held in contempt for violating the following injunctive provision:

[T]he respondent shall … establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers…. Such program… shall contain administrative, technical, and physical safeguards appropriate to respondent’s size and complexity, the nature and scope of respondent’s activities, and the sensitivity of the personal information collected from or about consumers….

The Commission’s motion alleges that LabMD’s program failed to implement “x” and is therefore not “reasonably designed.” The court concludes that the Commission’s alleged failure is within the provision’s language and orders LabMD to show cause why it should not be held in contempt.

At the show cause hearing, LabMD calls an expert who testifies that the data-security program LabMD implemented complies with the injunctive provision at issue. The expert testifies that “x” is not a necessary component of a reasonably designed data-security program. The Commission, in response, calls an expert who disagrees. At this point, the district court undertakes to determine which of the two equally qualified experts correctly read the injunctive provision. Nothing in the provision, however, indicates which expert is correct. The provision contains no mention of “x” and is devoid of any meaningful standard informing the court of what constitutes a “reasonably designed” data-security program. The court therefore has no choice but to conclude that the Commission has not proven — and indeed cannot prove — LabMD’s alleged violation by clear and convincing evidence.

In other words, the Eleventh Circuit found that an order requiring a reasonable data security program is not specific enough to make it enforceable. This leaves questions as to whether the FTC’s requirement of a “reasonable data security program” is specific enough to survive a motion to dismiss and/or a fair notice challenge going forward.

Under the Federal Rules of Civil Procedure, a plaintiff must provide “a short and plain statement . . . showing that the pleader is entitled to relief,” Fed. R. Civ. P. 8(a)(2), including “enough facts to state a claim . . . that is plausible on its face.” Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007). “[T]hreadbare recitals of the elements of a cause of action, supported by mere conclusory statements” will not suffice. Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009). In FTC v. D-Link, for instance, the Northern District of California dismissed the unfairness claims because the FTC did not sufficiently plead injury. 

[T]hey make out a mere possibility of injury at best. The FTC does not identify a single incident where a consumer’s financial, medical or other sensitive personal information has been accessed, exposed or misused in any way, or whose IP camera has been compromised by unauthorized parties, or who has suffered any harm or even simple annoyance and inconvenience from the alleged security flaws in the DLS devices. The absence of any concrete facts makes it just as possible that DLS’s devices are not likely to substantially harm consumers, and the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor. 

The fair notice question wasn’t reached in LabMD, though it was in FTC v. Wyndham. But the Third Circuit did not analyze the FTC’s data security regime under the “ascertainable certainty” standard applied to agency interpretation of a statute.

Wyndham’s position is unmistakable: the FTC has not yet declared that cybersecurity practices can be unfair; there is no relevant FTC rule, adjudication or document that merits deference; and the FTC is asking the federal courts to interpret § 45(a) in the first instance to decide whether it prohibits the alleged conduct here. The implication of this position is similarly clear: if the federal courts are to decide whether Wyndham’s conduct was unfair in the first instance under the statute without deferring to any FTC interpretation, then this case involves ordinary judicial interpretation of a civil statute, and the ascertainable certainty standard does not apply. The relevant question is not whether Wyndham had fair notice of the FTC’s interpretation of the statute, but whether Wyndham had fair notice of what the statute itself requires.

In other words, Wyndham boxed itself into a corner by arguing that it did not have fair notice that the FTC could bring a data security enforcement action against it under Section 5 unfairness. LabMD, on the other hand, argued it did not have fair notice as to how the FTC would enforce its data security standards. Cf. ICLE-Techfreedom Amicus Brief at 19. The Third Circuit even suggested that under an “ascertainable certainty” standard, the FTC failed to provide fair notice: “we agree with Wyndham that the guidebook could not, on its own, provide ‘ascertainable certainty’ of the FTC’s interpretation of what specific cybersecurity practices fail § 45(n).” Wyndham, 799 F.3d at 256 n.21.

Most importantly, the Eleventh Circuit did not actually reach the issue of whether LabMD violated the law under the factual record developed in the case. This means there is still no caselaw (aside from the ALJ decision in this case) that would allow a company to learn what is and what is not reasonable data security, or what counts as a substantial injury for the purposes of Section 5 unfairness in data security cases.

How FTC’s changes fundamentally fail to address its failures of process

The FTC’s new approach to its orders is billed as directly responsive to what the Eleventh Circuit did reach in the LabMD decision, but it leaves in place much of what makes the process insufficient.

First, it is notable that while the FTC highlights changes to its orders, there is still a lack of legal analysis in the orders that would allow a company to accurately predict whether its data security practices are enough under the law. A listing of what specific companies under consent orders are required to do is helpful. But these consent decrees do not require companies to admit liability or contain anything close to the reasoning that accompanies court opinions or normal agency guidance on complying with the law. 

For instance, the general formulation in these 2019 orders is that the company must “establish, implement, and maintain a comprehensive information/software security program that is designed to protect the security, confidentiality, and integrity of such personal information. To satisfy this requirement, Respondent/Defendant must, at a minimum…” (emphasis added), followed by a list of fairly similar requirements with variation depending on the business. Even if a company does all of the listed requirements but a breach occurs, the FTC is not obligated to find the data security program was legally sufficient. There is no safe harbor or presumptive reasonableness that attaches even for the business subject to the order, let alone companies looking for guidance.

While the FTC does now require more specific things, like “yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption,” there is still no analysis on how to meet the standard of reasonableness the FTC relies upon. In other words, it is not clear that this new approach to orders does anything to increase fair notice to companies as to what the FTC requires under Section 5 unfairness.

Second, nothing about the underlying process has really changed. The FTC can still investigate and prosecute cases through administrative law courts with itself as the initial court of appeal. This makes the FTC the police, prosecutor, and judge in its own case. In the case of LabMD, which actually won after many appeals, this process ended in bankruptcy. It is no surprise that since the LabMD decision, each of the FTC’s data security enforcement cases has been settled with a consent order, just as they were before the Eleventh Circuit opinion.

Unfortunately, if the FTC really wants to evolve its data security process like the common law, it needs to engage in an actual common law process. Without caselaw on the facts necessary to establish substantial injury, “unreasonable” data security practices, and causation, there will continue to be more questions than answers about what the law requires. And without changes to the process, the FTC will continue to be able to strong-arm companies into consent decrees.

Big Ink vs. Bigger Tech

Ramsi Woodcock —  30 December 2019

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Ramsi Woodcock, Assistant Professor, College of Law, and Assistant Professor, Department of Management at Gatton College of Business & Economics, University of Kentucky.

When in 2011 Paul Krugman attacked the press for bending over backwards to give equal billing to conservative experts on social security, even though the conservatives were plainly wrong, I celebrated. Social security isn’t the biggest part of the government’s budget, and calls to privatize it in order to save the country from bankruptcy were blatant fear mongering. Why should the press report those calls with a neutrality that could mislead readers into thinking the position reasonable?

Journalists’ ethic of balanced reporting looked, at the time, like gross negligence at best, and deceit at worst. But lost in the pathos of the moment was the rationale behind that ethic, which is not so much to ensure that the truth gets into print as to prevent the press from making policy. For if journalists do not practice balance, then they ultimately decide the angle to take.

And journalists, like the rest of us, will choose their own.

The dark underbelly of the engaged journalism unleashed by progressives like Krugman has nowhere been more starkly exposed than in the unfolding assault of journalists, operating as a special interest, on Google, Facebook, and Amazon, three companies that writers believe have decimated their earnings over the past decade.

In story after story, journalists have manufactured an antitrust movement aimed at breaking up these companies, even though virtually no expert in antitrust law or economics, on either the right or the left, can find an antitrust case against them, and virtually no expert would place any of these three companies at the top of the genuinely long list of monopolies in America that are due for an antitrust reckoning.

Bitter ledes

Headlines alone tell the story. We have: “What Happens After Amazon’s Domination Is Complete? Its Bookstore Offers Clues”; “Be Afraid, Jeff Bezos, Be Very Afraid”; “How Should Big Tech Be Reined In? Here Are 4 Prominent Ideas”; “The Case Against Google”; and “Powerful Coalition Pushes Back on Anti-Tech Fervor.”

My favorite is: “It’s Time to Break Up Facebook.” Unlike the others, it belongs to an Op-Ed, so a bias is appropriate. Not appropriate, however, is the howler, contained in the article’s body, that “a host of legal scholars like Lina Khan, Barry Lynn and Ganesh Sitaraman are plotting a way forward” toward breakup. Lina Khan has never held an academic appointment. Barry Lynn does not even have a law degree. And Ganesh Sitaraman’s academic specialty is constitutional law, not antitrust. But editors let it through anyway.

As this unguarded moment shows, the press has treated these and other members of a small network of activists and legal scholars who operate on antitrust’s fringes as representative of scholarly sentiment regarding antitrust action. The only real antitrust scholar among them is Tim Wu, who, when you look closely at his public statements, has actually gone no further than to call for Facebook to unwind its acquisitions of Instagram and WhatsApp.

In more sober moments, the press has acknowledged that the law does not support antitrust attacks on the tech giants. But instead of helping readers to understand why, the press instead presents this as a failure of the law. “To Take Down Big Tech,” read one headline in The New York Times, “They First Need to Reinvent the Law.” I have documented further instances of unbalanced reporting here.

This is not to say that we don’t need more antitrust in America. Herbert Hovenkamp, whom the New York Times once recognized as “the dean of American antitrust law” but has since downgraded to “an antitrust expert” after he came out against the breakup movement, has advocated stronger monopsony enforcement across labor markets. Einer Elhauge at Harvard is pushing to prevent index funds from inadvertently generating oligopolies in markets ranging from airlines to pharmacies. NYU economist Thomas Philippon has called for deconcentration of banking. Yale’s Fiona Morton has pointed to rising markups across the economy as a sign of lax antitrust enforcement. Jonathan Baker has argued with great sophistication for more antitrust enforcement in general.

But no serious antitrust scholar has traced America’s concentration problem to the tech giants.

Advertising monopolies old and new

So why does the press have an axe to grind with the tech giants? The answer lies in the creative destruction wrought by Amazon on the publishing industry, and Google and Facebook upon the newspaper industry.

Newspapers were probably the most durable monopolies of the 20th century, so lucrative that Warren Buffett famously picked them as his preferred example of businesses with “moats” around them. But that wasn’t because readers were willing to pay top dollar for newspapers’ reporting. Instead, that was because, incongruously for organizations dedicated to exposing propaganda of all forms on their front pages, newspapers have long striven to fill every other available inch of newsprint with that particular kind of corporate propaganda known as commercial advertising.

It was a lucrative arrangement. Newspapers exhibit powerful network effects, meaning that the more people read a paper the more advertisers want to advertise in it. As a result, many American cities came to have but one major newspaper monopolizing the local advertising market.

One such local paper, the Lorain Journal of Lorain, Ohio, sparked a case that has since become part of the standard antitrust curriculum in law schools. The paper tried to leverage its monopoly to destroy a local radio station that was competing for its advertising business. The Supreme Court affirmed liability for monopolization.

In the event, neither radio nor television ultimately undermined newspapers’ advertising monopolies. But the internet is different. Radio, television, and newspaper advertising can coexist, because they can target only groups, and often not the same ones, minimizing competition between them. The internet, by contrast, reaches individuals, making it strictly superior to group-based advertising. The internet also lets at least some firms target virtually all individuals in the country, allowing those firms to compete with all comers.

You might think that newspapers, which quickly became an important web destination, were perfectly positioned to exploit the new functionality. But being a destination turned out to be a problem. Consumers reveal far more valuable information about themselves to web gateways, like search and social media, than to particular destinations, like newspaper websites. But consumer data is the key to targeted advertising.

That gave Google and Facebook a competitive advantage, and because these companies also enjoy network effects—search and social media get better the more people use them—they inherited the newspapers’ old advertising monopolies.

That was a catastrophe for journalists, whose earnings and employment prospects plummeted. It was also a catastrophe for the public, because newspapers have a tradition of plowing their monopoly profits into investigative journalism that protects democracy, whereas Google and Facebook have instead invested their profits in new technologies like self-driving cars and cryptocurrencies.

The catastrophe of countervailing power

Amazon has found itself in journalists’ crosshairs for disrupting another industry that feeds writers: publishing. Book distribution was Amazon’s first big market, and Amazon won it, driving most brick-and-mortar booksellers into bankruptcy. Publishing had long been dominated by a few big houses that used their power to extract high wholesale prices from booksellers, passing some of the profit on to authors as royalties. Now it faced a distribution industry even more concentrated and powerful than publishing itself. The Department of Justice stamped out a desperate attempt by publishers to cartelize in response, and profits, and author royalties, have continued to fall.

Journalists, of course, are writers, and the disruption of publishing, taken together with the disruption of news, has left journalists with the impression that they have nowhere to turn to escape the new economy.

The abuse of antitrust

Except antitrust.

Unschooled in the fine points of antitrust policy, journalists find it obvious that the Armageddon in newspapers and publishing is a problem of monopoly and that antitrust enforcers should do something about it.

Only it isn’t and they shouldn’t. The courts have gone to great lengths over the past 130 years to distinguish between doing harm to competition, which is prohibited by the antitrust laws, and doing harm to competitors, which is not.

Disrupting markets by introducing new technologies that make products better is no antitrust violation, even if doing so does drive legacy firms into bankruptcy, and throws their employees out of work and into the streets. Because disruption is really the only thing capitalism has going for it. Disruption is the mechanism by which market economies generate technological advances and improve living standards in the long run. The antitrust laws are not there to preserve old monopolies and oligopolies such as those long enjoyed by newspapers and publishers.

In fact, by tearing down barriers to market entry, the antitrust laws strive to do the opposite: to speed the destruction and replacement of legacy monopolies with new and more innovative ones.

That’s why the entire antitrust establishment has stayed on the sidelines in the tech fight. It’s hard to think of three companies that have more obviously risen to prominence over the past generation by disrupting markets with superior technologies than Amazon, Google, and Facebook. It may be possible to find an anticompetitive practice here or there—I certainly have—but no serious antitrust scholar thinks the heart of these firms’ continued dominance lies anywhere other than in their technical savvy. The nuclear option of breaking up these firms just makes no sense.

Indeed, the disruption inflicted by these firms on newspapers and publishing is a measure of the extent to which these firms have improved book distribution and advertising, just as the vast disruption created by the industrial revolution was a symptom of the extraordinary technological advances of that period. Few people, and not even Karl Marx, thought that the solution to those disruptions lay with Ned Ludd. The solution to the disruption wrought by Google, Amazon, and Facebook today similarly does not lie in using the antitrust laws to smash the machines.

Governments eventually learned to address the disruption created by the original industrial revolution not by breaking up the big firms that brought that revolution about, but by using tax and transfer, and rate regulation, to ensure that the winners share their gains with the losers. However the press’s campaign turns out, rate regulation, not antitrust, is ultimately the approach that government will take to Amazon, Google, and Facebook if these companies continue to grow in power. Because we don’t have to decide between social justice and technological advance. We can have both. And voters will demand it.

The anti-progress wing of the progressive movement

Alas, smashing the machines is precisely what journalists and their supporters are demanding in calling for the breakup of Amazon, Google, and Facebook. Zephyr Teachout, for example, recently told an audience at Columbia Law School that she would ban targeted advertising except for newspapers. That would restore newspapers’ old advertising monopolies, but also make targeted advertising less effective, for the same reason that Google and Facebook are the preferred choice of advertisers today. (Of course, making advertising more effective might not be a good thing. More on this below.)

This contempt for technological advance has been coupled with a broader anti-intellectualism, best captured by an extraordinary remark made by Barry Lynn, director of the pro-breakup Open Markets Institute, and sometime advocate for the Authors Guild. The Times quotes him saying that because the antitrust laws once contained a presumption against mergers to market shares in excess of 25%, all policymakers have to do to get antitrust right is “be able to count to four. We don’t need economists to help us count to four.”

But size really is not a good measure of monopoly power. Ask Nokia, which controlled more than half the market for cell phones in 2007, on the eve of Apple’s introduction of the iPhone, but saw its share fall almost to zero by 2012. Or Walmart, the nation’s largest retailer and a monopolist in many smaller retail markets, which nevertheless saw its stock fall after Amazon announced one-day shipping.

Journalists themselves acknowledge that size does not always translate into power when they wring their hands about the Amazon-driven financial troubles of large retailers like Macy’s. Determining whether a market lacks competition really does require more than counting the number of big firms in the market.

I keep waiting for a devastating critique of arguments that Amazon operates in highly competitive markets to emerge from the big tech breakup movement. But that’s impossible for a movement that rejects economics as a corporate plot. Indeed, even an economist as pro-antitrust as Thomas Philippon, who advocates a return to antitrust’s mid-20th century golden age of massive breakups of firms like Alcoa and AT&T, affirms in a new book that American retail is actually a bright spot in an otherwise concentrated economy.

But you won’t find journalists highlighting that. The headline of a Times column promoting Philippon’s book? “Big Business Is Overcharging You $5,000 a Year.” I tend to agree. But given all the anti-tech fervor in the press, Philippon’s chapter on why the tech giants are probably not an antitrust problem ought to get a mention somewhere in the column. It doesn’t.

John Maynard Keynes famously observed that “though no one will believe it—economics is a technical and difficult subject.” So too antitrust. A failure to appreciate the field’s technical difficulty is manifest also in Democratic presidential candidate Elizabeth Warren’s antitrust proposals, which were heavily influenced by breakup advocates.

Warren has argued that no large firm should be able to compete on its own platforms, not seeming to realize that doing business means competing on your own platforms. To show up to work in the morning in your own office space is to compete on a platform, your office, from which you exclude competitors. The rule that large firms (defined by Warren as those with more than $25 billion in revenues) cannot compete on their own platforms would just make doing large amounts of business illegal, a result that Warren no doubt does not desire.

The power of the press

The press’s campaign against Amazon, Google, and Facebook is working. Because while they may not be as well financed as Amazon, Google, or Facebook, writers can offer their friends something more valuable than money: publicity.

That appears to have induced a slew of politicians, including both Senator Warren on the left and Senator Josh Hawley on the right, to pander to breakup advocates. The House antitrust investigation into the tech giants, led by a congressman who is simultaneously championing legislation advocated by the News Media Alliance, a newspaper trade group, to give newspapers an exemption from the antitrust laws, may also have similar roots. So too the investigations announced by dozens of elected state attorneys general.

The investigations recently opened by the FTC and Department of Justice may signal no more than a desire not to look idle while so many others act. Which is why the press has the power to turn fiction into reality. Moreover, under the current Administration, the Department of Justice has already undertaken two suspiciously partisan antitrust investigations, and President Trump has made clear his hatred for the liberal bastions that are Amazon, Google and Facebook. The fact that the press has made antitrust action against the tech giants a progressive cause provides convenient cover for the President to take down some enemies.

The future of the news

Rate regulation of Amazon, Google, or Facebook is the likely long-term resolution of concerns about these firms’ power. But that won’t bring back newspapers, which henceforth will always play the loom to Google and Facebook’s textile mills, at least in the advertising market.

Journalists and their defenders, like Teachout, have been pushing to restore newspapers’ old monopolies by government fiat. No doubt that would make existing newspapers, and their staffs, very happy. But what is good for Big News is not necessarily good for journalism in the long run.

The silver lining to the disruption of newspapers’ old advertising monopolies is that it has created an opportunity for newspapers to wean themselves off a funding source that has always made little sense for organizations dedicated to helping Americans make informed, independent decisions, free of the manipulation of others.

For advertising has always had a manipulative function, alongside its function of disseminating product information to consumers. And, as I have argued elsewhere, now that the vast amounts of product information available for free on the internet have made advertising obsolete as a source of product information, manipulation is now advertising’s only real remaining function.

Manipulation causes consumers to buy products they don’t really want, giving firms that advertise a competitive advantage that they don’t deserve. That makes for an antitrust problem, this time with real consequences not just for competitors, but also for technological advance, as manipulative advertising drives dollars away from superior products toward advertised products, and away from investment in innovation and toward investment in consumer seduction.

The solution is to ban all advertising, targeted or not, rather than to give newspapers an advertising monopoly. And to give journalism the state subsidies that, like those for all public goods, from defense to highways, are journalism’s genuine due. The BBC provides a model of how that can be done without fear of government influence.

Indeed, Teachout’s proposed newspaper advertising monopoly is itself just a government subsidy, but a subsidy extracted through an advertising medium that harms consumers. Direct government subsidization achieves the same result, without the collateral consumer harm.

The press’s brazen advocacy of antitrust action against the tech giants, advocacy that never makes clear how much the press itself stands to gain and that lacks any expert support, represents an abdication by the press of its responsibility to create an informed citizenry, one every bit as profound as the press’s lapses on social security a decade ago.

I’m glad we still have social security. But I’m also starting to miss balanced journalism.

1/3/2020: Editor’s note – this post was edited for clarification and minor copy edits.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Geoffrey A. Manne, president and founder of the International Center for Law & Economics, and Alec Stapp, Research Fellow at the International Center for Law & Economics.

Source: The Economist

Is there a relationship between concentrated economic power and political power? Do big firms have success influencing politicians and regulators to a degree that smaller firms — or even coalitions of small firms — could only dream of? That seems to be the narrative that some activists, journalists, and scholars are pushing of late. And, to be fair, it makes some intuitive sense (before you look at the data). The biggest firms have the most resources — how could they not have an advantage in the political arena?

The argument that corporate power leads to political power faces at least four significant challenges, however. First, the little empirical research there is does not support the claim. Second, there is almost no relationship between market capitalization (a proxy for economic power) and lobbying expenditures (an admittedly weak proxy for political power). Third, the absolute level of spending on lobbying in the US is surprisingly low given the potential benefits from rent-seeking (this is known as the Tullock paradox). Lastly, the proposed remedy for this supposed problem is to make antitrust more political — an intervention that is likely to make the problem worse rather than better (assuming there is a problem to begin with).

The claims that political power follows economic power

The claim that large firms or industry concentration causes political power (and thus that under-enforcement of antitrust laws is a key threat to our democratic system of government) is often repeated, and accepted as a matter of faith. Take, for example, Robert Reich’s March 2019 Senate testimony on “Does America Have a Monopoly Problem?”:

These massive corporations also possess substantial political clout. That’s one reason they’re consolidating: They don’t just seek economic power; they also seek political power.

Antitrust laws were supposed to stop what’s been going on.

* * *

[S]uch large size and gigantic capitalization translate into political power. They allow vast sums to be spent on lobbying, political campaigns, and public persuasion. (emphasis added)

Similarly, in an article in August of 2019 for The Guardian, law professor Ganesh Sitaraman argued there is a tight relationship between economic power and political power:

[R]eformers recognized that concentrated economic power — in any form — was a threat to freedom and democracy. Concentrated economic power not only allowed for localized oppression, especially of workers in their daily lives, it also made it more likely that big corporations and wealthy people wouldn’t be subject to the rule of law or democratic controls. Reformers’ answer to the concentration of economic power was threefold: break up economic power, rein it in through regulation, and tax it.

It was the reformers of the Gilded Age and Progressive Era who invented America’s antitrust laws — from the Sherman Antitrust Act of 1890 to the Clayton Act and Federal Trade Commission Acts of the early 20th century. Whether it was Republican trust-buster Teddy Roosevelt or liberal supreme court justice Louis Brandeis, courageous leaders in this era understood that when companies grow too powerful they threatened not just the economy but democratic government as well. Break-ups were a way to prevent the agglomeration of economic power in the first place, and promote an economic democracy, not just a political democracy. (emphasis added)

Luigi Zingales made a similar argument in his 2017 paper “Towards a Political Theory of the Firm”:

[T]he interaction of concentrated corporate power and politics is a threat to the functioning of the free market economy and to the economic prosperity it can generate, and a threat to democracy as well. (emphasis added)

The assumption that economic power leads to political power is not a new one. Not only, as Zingales points out, have political thinkers since Adam Smith asserted versions of the same, but more modern social scientists have continued the claims with varying (but always indeterminate) degrees of quantification. Zingales quotes Adolf Berle and Gardiner Means’ 1932 book, The Modern Corporation and Private Property, for example:

The rise of the modern corporation has brought a concentration of economic power which can compete on equal terms with the modern state — economic power versus political power, each strong in its own field. 

Russell Pittman (an economist at the DOJ Antitrust Division) argued in 1988 that rent-seeking activities would be undertaken only by firms in highly concentrated industries because:

if the industry in question is unconcentrated, then the firm may decide that the level of benefits accruing to the industry will be unaffected by its own level of contributions, so that the benefits may be enjoyed without incurrence of the costs. Such a calculation may be made by other firms in the industry, of course, with the result that a free-rider problem prevents firms individually from making political contributions, even if it is in their collective interest to do so.

For the most part, the claims are almost entirely theoretical and their support anecdotal. Reich, for example, supports his claim with two thin anecdotes from which he draws a firm (but, in fact, unsupported) conclusion:

To take one example, although the European Union filed fined [sic] Google a record $2.7 billion for forcing search engine users into its own shopping platforms, American antitrust authorities have not moved against the company.

Why not?… We can’t be sure why the FTC chose not to pursue Google. After all, section 5 of the Federal Trade Commission Act of 1914 gives the Commission broad authority to prevent unfair acts or practices. One distinct possibility concerns Google’s political power. It has one of the biggest lobbying powerhouses in Washington, and the firm gives generously to Democrats as well as Republicans.

A clearer example of an abuse of power was revealed last November when the New York Times reported that Facebook executives withheld evidence of Russian activity on their platform far longer than previously disclosed.

Even more disturbing, Facebook employed a political opposition research firm to discredit critics. How long will it be before Facebook uses its own data and platform against critics? Or before potential critics are silenced even by the possibility? As the Times’s investigation made clear, economic power cannot be separated from political power. (emphasis added)

The conclusion — that “economic power cannot be separated from political power” — simply does not follow from the alleged evidence. 

The relationship between economic power and political power is extremely weak

Few of these assertions of the relationship between economic and political power are backed by empirical evidence. Pittman’s 1988 paper is empirical (as is his previous 1977 paper looking at the relationship between industry concentration and contributions to Nixon’s re-election campaign), but it is also in direct contradiction to several other empirical studies (Zardkoohi (1985); Munger (1988); Esty and Caves (1983)) that find no correlation between concentration and political influence; Pittman’s 1988 paper is indeed a response to those papers, in part. 

In fact, as one study (Grier, Munger & Roberts (1991)) summarizes the evidence:

[O]f ten empirical investigations by six different authors/teams…, relatively few of the studies find a positive, significant relation between contributions/level of political activity and concentration, though a variety of measures of both are used…. 

There is little to recommend most of these studies as conclusive one way or the other on the question of interest. Each one suffers from a sample selection or estimation problem that renders its results suspect. (emphasis added)

And, as they point out, there is good reason to question the underlying theory of a direct correlation between concentration and political influence:

[L]egislation or regulation favorable to an industry is from the perspective of a given firm a public good, and therefore subject to Olson’s collective action problem. Concentrated industries should suffer less from this difficulty, since their sparse numbers make bargaining cheaper…. [But at the same time,] concentration itself may affect demand, suggesting that the predicted correlation between concentration and political activity may be ambiguous, or even negative. 

* * *

The only conclusion that seems possible is that the question of the correct relation between the structure of an industry and its observed level of political activity cannot be resolved theoretically. While it may be true that firms in a concentrated industry can more cheaply solve the collective action problem that inheres in political action, they are also less likely to need to do so than their more competitive brethren…. As is so often the case, the interesting question is empirical: who is right? (emphasis added)

The results of Grier, Munger & Roberts (1991)’s own empirical study are ambiguous at best (and relate only to political participation, not success, and thus not actual political power):

[A]re concentrated industries more or less likely to be politically active? Numerous previous studies have addressed this topic, but their methods are not comparable and their results are flatly contradictory. 

On the side of predicting a positive correlation between concentration and political activity is the theory that Olson’s “free rider” problem has more bite the larger the number of participants and the smaller their respective individual benefits. Opposing this view is the claim that it is precisely because such industries are concentrated that they have less need for government intervention. They can act on their own to garner the benefits of cartelization that less concentrated industries can secure only through political activity. 

Our results indicate that both sides are right, over some range of concentration. The relation between political activity and concentration is a polynomial of degree 2, rising and then falling, achieving a peak at a four-firm concentration ratio slightly below 0.5. (emphasis added)
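The shape the study describes is easy to sketch numerically: for any concave quadratic relating activity to concentration, activity peaks at the vertex. The coefficients below are made up for illustration; only the shape (rising, then falling, with a peak just below a four-firm concentration ratio of 0.5) mirrors the reported result.

```python
# Illustrative concave quadratic relating political activity to the
# four-firm concentration ratio c in [0, 1]. The coefficients are
# hypothetical; only the rise-peak-fall shape, with the peak just
# below c = 0.5, mirrors the Grier, Munger & Roberts finding.
a, b, d = 0.1, 1.9, -2.0  # activity(c) = a + b*c + d*c**2, d < 0

def activity(c):
    return a + b * c + d * c ** 2

peak = -b / (2 * d)  # vertex of the parabola
print(f"Activity peaks at c = {peak:.3f}")  # 0.475, just below 0.5
print(f"activity(0.2) = {activity(0.2):.3f}, "
      f"activity({peak}) = {activity(peak):.3f}, "
      f"activity(0.8) = {activity(0.8):.3f}")
```

Both camps in the debate are accommodated by such a curve: activity rises with concentration at low levels (the free-rider problem eases) and falls at high levels (concentrated industries can cartelize privately instead).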

Despite all of this, Zingales (like others) explicitly claims that there is a clear and direct relationship between economic power and political power:

In the last three decades in the United States, the power of corporations to shape the rules of the game has become stronger… [because] the size and market share of companies has increased, which reduces the competition across conflicting interests in the same sector and makes corporations more powerful vis-à-vis consumers’ interest.

But a quick look at the empirical data continues to call this assertion into serious question. Indeed, if we look at the lobbying expenditures of the top 50 companies in the US by market capitalization, we see an extremely weak (at best) relationship between firm size and political power (as proxied by lobbying expenditures):

Of course, once again, this says little about the effectiveness of efforts to exercise political power, which could, in theory, correlate with market power but not expenditures. Yet the evidence on this suggests that, while concentration “increases both [political] activity and success…, [n]either firm size nor industry size has a robust influence on political activity or success” (emphasis added). Of course, there are enormous and well-known problems with measuring industry concentration, and it’s not clear that even this attribute is well correlated with political activity or success. (And, interestingly for the argument that firms in more concentrated industries realize higher profits from lax antitrust and therefore have more money to spend on political influence, even concentration in the Esty and Caves study is not correlated with political expenditures.)

Indeed, a couple of examples show the wide range of lobbying expenditures for a given firm size. Costco, which currently has a market cap of $130 billion, has spent only $210,000 on lobbying so far in 2019. By contrast, Amgen, which has a $144 billion market cap, has spent $8.54 million, or more than 40 times as much. As shown in the chart above, this variance is the norm. 
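The spread becomes even starker once lobbying outlays are normalized by firm size. A minimal sketch using the figures cited above (2019 lobbying totals and market caps for Costco and Amgen; the variable names and structure are mine):

```python
# Lobbying intensity: lobbying dollars per $1bn of market cap.
# Figures are the 2019 numbers cited in the text; the data layout
# is illustrative.
firms = {
    "Costco": {"market_cap_bn": 130, "lobbying_usd": 210_000},
    "Amgen": {"market_cap_bn": 144, "lobbying_usd": 8_540_000},
}

for name, f in firms.items():
    intensity = f["lobbying_usd"] / f["market_cap_bn"]
    print(f"{name}: ${intensity:,.0f} of lobbying per $1bn of market cap")

ratio = firms["Amgen"]["lobbying_usd"] / firms["Costco"]["lobbying_usd"]
print(f"Amgen spends {ratio:.0f}x what Costco does")  # ≈ 41x
```

Two firms of nearly identical size differ in lobbying intensity by a factor of roughly forty, which is hard to square with the claim that market capitalization drives political spending.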

However, discussing the relative differences between these companies is less important than pointing out the absolute levels of expenditure. Spending eight and a half million dollars per year would not be prohibitive for literally thousands of firms in the US. If access is this cheap, what’s going on here?

Why is there so little money in US politics?

The Tullock paradox asks: if the return to rent-seeking is so high — and it plausibly is, given that the government spends trillions of dollars each year — why is so little money spent on influencing policymakers?

Considering the value of public policies at stake and the reputed influence of campaign contributors in policymaking, Gordon Tullock (1972) asked, why is there so little money in U.S. politics? In 1972, when Tullock raised this question, campaign spending was about $200 million. Assuming a reasonable rate of return, such an investment could have yielded at most $250-300 million over time, a sum dwarfed by the hundreds of billions of dollars worth of public expenditures and regulatory costs supposedly at stake.
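The arithmetic behind the paradox is stark. A back-of-the-envelope comparison using the 1972 figures quoted above; the stakes figure is my own assumed lower bound for the “hundreds of billions of dollars” of public expenditures and regulatory costs at issue.

```python
# Tullock's back-of-the-envelope (1972 figures from the text).
# `stakes` is an assumed lower bound standing in for "hundreds of
# billions of dollars" of expenditures and regulatory costs.
campaign_spending = 200e6  # total 1972 campaign spending
plausible_return = 300e6   # upper end of the imputed return over time
stakes = 100e9             # assumed lower bound on policy value at stake

print(f"Implied return on influence spending: "
      f"{plausible_return / campaign_spending:.1f}x")
print(f"Spending as a share of the value at stake: "
      f"{campaign_spending / stakes:.2%}")
# Even at the upper-bound return, spending is a fraction of a percent
# of the policy value supposedly up for grabs. That is the paradox.
```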

A recent article by Scott Alexander updated the numbers for 2019 and compared the total to the $12 billion US almond industry:

[A]ll donations to all candidates, all lobbying, all think tanks, all advocacy organizations, the Washington Post, Vox, Mic, Mashable, Gawker, and Tumblr, combined, are still worth a little bit less than the almond industry.

Maybe it’s because spending money on donations, lobbying, think tanks, journalism and advocacy is ineffective on net (i.e., spending by one group is counterbalanced by spending by another group) and businesses know it?

In his paper on elections, Ansolabehere focuses on the corporate perspective. He argues that money neither makes a candidate much more likely to win, nor buys much influence with a candidate who does win. Corporations know this, which is why they don’t bother spending more. (emphasis added)

To his credit, Zingales acknowledges this issue:

To the extent that US corporations are exercising political influence, it seems that they are choosing less-visible but perhaps more effective ways. In fact, since Gordon Tullock’s (1972) famous article, it has been a puzzle in political science why there is so little money in politics (as discussed in this journal by Ansolabehere, de Figueiredo, and Snyder 2003).

So, what are these “less-visible but perhaps more effective” ways? Unfortunately, the evidence in support of this claim is anecdotal and unconvincing. As noted above, Reich offers only speculation and extremely weak anecdotal assertions. Meanwhile, Zingales tells the story of Robert (mistakenly identified in the paper as “Richard”) Rubin pushing through repeal of Glass-Steagall to benefit Citigroup, then getting hired for $15 million a year when he left the government. Assuming the implication is actually true, is that amount really beyond the reach of all but the largest companies? How many banks with an interest in the repeal of Glass-Steagall were really unlikely at the time to be able to credibly offer future compensation because they would be out of business? Very few, and no doubt some of the biggest and most powerful were arguably at greater risk of bankruptcy than some of the smaller banks.

Maybe only big companies have an interest in doing this kind of thing because they have more to lose? But in concentrated industries they also have more to lose by conferring the benefit on their competitors. And it’s hard to make the repeal or passage of a law, say, apply only to you and not everyone else in the industry. Maybe they collude? Perhaps, but is there any evidence of this? Zingales offers only pure speculation here, as well. For example, why was the US Google investigation dropped but not the EU one? Clearly because of White House visits, says Zingales. OK — but how much do these visits cost firms? If that’s the source of political power, it surely doesn’t require monopoly profits to obtain it. And it’s virtually impossible that direct relationships of this kind are beyond the reach of coalitions of smaller firms, or even small firms, full stop.  

In any case, the political power explanation turns mostly on doling out favors in exchange for individuals’ payoffs — which just aren’t that expensive, and it’s doubtful that the size of a firm correlates with the quality of its one-on-one influence brokering, except to the extent that causation might run the other way — which would be an indictment not of size but of politics. Of course, in the Hobbesian world of political influence brokering, as in the Hobbesian world of pre-political society, size alone is not determinative so long as alliances can be made or outcomes turn on things other than size (e.g., weapons in the pre-Hobbesian world; family connections in the world of political influence).

The Noerr–Pennington doctrine is highly relevant here as well. In Noerr, the Court ruled that “no violation of the [Sherman] Act can be predicated upon mere attempts to influence the passage or enforcement of laws” and “[j]oint efforts to influence public officials do not violate the antitrust laws even though intended to eliminate competition.” This would seem to explain, among other things, the existence of trade associations and other entities used by coalitions of small (and large) firms to influence the policymaking process.

If what matters for influence peddling is ultimately individual relationships and lobbying power, why aren’t the biggest firms in the world the lobbying firms and consultant shops? Why is Rubin selling out for $15 million a year if the benefit to Citigroup is in the billions? And, if concentration is the culprit, why isn’t it plausibly also the solution? It isn’t only the state that keeps the power of big companies in check; it’s other big companies, too. What Henry G. Manne said in his testimony on the Industrial Reorganization Act of 1973 remains true today: 

There is simply no correlation between the concentration ratio in an industry, or the size of its firms, and the effectiveness of the industry in the halls of Government.

In addition to the data presented earlier, this analysis would be incomplete if it did not mention the role of advocacy groups in influencing outcomes, the importance and size of large foundations, the role of unions, and the role of individual relationships.

Maybe voters matter more than money?

The National Rifle Association spends very little on direct lobbying efforts (less than $10 million over the most recent two-year cycle). The organization’s total annual budget is around $400 million. In the grand scheme of things, these are not overwhelming resources. But the NRA is widely regarded as one of the most powerful political groups in the country, particularly within the Republican Party. How could this be? In short, maybe it’s not Sturm Ruger, Remington Outdoor, and Smith & Wesson — the three largest gun manufacturers in the US — that influence gun regulations; maybe it’s the highly-motivated voters who like to buy guns.

The NRA has 5.5 million members, many of whom vote in primaries with gun rights as one of their top issues — if not the top issue. And with low turnout in primaries — only 8.7% of all registered voters participated in 2018 Republican primaries — a candidate seeking the Republican nomination all but has to secure an endorsement from the NRA. On this issue at least, the deciding factor is the intensity of voter preferences, not the magnitude of campaign donations from rent-seeking corporations.

The NRA is not the only counterexample to arguments like those from Zingales. Auto dealers are a constituency that is powerful not because of its raw size but because of its dispersed nature. At the state level, almost every political district has an auto dealership (and the owners are some of the wealthiest and best-connected individuals in the area). It’s no surprise, then, that most states ban the direct sale of cars from manufacturers (i.e., you have to go through a dealer). This results in higher prices for consumers and lower output for manufacturers. But the auto dealership industry is not highly concentrated at the national level. The dealers don’t need to spend millions of dollars lobbying federal policymakers for special protections; they can do it at the local level — on a state-by-state basis — for much less money (and without merging into consolidated national chains).

Another, more recent, case highlights the factors besides money that may affect political decisions. President Trump has been highly critical of Jeff Bezos and the Washington Post (which Bezos owns) since the beginning of his administration because he views the newspaper as a political enemy. In October, Microsoft beat out Amazon for a $10 billion contract to provide cloud infrastructure for the Department of Defense (DoD). Now, Amazon is suing the government, claiming that Trump improperly influenced the competitive bidding process and cost the company a fair shot at the contract. This case is a good example of how money may not be determinative at the margin, and also how multiple “monopolies” may have conflicting incentives and we don’t know how they net out.

Politicizing antitrust will only make this problem worse

At the FTC’s “Hearings on Competition and Consumer Protection in the 21st Century,” Barry Lynn of the Open Markets Institute advocated using antitrust to counter the political power of economically powerful firms:

[T]he main practical goal of antimonopoly is to extend checks and balances into the political economy. The foremost goal is not and must never be efficiency. Markets are made, they do not exist in any platonic ether. The making of markets is a political and moral act.

In other words, the goal of breaking up economic power is not to increase economic benefits but to decrease political influence. 

But as the author of one of the empirical analyses of the relationship between economic and political power notes, the asserted “solution” to the unsupported “problem” of excess political influence by economically powerful firms — more and easier antitrust enforcement — may actually make the alleged problem worse:

Economic rents may be obtained through the process of market competition or be obtained by resorting to governmental protection. Rational firms choose the least costly alternative. Collusion to obtain governmental protection will be less costly, the higher the concentration, ceteris paribus. However, high concentration in itself is neither necessary nor sufficient to induce governmental protection.

The result that rent-seeking activity is triggered when firms are affected by government regulation has a clear implication: to reduce rent-seeking waste, governmental interference in the market place needs to be attenuated. Pittman’s suggested approach, however, is “to maintain a vigorous antitrust policy” (p. 181). In fact, a more strict antitrust policy may exacerbate rent-seeking. For example, the firms which will be affected by a vigorous application of antitrust laws would have incentive to seek moderation (or rents) from Congress or from the enforcement officials.

Rent-seeking by smaller firms could both be more prevalent and, paradoxically, ultimately lead to increased concentration. And imbuing antitrust with an ill-defined set of vague political objectives (as many proponents of these arguments desire) would also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so. 

And if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? With an expanded basis for increased enforcement, the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might find that we end up with even more concentration because the exceptions could subsume the rules. All of which of course highlights the fundamental, underlying irony of claims that we need to diminish the economic content of antitrust in order to reduce the political power of private firms: If you make antitrust more political, you’ll get less democratic, more politically determined, results.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But, the facts may not fit the theory.
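The economic theory behind the folklore — eliminating double marginalization — is easy to check with a toy model. Below is a minimal sketch, assuming a hypothetical linear demand curve P = 10 − Q and a marginal cost of 2; none of these numbers describe the actual PepsiCo case.

```python
# Toy model of double marginalization: linear demand P = 10 - Q and an
# upstream marginal cost c = 2. All numbers are hypothetical, not drawn
# from the PepsiCo/KFC facts.

def integrated():
    # Integrated monopolist: max (10 - Q - 2) * Q  ->  Q = 4
    q = 4.0
    p = 10 - q                           # retail price = 6
    return p, (p - 2) * q                # price, total profit

def double_margin():
    # Downstream takes wholesale price w and maximizes (10 - Q - w) * Q,
    # giving Q = (10 - w) / 2. Anticipating that, upstream maximizes
    # (w - 2) * (10 - w) / 2, giving w = 6.
    w = 6.0
    q = (10 - w) / 2                     # output = 2
    p = 10 - q                           # retail price = 8
    return p, (w - 2) * q + (p - w) * q  # price, combined profit

print(integrated())     # (6.0, 16.0)
print(double_margin())  # (8.0, 12.0)
```

The integrated firm charges consumers less (6 vs. 8) and earns more in total (16 vs. 12) than the two independently priced layers, which is why the folklore fits the theory so neatly.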

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevys were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevys and Papa Gino’s have filed for bankruptcy and Chevys has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy was a failure, it seems odd that the company would continue acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged buyout era added financial pressure. Many restaurant groups were filing for bankruptcy and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued midwest farmers from shortages that would have decimated their businesses, it invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.
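The risk asymmetry described in that passage can be illustrated with toy numbers. A minimal sketch follows; the prices and payoffs are hypothetical, not drawn from the book.

```python
# Stylized comparison (all numbers hypothetical) of two ways to bet on an
# expected oil oversupply: shorting the price outright vs. buying cheap
# leases on storage barges.

def short_payoff(price_move):
    """Short 1 unit: gains if the price falls, open-ended loss if it rises."""
    return -price_move

def barge_lease_payoff(glut_happens):
    """Lease bought cheaply at 10. If the glut hits, scarce storage lets the
    lease resell at 30; if not, the lease can still be used or resold at cost."""
    return (30 - 10) if glut_happens else (10 - 10)

# If the forecast is wrong and prices rise by 15, the short loses 15,
# while the barge strategy roughly breaks even.
print(short_payoff(15), barge_lease_payoff(False))   # -15 0
# If the glut does hit, both strategies win, but the barge strategy's
# downside was capped all along.
print(short_payoff(-15), barge_lease_payoff(True))   # 15 20
```

The point of the sketch is the capped downside: the lease position converts a directional bet into something closer to an option.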

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market, which is to discover slack or scarce resources in the system and manage them in a way that they will be available for utilization when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service, which kept track of oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning”  per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs  — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?

It seems almost silly to ask if the last firm in a dying industry is a monopolist. But, it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself, rather than the alternative: what if the firm, and therefore the industry, did not exist at all?

A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”

The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit by ∏, and deadweight loss by H.

The authors point out that if fixed costs (or entry costs) are so high that the firm does not enter the market, the deadweight loss is equal to CS + H.
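That comparison is easy to quantify in a toy linear market. A minimal sketch with hypothetical numbers (inverse demand P = 10 − Q, marginal cost 2), not taken from the Vox post:

```python
# Hypothetical linear market: inverse demand P = 10 - Q, marginal cost c = 2.
# Compares the deadweight loss of monopoly pricing (H) with the loss
# when fixed costs keep the firm out of the market entirely (CS + H).

c = 2.0
q_comp = 10 - c           # competitive output: P = MC  ->  Q = 8
q_mono = (10 - c) / 2     # monopoly output: MR = MC  ->  Q = 4
p_mono = 10 - q_mono      # monopoly price = 6

cs = 0.5 * q_mono * (10 - p_mono)           # consumer surplus under monopoly = 8
profit = (p_mono - c) * q_mono              # monopoly profit = 16
h = 0.5 * (q_comp - q_mono) * (p_mono - c)  # monopoly deadweight loss = 8

print(h)       # 8.0  -- loss from charging too high a price
print(cs + h)  # 16.0 -- loss if the firm never enters at all
```

In this example non-entry destroys twice the surplus that monopoly pricing does, and the gap grows as more of the potential surplus sits in CS.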

Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.

In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.

Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.

Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2005. But, Blockbuster knew its retail world was changing and had thought a consolidation might help it survive that change.

But, just as Antioco can be chided for undervaluing Netflix, so should the FTC. The regulators were so focused on the Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have put off the bankruptcy of the three largest video rental retailers.

Also, what’s not known is the extent to which consumers are better or worse off with the exit of Blockbuster, Hollywood, and Movie Gallery.

Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.

The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.

Source: Benedict Evans

[N]ew combinations are, as a rule, embodied, as it were, in new firms which generally do not arise out of the old ones but start producing beside them; … in general it is not the owner of stagecoaches who builds railways. – Joseph Schumpeter, January 1934

Elizabeth Warren wants to break up the tech giants — Facebook, Google, Amazon, and Apple — claiming they have too much power and represent a danger to our democracy. As part of our response to her proposal, we shared a couple of headlines from 2007 claiming that MySpace had an unassailable monopoly in the social media market.

Tommaso Valletti, the chief economist of the Directorate-General for Competition (DG COMP) of the European Commission, said, in what we assume was a reference to our posts, “they go on and on with that single example to claim that [Facebook] and [Google] are not a problem 15 years later … That’s not what I would call an empirical regularity.”

We appreciate the invitation to show that prematurely dubbing companies “unassailable monopolies” is indeed an empirical regularity.

It’s Tough to Make Predictions, Especially About the Future of Competition in Tech

No one is immune to this phenomenon. Antitrust regulators often take a static view of competition, failing to anticipate dynamic technological forces that will upend market structure and competition.

Scientists and academics make a different kind of error. They are driven by the need to satisfy their curiosity rather than shareholders. Upon inventing a new technology or discovering a new scientific truth, academics often fail to see the commercial implications of their findings.

Maybe the titans of industry don’t make these kinds of mistakes because they have skin in the game? The profit and loss statement is certainly a merciless master. But it does not give CEOs the power of premonition. Corporate executives hailed as visionaries in one era often become blinded by their success, failing to see impending threats to their company’s core value propositions.

Furthermore, it’s often hard as outside observers to tell after the fact whether business leaders just didn’t see a tidal wave of disruption coming or, worse, they did see it coming and were unable to steer their bureaucratic, slow-moving ships to safety. Either way, the outcome is the same.

Here’s the pattern we observe over and over: extreme success in one context makes it difficult to predict how and when the next paradigm shift will occur in the market. Incumbents become less innovative as they get lulled into stagnation by high profit margins in established lines of business. (This is essentially the thesis of Clay Christensen’s The Innovator’s Dilemma).

Even if the anti-tech populists are powerless to make predictions, history does offer us some guidance about the future. We have seen time and again that apparently unassailable monopolists are quite effectively assailed by technological forces beyond their control.

PCs

Source: Horace Dediu

Jan 1977: Commodore PET released

Jun 1977: Apple II released

Aug 1977: TRS-80 released

Feb 1978: “I.B.M. Says F.T.C. Has Ended Its Typewriter Monopoly Study” (NYT)

Mobile

Source: Comscore

Mar 2000: Palm, maker of the Palm Pilot, IPOs at a $53 billion valuation

Sep 2006: “Everyone’s always asking me when Apple will come out with a cellphone. My answer is, ‘Probably never.’” – David Pogue (NYT)

Apr 2007: “There’s no chance that the iPhone is going to get any significant market share.” – Steve Ballmer (USA Today)

Jun 2007: iPhone released

Nov 2007: “Nokia: One Billion Customers—Can Anyone Catch the Cell Phone King?” (Forbes)

Sep 2013: “Microsoft CEO Ballmer Bids Emotional Farewell to Wall Street” (Reuters)

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

Search

Source: Distilled

Mar 1998: “How Yahoo! Won the Search Wars” (Fortune)

Once upon a time, Yahoo! was an Internet search site with mediocre technology. Now it has a market cap of $2.8 billion. Some people say it’s the next America Online.

Sep 1998: Google founded

Instant Messaging

Sep 2000: “AOL Quietly Linking AIM, ICQ” (ZDNet)

AOL’s dominance of instant messaging technology, the kind of real-time e-mail that also lets users know when others are online, has emerged as a major concern of regulators scrutinizing the company’s planned merger with Time Warner Inc. (twx). Competitors to Instant Messenger, such as Microsoft Corp. (msft) and Yahoo! Inc. (yhoo), have been pressing the Federal Communications Commission to force AOL to make its services compatible with competitors’.

Dec 2000: “AOL’s Instant Messaging Monopoly?” (Wired)

Dec 2015: Report for the European Parliament

There have been isolated examples, as in the case of obligations of the merged AOL / Time Warner to make AOL Instant Messenger interoperable with competing messaging services. These obligations on AOL are widely viewed as having been a dismal failure.

Oct 2017: AOL shuts down AIM

Jan 2019: “Zuckerberg Plans to Integrate WhatsApp, Instagram and Facebook Messenger” (NYT)

Retail

Source: Seeking Alpha

May 1997: Amazon IPO

Mar 1998: American Booksellers Association files antitrust suit against Borders, B&N

Feb 2005: Amazon Prime launches

Jul 2006: “Breaking the Chain: The Antitrust Case Against Wal-Mart” (Harper’s)

Feb 2011: “Borders Files for Bankruptcy” (NYT)

Social

Feb 2004: Facebook founded

Jan 2007: “MySpace Is a Natural Monopoly” (TechNewsWorld)

Seventy percent of Yahoo 360 users, for example, also use other social networking sites — MySpace in particular. Ditto for Facebook, Windows Live Spaces and Friendster … This presents an obvious, long-term business challenge to the competitors. If they cannot build up a large base of unique users, they will always be on MySpace’s periphery.

Feb 2007: “Will Myspace Ever Lose Its Monopoly?” (Guardian)

Jun 2011: “Myspace Sold for $35m in Spectacular Fall from $12bn Heyday” (Guardian)

Music

Source: RIAA

Dec 2003: “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.” – Steve Jobs (Rolling Stone)

Apr 2006: Spotify founded

Jul 2009: “Apple’s iPhone and iPod Monopolies Must Go” (PC World)

Jun 2015: Apple Music announced

Video

Source: OnlineMBAPrograms

Apr 2003: Netflix reaches one million subscribers for its DVD-by-mail service

Mar 2005: FTC blocks Blockbuster/Hollywood Video merger

Sep 2006: Amazon launches Prime Video

Jan 2007: Netflix streaming launches

Oct 2007: Hulu launches

May 2010: Hollywood Video’s parent company files for bankruptcy

Sep 2010: Blockbuster files for bankruptcy

The Only Winning Move Is Not to Play

Predicting the future of competition in the tech industry is such a fraught endeavor that even articles about how hard it is to make predictions include incorrect predictions. The authors just cannot help themselves. A March 2012 BBC article “The Future of Technology… Who Knows?” derided the naysayers who predicted doom for Apple’s retail store strategy. Its kicker?

And that is why when you read that the Blackberry is doomed, or that Microsoft will never make an impression on mobile phones, or that Apple will soon dominate the connected TV market, you need to take it all with a pinch of salt.

But BlackBerry was doomed, and Microsoft never made an impression on mobile phones. (Half credit for Apple TV, which currently has a 15% market share).

Nobel Prize-winning economist Paul Krugman wrote a piece for Red Herring magazine (seriously) in June 1998 with the title “Why most economists’ predictions are wrong.” Headline be damned, near the end of the article he made the following prediction:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.
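For concreteness, the “square” in Metcalfe’s law comes from counting potential pairwise connections: among n participants there are n(n − 1)/2 unordered pairs, which grows on the order of n². A minimal sketch (the function name and sample sizes are ours, purely for illustration) shows the quadratic growth Krugman was discounting:

```python
def potential_connections(n: int) -> int:
    """Metcalfe's law: count the unordered pairs among n network participants."""
    return n * (n - 1) // 2

# Each tenfold increase in participants yields roughly a hundredfold
# increase in potential connections.
for n in (10, 100, 1000):
    print(n, potential_connections(n))  # 45, 4950, 499500
```

Krugman’s point was that most of those potential connections are worthless; the network-effects counterargument is that even a small fraction of n² swamps linear growth.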

Robert Metcalfe himself predicted in a 1995 column that the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” After pledging to “eat his words” if the prediction did not come true, “in front of an audience, he put that particular column into a blender, poured in some water, and proceeded to eat the resulting frappe with a spoon.”

A Change Is Gonna Come

Benedict Evans, a venture capitalist at Andreessen Horowitz, has the best summary of why competition in tech is especially difficult to predict:

IBM, Microsoft and Nokia were not beaten by companies doing what they did, but better. They were beaten by companies that moved the playing field and made their core competitive assets irrelevant. The same will apply to Facebook (and Google, Amazon and Apple).

Elsewhere, Evans tried to reassure his audience that we will not be stuck with the current crop of tech giants forever:

With each cycle in tech, companies find ways to build a moat and make a monopoly. Then people look at the moat and think it’s invulnerable. They’re generally right. IBM still dominates mainframes and Microsoft still dominates PC operating systems and productivity software. But… It’s not that someone works out how to cross the moat. It’s that the castle becomes irrelevant. IBM didn’t lose mainframes and Microsoft didn’t lose PC operating systems. Instead, those stopped being ways to dominate tech. PCs made IBM just another big tech company. Mobile and the web made Microsoft just another big tech company. This will happen to Google or Amazon as well. Unless you think tech progress is over and there’ll be no more cycles … It is deeply counter-intuitive to say ‘something we cannot predict is certain to happen’. But this is nonetheless what’s happened to overturn pretty much every tech monopoly so far.

If this time is different — or if there are more false negatives than false positives in the monopoly prediction game — then the advocates for breaking up Big Tech should try to make that argument instead of falling back on “big is bad” rhetoric. As for us, we’ll bet that we have not yet reached the end of history — tech progress is far from over.


The gist of these arguments is simple: the Amazon/Whole Foods merger would lead to the exclusion of competitors, with Amazon leveraging its swaths of data and pricing below cost. All of this raises a simple question: have these prophecies come to pass?

The problem with antitrust populism is not just that it leads to unfounded predictions about the negative effects of a given business practice. It also ignores the significant gains that consumers may reap from those practices. The Amazon/Whole Foods merger offers a case in point.
