By Thomas Hazlett

The Apple e-books case is a throwback to Dr. Miles, the 1911 Supreme Court decision that managed to misinterpret the economics of competition and so thwart productive activity for over a century. The active debate here at TOTM reveals why.

The District Court and Second Circuit have employed a per se rule to find that the Apple e-books agreement with five major publishers constituted a violation of Section 1 of the Sherman Act. Citing the active cooperation in contract negotiations involving multiple horizontal competitors (publishers) and the Apple offer, which appears to have raised prices paid for e-books, some regard the conclusion that this is a case of horizontal collusion as a slam dunk. “Try as one may,” writes Jonathan Jacobson, “it is hard to find an easier antitrust case than United States v. Apple.”

I’m guessing that that is what Charles Evans Hughes thought about the Dr. Miles case in 1911.

Upon scrutiny, the apparent simplicity in either instance evaporates. Dr. Miles has since been revised by GTE Sylvania, Leegin, and (thanks, Keith Hylton) Business Electronics v. Sharp Electronics. Let’s look here at the pending Apple dispute.

First, not only was the Second Circuit verdict a split decision on application of the per se rule; the dissent also ably stated a case for why the Apple e-books deal should be regarded as pro-competitive and, thus, legal.

Second, the price increase cited as determinative occurred in a two-sided market; the fact asserted does not establish a monopolistic restriction of output. Further analysis, as called for under the rule of reason, is needed to flesh out the totality of the circumstances and the net impact of the Apple-publisher agreement on consumer welfare. That includes evidence regarding what happens to total revenues as market structure and prices change.

Third, a new entrant emerged as a result of the actions undertaken — the agreements pointedly did not “lack … any redeeming virtue” (Northwest Wholesale Stationers, 1985), the justification for per se illegality. The fact that a new platform — Apple challenging Amazon’s e-book dominance — was both cause and effect of the alleged anti-competitive behavior is a textbook example of ancillarity. The “naked restraints” that publishers might have imposed had Apple not brought new products and alternative content distribution channels into the mix were thus dressed up. It is argued by some that the clothes were skimpy. But that fashion statement is what a rule of reason analysis is needed to determine.

Fourth, the successful market foray that came about in the two-sided e-book market is a competitive victory not to be trifled with. As the Supreme Court determined in Leegin: A “per se rule cannot be justified by the possibility of higher prices absent a further showing of anticompetitive conduct. The antitrust laws are designed to protect interbrand competition from which lower prices can later result.” The Supreme Court here needs to overturn U.S. v. Apple as decided by the Second Circuit in order that the “later result” be reasonably examined.

Fifth, lock-in is avoided with a rule of reason. As the Supreme Court said in Leegin:

As courts gain experience considering the effects of these restraints by applying the rule of reason… they can establish the litigation structure to ensure the rule operates to eliminate anticompetitive restraints….

The lock-in, conversely, comes with per se rules that nip the analysis in the bud, assuming simplicity where complexity obtains.

Sixth, Judge Denise Cote, who issued the District Court ruling against Apple, shows why the rule of reason is needed to counter her per se approach:

Here we have every necessary component: with Apple’s active encouragement and assistance, the Publisher Defendants agreed to work together to eliminate retail price competition and raise e-book prices, and again with Apple’s knowing and active participation, they brought their scheme to fruition.

But that cannot be “every necessary component.” It is not in Apple’s interest to raise prices, but to lower prices paid. Something more has to be going on. Indeed, in citing higher prices the judge unwittingly identifies an unarguable pro-competitive aspect of Apple’s foray: It is competing with Amazon and bidding resources away from a rival. Indeed, the rival is, arguably, an incumbent with market power. This cannot be the end of the analysis. That it is constitutes a throwback to the anti-competitive per se rule of Dr. Miles.

Seventh, in oral arguments at the Second Circuit, Judge Raymond J. Lohier, Jr. directed a question to Justice Department counsel, asking how Apple and the publishers “could have broken Amazon’s monopoly of the e-book market without violating antitrust laws.” The DOJ attorney responded, according to an article in The New Yorker, by advising that

Apple could have let the competition among companies play out naturally without pursuing explicit strategies to push prices higher—or it could have sued, or complained to the Justice Department and to federal regulatory authorities.

But the DOJ itself brought no complaint against Amazon — it, instead, sued Apple. And the admonition that an aggressive innovator should sit back and let things “play out naturally” is exactly what will kill efficiency-enhancing “creative destruction.” Moreover, the government’s view that Apple “pursued an explicit strategy to push prices higher” fails to acknowledge that Apple was the buyer. Such as it was, Apple’s effort was to compete, luring content suppliers from a rival. The response of the government is to recommend, on the one hand, litigation it will not itself pursue and, on the other, passive acceptance that avoids market disruption. It displays the error, as Judge Jacobs’ Second Circuit dissent puts it, “that antitrust law is offended by gloves-off competition.” Why might innovation not be well served by this policy?

Eighth, the choice of rule of reason does not let Apple escape scrutiny, but applies it to both sides of the argument. It adds important policy symmetry. Dr. Miles impeded efficient market activity for nearly a century. The creation of new platforms in Internet markets ought not to have such handicaps. It should be recalled that, in introducing its iTunes platform and its vertically linked iPod music players, circa 2002, the innovative Apple likewise faced attack from competition policy makers (more in Europe, indeed, than the U.S.). Happily, progress in the law had loosened barriers to business model innovation, and the revolutionary ecosystem was allowed to launch. Key to that progressive step was the bulk bargain struck with music labels. Richard Epstein thinks that such industry-wide dealing now endangers Apple’s more recent platform launch. Perhaps. But there is no reason to jump to that conclusion, and much to find out before we embrace it.

Dan Mitchell is the co-founder of the Center for Freedom and Prosperity.

In an ideal world, the discussion and debate about how (or if) to tax e-cigarettes, heat-not-burn, and other tobacco harm-reduction products would be guided by science. Policy makers would confer with experts, analyze evidence, and craft prudent and sensible laws and regulations.

In the real world, however, politicians are guided by other factors.

There are two things to understand, both of which are based on my conversations with policy staff in Washington and elsewhere.

First, this is a battle over tax revenue. Politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.

This is very much akin to the concern that electric cars and fuel-efficient cars will lead to a loss of money from excise taxes on gasoline.

In the case of fuel taxes, politicians are anxiously looking at other sources of revenue, such as miles-driven levies. Their main goal is to maintain – or preferably increase – the amount of money that is diverted to the redistributive state so that politicians can reward various interest groups.

In the case of tobacco, a reduction in the number of smokers (or the tax-driven propensity of smokers to seek out black-market cigarettes) is leading politicians to concoct new schemes for taxing e-cigarettes and related non-combustible products.

Second, this is a quasi-ideological fight. Not about capitalism versus socialism, or big government versus small government. It’s basically a fight over paternalism, or a battle over goals.

For all intents and purposes, the question is this: Should lawmakers seek simultaneously to discourage both tobacco use and vaping because both carry some risk (and perhaps because both are considered vices for the lower classes)? Or should they welcome vaping since it leads to harm reduction as smokers shift to a dramatically safer way of consuming nicotine?

In statistics, researchers recognize the dangers of two types of mistakes: Type I errors (also known as “false positives”) and Type II errors (also known as “false negatives”).

How does this relate to smoking, vaping, and taxes?

Simply stated, both sides of the fight are focused on a key goal and secondary issues are pushed aside. In other words, tradeoffs are being ignored.

The advocates of high taxes on e-cigarettes and other non-combustible products are fixated on the possibility that vaping will entice some people into the market. Maybe vaping will even act as a gateway to smoking. So, they want high taxes on vaping, akin to high taxes on tobacco, even though the net result is that many smokers stick with cigarettes instead of switching to less harmful products.

On the other side of the debate are those focused on overall public health. They see emerging non-combustible products as very effective ways of promoting harm reduction. Is it possible that e-cigarettes may be tempting to some people who otherwise would never try tobacco? Yes, that’s possible, but it’s easily offset by the very large benefits that accrue as smokers become vapers.

For all intents and purposes, the fight over the taxation of vaping is similar to other ideological fights.

The old joke in Washington is that a conservative is someone who will jail 99 innocent people in order to put one crook in prison and a liberal is someone who will free 99 guilty people to prevent one innocent person from being convicted (or, if you prefer, a conservative will deny 99 poor people to catch one welfare fraudster and a liberal will line the pockets of 99 fraudsters to make sure one genuinely poor person gets money).

The vaping fight hasn’t quite reached this stage, but the battle lines are very familiar. At some point in the future, observers may joke that one side is willing to accept more smoking if one teenager forgoes vaping while the other side is willing to have lots of vapers if it means one less smoker.

Having explained the real drivers of this debate, I’ll close by injecting my two cents and explaining why the paternalists are wrong. But rather than focus on libertarian-type arguments about personal liberty, I’ll rely on three points, all of which are based on conventional cost-benefit analysis and the sensible approach to excise taxation.

  • First, tax policy should focus on incentivizing a switch, not punishing those who choose less harmful products. The goal should be harm reduction rather than revenue maximization.
  • Second, low tax burdens also translate into lower long-run spending burdens because a shift to vaping means a reduction in overall healthcare costs related to smoking cigarettes.
  • Third, it makes no sense to impose punitive “sin taxes” on behaviors that are much less, well, sinful. There’s a big difference in the health and fiscal impact of cigarettes compared to the alternatives.

One final point is that this issue has a reverse-class-warfare component. Anti-smoking activists have largely succeeded in stigmatizing cigarette consumption, and smokers now come disproportionately from lower-income communities. For better (harm reduction) or worse (elitism), low-income smokers are generally treated with disdain for their lifestyle choices.

It is not an explicit policy, but that disdain now seems to extend to any form of nicotine consumption, even though the health effects of vaping are vastly lower.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Steve Cernak, (Partner, Bona Law).]

The antitrust laws have not been suspended during the current COVID-19 crisis. But based on questions received from clients plus others discussed with other practitioners, the changed economic conditions have raised some new questions and put a new slant on some old ones. 

Under antitrust law’s flexible rule of reason standard, courts and enforcers consider the competitive effect of most actions under current and expected economic conditions. Because those conditions have changed drastically, at least temporarily, perhaps the antitrust assessments of certain actions will be different. Also, in a crisis, good businesses consider new options and reconsider others that had been rejected under the old conditions. So antitrust practitioners and enforcers need to be prepared for new questions and reconsiderations of others under new facts. Here are some that might cross their desks.

Benchmarking

Benchmarking had its antitrust moment a few years ago as practitioners discovered and began to worry about this form of communication with competitors. Both before and since then, the comparison of processes and metrics to industry bests to determine where improvement efforts should be concentrated has not raised serious antitrust issues – if done properly. Appropriate topic choice and implementation, often involving counsel review and third-party collection, should stay the same during this crisis. Companies implementing new processes might be tempted to reach out to competitors to learn best practices. Any of those companies unfamiliar with the right way to benchmark should get up to speed. Counsel must be prepared to help clients quickly, but properly, benchmark some suddenly important activities, like methods for deep-cleaning workplaces.

Joint ventures

Joint ventures in which competitors work together to accomplish a task that neither could accomplish alone, or could accomplish only less efficiently, have always received a receptive antitrust review. Often, those joint efforts have been temporary. Properly structured ones have always required the companies to remain competitors outside the joint venture. Joint efforts among competitors that did not make sense before the crisis might make perfect sense during it. For instance, a company whose distribution warehouse has been shut down by a shelter-in-place order might be able to use a competitor’s distribution assets to continue to get goods to the market.

Some joint ventures of competitors have received special antitrust assurances for decades. The National Cooperative Research and Production Act of 1993 was originally passed in 1984 to protect research joint ventures of competitors. It was later extended to certain joint production efforts and standard development organizations. The law confirms that certain joint ventures of competitors will be judged under the rule of reason. If the parties file a very short notice with the DOJ Antitrust Division and FTC, they also will receive favorable treatment regarding damages and attorney’s fees in any antitrust lawsuit. For example, competitors cooperating on the development of new virus treatments might be able to use NCRPA to protect joint research and even production of the cure. 

Mergers

Horizontal mergers that permanently combine the assets of two competitors are unlikely to be justified under the antitrust laws by small transitory blips in the economic landscape. A huge crisis, however, might be so large and create such long-lasting effects that certain mergers suddenly might make sense, both on business and antitrust grounds. That rationale was used during the most recent economic crisis to justify several large mergers of banks although other large industrial mergers considered at the same time were abandoned for various reasons. It is not yet clear if that reasoning is present in any industry now. 

Remote communication among competitors

On a much smaller but more immediate scale, the new forms of communication being used while so many of us are physically separated have raised questions about the usual antitrust advice regarding communication with competitors. Antitrust practitioners have long advised clients about how to prepare for and conduct an in-person meeting of competitors, say at a trade association convention. That same advice would seem to apply if, with the in-person convention cancelled, the meeting is held via Teams or Zoom. And don’t forget: the reminders that the same rules apply to the cocktail party at the bar after the meeting should also be given for the virtual version conducted via Remo.co.

Pricing and brand management

Since at least the time when the Dr. Miles Medical Co. was selling its “restorative nervine,” manufacturers have been concerned about how their products were resold by retailers. Antitrust law has provided manufacturers considerable freedom for some time to impose non-price restraints on retailers to protect brand reputations; however, manufacturers must consider and impose those restraints before a crisis hits. For instance, a “no sale for resale” provision in place before the crisis would give a manufacturer of hand sanitizer another tool to use now to try to prevent bulk sales of the product that will be immediately resold on the street. 

Federal antitrust law has provided manufacturers considerable freedom to impose maximum price restraints. Even the states whose laws prevent minimum price restraints do not seem as concerned about maximum ones. But again, if a manufacturer is concerned that some consumer will blame it, not just the retailer, for a sudden skyrocketing price for a product in short supply, some sort of restraints must be in place before the crisis. Certain platforms are invoking their standard policies to prevent such actions by resellers on their platforms. 

Regulatory hurdles

While antitrust law is focused on actions by private parties that might prevent markets from properly working to serve consumers, the same rationales apply to unnecessary government interference in the market. The current health crisis has turned the spotlight back on certificate-of-need laws, a form of “brother may I?” government regulation that can allow current competitors to stifle entry by new competitors. Similarly, regulations that have slowed the use of telemedicine have been at least temporarily waived.

Conclusion

Solving the current health crisis and rebuilding the economy will take the best efforts of both our public institutions and private companies. Antitrust law as currently written and enforced can and should continue to play a role in aligning incentives so we need not rely on “the benevolence of the butcher” for our dinner and other necessities. Instead, proper application of antitrust law can allow companies to do their part to (reviving a slogan helpful in a prior national crisis) keep America rolling.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.

“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”

Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.

They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”

An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”

New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us.

Facebook, The New Republic declares, “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”

Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”

And then there’s Amazon. It’s too efficient. Alex Salkever worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”

Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.

Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives” and “strengthen society.”

As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.

The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forego using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.

Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten-thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”

One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games might broaden their attention spans and improve their abstract thinking.

Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.

Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.

We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.

One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.

There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.

Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)

The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.

In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned and priggish.

In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”

There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.

Will Rinehart is the Director of Technology and Innovation Policy at the American Action Forum.

When Amazon and Whole Foods announced they were pursuing a $13.7 billion deal in June of last year, the grocer was struggling. Whole Foods might have been an early leader in organic food, but with that success came competition. Walmart began selling organics in 2014, while Costco and Safeway revamped product lines for changing consumer preferences. By 2017, same-store sales at Whole Foods had declined for the previous two years, and Barclays estimated that the company had lost about 3 percent of its foot traffic between 2015 and 2017. Simply put, Whole Foods wasn’t well managed enough to compete.

Those against the merger aligned in opposition to Amazon’s expansion into the retail space. In the New York Times, Lina Khan argued that “Buying Whole Foods will enable Amazon to leverage and amplify the extraordinary power it enjoys in online markets and delivery, making an even greater share of commerce part of its fief.” Marshall Steinbaum, fellow and research director at the Roosevelt Institute, suggested that “Whole Food’s supply chain will enable Amazon to strengthen its online retail monopoly.” And Stacy Mitchell, co-director of the Institute for Local Self-Reliance, charged Amazon with ambitions to “control the underlying infrastructure of the economy.”

Yet, it wasn’t that Amazon was trying to expand its “retail monopoly” reach or lay the groundwork to “control the underlying infrastructure of the economy.” Rather, the company was trying to “invigorate its nearly decadelong push into groceries” and saw Whole Foods as the best way to jump into the market. The lasting legacy of the merger won’t be in foreclosing competition in the grocery space but in opening a new front in it.

Consider how the entire market has shifted since the deal was finalized:

  • In September 2017, Albertsons Companies, which owns over 2,200 grocery stores, acquired the prepared meal startup Plated.
  • On October 3, 2017, Walmart bought Parcel, “a technology based, same-day and last-mile delivery company specializing in perishable and non-perishable delivery to customers in New York City.”
  • In December of 2017, Target acquired Shipt, a same-day delivery service platform, for $550 million in one of the company’s largest deals.
  • In mid April of 2018, Walmart announced a deal with Postmates to extend its online delivery to more than 100 metro areas.
  • In May, Kroger announced that it had tapped the UK-based Ocado to pack online orders using the Ocado Smart Platform, which is staffed by robots.
  • Earlier this month, Walmart announced the launch of Alphabot, an automation system developed in collaboration with Alert Innovation to help fill orders.
  • And just last Tuesday, the online seller of bulk items, Boxed, revealed that it had closed a funding round that totalled $111 million.

These high-profile deals have been coupled with high-profile investments as well. InVia Robotics, a startup that provides fulfillment centers with automated robotics technology, raised $20 million in a series B round. French startup Exotec raised $17.7 million after debuting an automated robot called the Skypod to help with e-commerce warehouses. Bossa Nova Robotics closed $29 million to help expand its robots in grocery stores and large retailers. And Albertsons Companies created a fund with venture capital firm Greycroft to invest in the grocery sector.

Did the Amazon deal kick off this M&A activity? Josh Hix, chief executive of Plated, thinks so. As he told the New York Times, “The pace of follow-ups went from ‘This is interesting, and we’ll be in New York again in five months’ to ‘This is really interesting, and how’s tomorrow at 9 a.m. look for another call?’”

Indeed, what could not be predicted, but what seems to have happened, was a grocery tech M&A wave. The Amazon-Whole Foods deal brought with it an expectation of a coming technological shock to the industry. While we are still at the early stages of it, traditional retail outlets took the entry as a sign of Amazon’s willingness to invest in technology undergirding grocery stores. It might take a decade or so to be profitable, but retail logistics will soon be powered by robots.

Three broad lessons are worth heeding.

First, few were focusing on the trajectory of the market and the potential for consumers to win, but it could prove the most important echo of this deal. Grocery shopping remains a holdout to online retail, and for good reason. Food is perishable, fragile, and heavy. Products require different temperatures and need varying levels of care when handled. The margins are thin but the fixed costs are high. Getting delivery prices low enough, and delivery at the right time, to match the cost of an in-person grocery experience is just difficult. But if any company is able to crack that nut, then it will be a boon for customers.

Remember, the productivity growth experienced in the US from 1995 to 2000 was largely caused by retail improvements. In a highly cited report on the topic, McKinsey Global Institute singled out the retail sector’s logistical advancements for contributing a fourth of that growth. Of that sector, Walmart accounted for a sixth of productivity gains. Jason Furman even wrote in 2005 that, “There is little dispute that Wal-Mart’s price reductions have benefited the 120 million American workers employed outside of the retail sector,” calling the chain a “progressive success story.” If the Amazon-Whole Foods deal was able to kick off a new wave of innovation, it would be another progressive success story.         

Second, the negativity that surrounded the deal at its announcement made Whole Foods seem like an innocent player, but it is important to recall that they were hemorrhaging and were looking to exit. Throughout the 2010s, the company lost its market leading edge as others began to offer the same kinds of services and products. Still, the company was able to sell near the top of its value to Amazon because it was able to court so many suitors. Given all of these features, Whole Foods could have been using the exit as a mechanism to appropriate another firm’s rent.

Finally, this deal reiterates the need for regulatory humility. Almost immediately after the Amazon-Whole Foods merger was closed, prices at the store dropped and competitors struck a flurry of deals. Investments continue, and many in the grocery retail space are bracing for a wave of enhancement to take hold. Even some of the fiercest critics of the deal will have to admit there is a lot of uncertainty. It is unclear what business model will make the most sense in the long run, how these technologies will ultimately become embedded into production processes, and how consumers will benefit. Combined, these features underscore the difficulty, but also the necessity, of implementing dynamic insights into antitrust institutions.

Retrospectives like this symposium offer a chance to understand what the discussion missed at the time and what is needed to better understand innovation and competition in markets. While it might be too soon to close the book on this case, the impact can already be felt in the positions others are taking in response. In the end, the deal probably won’t be remembered for extending Amazon’s dominance into another market because that is a phantom concern. Rather, it will probably be best remembered as the spark that drove traditional retail outlets to modernize their logistics and fulfillment efforts.  

This morning a diverse group of more than 75 academics, scholars, and civil society organizations — including ICLE and several of its academic affiliates — published a set of seven “Principles for Lawmakers” on liability for user-generated content online, aimed at guiding discussions around potential amendments to Section 230 of the Communications Decency Act of 1996.

I have reproduced the principles below, and they are available online (along with the list of signatories) here.

Section 230 holds those who create content online responsible for that content and, controversially today, protects online intermediaries from liability for content generated by third parties except in specific circumstances. Advocates on both the political right and left have recently begun to argue more ardently for the repeal or at least reform of Section 230.

There is always good reason to consider whether decades-old laws, especially those aimed at rapidly evolving industries, should be updated to reflect both changed circumstances as well as new learning. But discussions over whether and how to reform (or repeal) Section 230 have, thus far, offered far more heat than light. 

Indeed, later today President Trump will hold a “social media summit” at the White House to which he has apparently invited a number of right-wing political firebrands — but no Internet policy experts or scholars and no representatives of social media firms. Nothing about the event suggests it will produce — or even aim at — meaningful analysis of the issues. Trump himself has already concluded about social media platforms that “[w]hat they are doing is wrong and possibly illegal.” On the basis of that (legally baseless) conclusion, “a lot of things are being looked at right now.” This is not how good policy decisions are made. 

The principles we published today are intended to foster an environment in which discussion over these contentious questions may actually be fruitfully pursued. But they also sound a cautionary note. As we write in the preamble to the principles:

[W]e value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform. 

As the diversity of signatories to these principles demonstrates, there is room for reasoned discussion among Section 230 advocates and skeptics alike. For some, these principles represent a significant move from their initial, hard-line positions, undertaken in the face of the very real risk that if they give an inch others will latch on to their concessions to take a mile. They should be commended for their willingness nevertheless to engage seriously with the issues — and, challenged with de-politicized, sincere, and serious discussion, to change their minds. 

Everyone who thinks seriously about the issues implicated by Section 230 can — or should — agree that it has been instrumental in the birth and growth of the Internet as we know it: both the immense good and the unintended bad. That recognition does not lead inexorably to one conclusion or another regarding the future of Section 230, however. 

Ensuring that what comes next successfully confronts the problems without negating the benefits starts with the recognition that both costs and benefits exist, and that navigating the trade-offs is a fraught endeavor that absolutely will not be accomplished by romanticized assumptions and simplistic solutions. Efforts to update Section 230 should not deny its past successes, ignore reasonable concerns, or co-opt the process to score costly political points.

But that’s just the start. It’s incumbent upon those seeking to reform Section 230 to offer ideas and proposals that reflect the reality and the complexity of the world they seek to regulate. The basic principles we published today offer a set of minimum reasonable guardrails for those efforts. Adherence to these principles would allow plenty of scope for potential reforms while helping to ensure that they don’t throw out the baby with the bathwater. 

Accepting, for example, the reality that a “neutral” presentation of content is impossible (Principle #4) and that platform moderation is complicated and also essential for civil discourse (Principle #3) means admitting that a preferred standard of content moderation is just that — a preference. It may be defensible to impose that preference on all platforms by operation of law, and these principles do not obviate such a position. But they do demand that such an opinion be rigorously defended. It is insufficient simply to call for “neutrality” or any other standard (“all ideological voices must be equally represented,” e.g.) without making a valid case for what would be gained and why the inevitable costs and trade-offs would nevertheless be worthwhile. 

All of us who drafted and/or signed these principles are willing to come to the table to discuss good-faith efforts to reform Section 230 that recognize and reasonably account for such trade-offs. It remains to be seen whether we can wrest the process from those who would use it to promote their static, unrealistic, and politicized vision at the expense of everyone else.


Liability for User-Generated Content Online:

Principles for Lawmakers

Policymakers have expressed concern about both harmful online speech and the content moderation practices of tech companies. Section 230, enacted as part of the bipartisan Communications Decency Act of 1996, says that Internet services, or “intermediaries,” are not liable for illegal third-party content except with respect to intellectual property, federal criminal prosecutions, communications privacy (ECPA), and sex trafficking (FOSTA). Of course, Internet services remain responsible for content they themselves create. 

As civil society organizations, academics, and other experts who study the regulation of user-generated content, we value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall. We hope the following principles help any policymakers considering amendments to Section 230. 

Principle #1: Content creators bear primary responsibility for their speech and actions.

Content creators — including online services themselves — bear primary responsibility for their own content and actions. Section 230 has never interfered with holding content creators liable. Instead, Section 230 restricts only who can be liable for the harmful content created by others.

Law enforcement online is as important as it is offline. If policymakers believe existing law does not adequately deter bad actors online, they should (i) invest more in the enforcement of existing laws, and (ii) identify and remove obstacles to the enforcement of existing laws. Importantly, while anonymity online can certainly constrain the ability to hold users accountable for their content and actions, courts and litigants have tools to pierce anonymity. And in the rare situation where truly egregious online conduct simply isn’t covered by existing criminal law, the law could be expanded. But if policymakers want to avoid chilling American entrepreneurship, it’s crucial to avoid imposing criminal liability on online intermediaries or their executives for unlawful user-generated content.

Principle #2: Any new intermediary liability must not target constitutionally protected speech. 

The government shouldn’t require — or coerce — intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship — or even avoid offering speech forums altogether. 

Principle #3: The law shouldn’t discourage Internet services from moderating content. 

To flourish, the Internet requires that site managers have the ability to remove legal but objectionable content — including content that would be protected under the First Amendment from censorship by the government. If Internet services could not prohibit harassment, pornography, racial slurs, and other lawful but offensive or damaging material, they couldn’t facilitate civil discourse. Even when Internet services have the ability to moderate content, their moderation efforts will always be imperfect given the vast scale of even relatively small sites and the speed with which content is posted. Section 230 ensures that Internet services can carry out this socially beneficial but error-prone work without exposing themselves to increased liability; penalizing them for imperfect content moderation or second-guessing their decision-making will only discourage them from trying in the first place. This vital principle should remain intact.

Principle #4: Section 230 does not, and should not, require “neutrality.” 

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, deprioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content. 

Principle #5: We need a uniform national legal standard. 

Most Internet services cannot publish content on a state-by-state basis, so state-by-state variations in liability would force compliance with the most restrictive legal standard. In its current form, Section 230 prevents this dilemma by setting a consistent national standard — which includes potential liability under the uniform body of federal criminal law. Internet services, especially smaller companies and new entrants, would find it difficult, if not impossible, to manage the costs and legal risks of facing potential liability under state civil law, or of bearing the risk of prosecution under state criminal law. 

Principle #6: We must continue to promote innovation on the Internet.

Section 230 encourages innovation in Internet services, especially by smaller services and start-ups who need the most protection from potentially crushing liability. The law must continue to protect intermediaries not merely from liability, but from having to defend against excessive, often-meritless suits — what one court called “death by ten thousand duck-bites.” Without such protection, compliance, implementation, and litigation costs could strangle smaller companies before they even emerge, while larger, incumbent technology companies would be much better positioned to absorb these costs. Any proposal to reform Section 230 that is calibrated to what might be possible for the Internet giants will necessarily mis-calibrate the law for smaller services.

Principle #7: Section 230 should apply equally across a broad spectrum of online services.

Section 230 applies to services that users never interact with directly. The further removed an Internet service — such as a DDOS protection provider or domain name registrar — is from an offending user’s content or actions, the more blunt its tools to combat objectionable content become. Unlike social media companies or other user-facing services, infrastructure providers cannot take measures like removing individual posts or comments. Instead, they can only shutter entire sites or services, thus risking significant collateral damage to inoffensive or harmless content. Requirements drafted with user-facing services in mind will likely not work for these non-user-facing services.

I had the pleasure last month of hosting the first of a new annual roundtable discussion series on closing the rural digital divide through the University of Nebraska’s Space, Cyber, and Telecom Law Program. The purpose of the roundtable was to convene a diverse group of stakeholders — from farmers to federal regulators; from small municipal ISPs to billion dollar app developers — for a discussion of the on-the-ground reality of closing the rural digital divide.

The impetus behind the roundtable was, quite simply, that in my five years living in Nebraska I have consistently found that the discussions that we have here about the digital divide in rural America are wholly unlike those that the federally-focused policy crowd has back in DC. Every conversation I have with rural stakeholders further reinforces my belief that those of us who approach the rural digital divide from the “DC perspective” fail to appreciate the challenges that rural America faces or the drive, innovation, and resourcefulness that rural stakeholders bring to the issue when DC isn’t looking. So I wanted to bring these disparate groups together to see what was driving this disconnect, and what to do about it.

The unfortunate reality of the rural digital divide is that it is an existential concern for much of America. At the same time, the positive news is that closing this divide has become an all-hands-on-deck effort for stakeholders in rural America, one that defies caricatured political, technological, and industry divides. I have never seen as much agreement and goodwill among stakeholders in any telecom community as when I speak to rural stakeholders about digital divides. I am far from an expert in rural broadband issues — and I don’t mean to hold myself out as one — but as I have engaged with those who are, I am increasingly convinced that there are far more and far better ideas about closing the rural digital divide to be found outside the beltway than within.

The practical reality is that most policy discussions about the rural digital divide over the past decade have been largely irrelevant to the realities on the ground: The legal and policy frameworks focus on the wrong things, and participants in these discussions at the federal level rarely understand the challenges that define the rural divide. As a result, stakeholders almost always fall back on advocating stale, entrenched, viewpoints that have little relevance to the on-the-ground needs. (To their credit, both Chairman Pai and Commissioner Carr have demonstrated a longstanding interest in understanding the rural digital divide — an interest that is recognized and appreciated by almost every rural stakeholder I speak to.)

Framing Things Wrong

It is important to begin by recognizing that contemporary discussion about the digital divide is framed in terms of, and addressed alongside, longstanding federal Universal Service policy. This policy, which has its roots in the 20th century project of ensuring that all Americans had access to basic telephone service, is enshrined in the first words of the Communications Act of 1934. It has not significantly evolved from its origins in the analog telephone system — and that’s a problem.

A brief history of Universal Service

The Communications Act established the FCC

for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States … a rapid, efficient, Nation-wide, and world-wide wire and radio communication service ….

The historic goal of “universal service” has been to ensure that anyone in the country is able to connect to the public switched telephone network. In the telephone age, that network provided only one primary last-mile service: transmitting basic voice communications from the customer’s telephone to the carrier’s switch. Once at the switch various other services could be offered — but providing them didn’t require more than a basic analog voice circuit to the customer’s home.

For most of the 20th century, this form of universal service was ensured by fiat and cost recovery. Regulated telephone carriers (that is, primarily, the Bell operating companies under the umbrella of AT&T) were required by the FCC to provide service to all comers, at published rates, no matter the cost of providing that service. In exchange, the carriers were allowed to recover the cost of providing service to high-cost areas through the regulated rates charged to all customers. That is, the cost of ensuring universal service was spread across and subsidized by the entire rate base.

This system fell apart following the break-up of AT&T in the 1980s. The separation of long distance from local exchange service meant that the main form of cross subsidy — from long distance to local callers — could no longer be handled implicitly. Moreover, as competitive exchange services began entering the market, they tended to compete first, and most, over the high-revenue customers who had supported the rate base. To accommodate these changes, the FCC transitioned from a model of implicit cross-subsidies to one of explicit cross-subsidies, introducing long distance access charges and termination fees that were regulated to ensure money continued to flow to support local exchange carriers’ costs of providing services to high-cost users.

The 1996 Telecom Act forced even more dramatic change. The goal of the 1996 Telecom Act was to introduce competition throughout the telecom ecosystem — but the traditional cross-subsidy model doesn’t work in a competitive market. So the 1996 Telecom Act further evolved the FCC’s universal service mechanism, establishing the Universal Service Fund (USF), funded by fees charged to all telecommunications carriers, which would be apportioned to cover the costs incurred by eligible telecommunications carriers in providing high-cost (and other “universal”) services.

The problematic framing of Universal Service

For present purposes, we need not delve into these mechanisms. Rather, the very point of this post is that the interminable debates about these mechanisms — who pays into the USF and how much; who gets paid out of the fund and how much; and what services and technologies the fund covers — simply don’t match the policy challenges of closing the digital divide.

What the 1996 Telecom Act does offer is a statement of the purposes of Universal Service. In 47 USC 254(b)(3), the Act states the purpose of ensuring “Access in rural and high cost areas”:

Consumers in all regions of the Nation, including low-income consumers and those in rural, insular, and high cost areas, should have access to telecommunications and information services … that are reasonably comparable to those services provided in urban areas ….

This is a problematic framing. (I would actually call it patently offensive…). It is a framing that made sense in the telephone era, when ensuring last-mile service meant providing only basic voice telephone service. In that era, having any service meant having all service, and the primary obstacles to overcome were the high-cost of service to remote areas and the lower revenues expected from lower-income areas. But its implicit suggestion is that the goal of federal policy should be to make rural America look like urban America.

Today, however, universal service, at least from the perspective of closing the digital divide, means something different. The technological needs of rural America are different from those of urban America; the technological needs of poor and lower-income America are different from those of rich America. Framing the goal in terms of making sure rural and lower-income America have access to the same services as urban and wealthy America is, by definition, not responsive to (or respectful of) the needs of those who are on the wrong side of one of this country’s many digital divides. Indeed, that goal almost certainly distracts from and misallocates resources that could be better leveraged towards closing these divides.

The Demands of Rural Broadband

Rural broadband needs are simultaneously both more and less demanding than the services we typically focus on when discussing universal service. The services that we fund, and the way that we approach closing digital divides, need to be based in the first instance on the actual needs of the community that connectivity is meant to serve. Take just two of the prototypical examples: precision and automated farming, and telemedicine.

Assessing rural broadband needs

Precision agriculture requires different networks than does watching Netflix, web surfing, or playing video games. Farms with hundreds or thousands of sensors and other devices per acre can put significant load on networks — but not in terms of bandwidth. The load is instead measured in terms of packets and connections per second. Provisioning networks to handle lots of small packets is very different from provisioning them to handle other, more-typical (to the DC crowd) use cases.

On the other end of the agricultural spectrum, many farms don’t own their own combines. Combines cost upwards of a million dollars. One modern combine is sufficient to tend to several hundred acres in a given farming season, so it is common for farmers to hire someone who owns a combine to service their fields. During harvest season, for instance, one combine service may operate on a dozen farms. Prior to operation, modern precision systems need to download a great deal of GIS, mapping, weather, crop, and other data. High-speed Internet can literally mean the difference between letting a combine sit idle for many days of a harvest season while it downloads data and servicing enough fields to cover the debt payments on a million-dollar piece of equipment.
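To put rough numbers on that claim, the following is a minimal back-of-the-envelope sketch, written in Python, of how long such a pre-season data download might take at different connection speeds. The dataset sizes and link speeds used here are hypothetical assumptions chosen only to illustrate the arithmetic; they are not figures reported by any carrier or equipment maker.

```python
# Hypothetical illustration only: the dataset sizes and connection speeds
# below are assumptions for the sake of the arithmetic, not reported figures.

def download_days(size_gb: float, speed_mbps: float) -> float:
    """Days needed to download size_gb gigabytes at speed_mbps megabits per second."""
    megabits = size_gb * 8_000          # 1 GB is roughly 8,000 megabits (decimal units)
    seconds = megabits / speed_mbps
    return seconds / 86_400             # 86,400 seconds in a day

for size_gb in (10, 50):
    for speed_mbps in (0.5, 10, 100):
        print(f"{size_gb} GB at {speed_mbps} Mbps: "
              f"{download_days(size_gb, speed_mbps):.2f} days")

# On these assumptions, a 50 GB dataset takes more than nine days over a
# 0.5 Mbps link but only about an hour at 100 Mbps -- roughly the
# difference between an idle combine and a working one.
```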

Going to the other extreme, rural health care relies upon Internet connectivity — but not in the ways it is usually discussed. The stories one hears on the ground aren’t about the need for particularly high-speed connections or specialized low-latency connections to allow remote doctors to control surgical robots. While tele-surgery and access to highly specialized doctors are important applications of telemedicine, the urgent needs today are far more modest: simple video consultations with primary care physicians for routine care, requiring only a moderate-speed Internet connection capable of basic video conferencing. In reality, a few megabits per second (not even 10 Mbps) can mean the difference between a remote primary care physician being able to provide basic health services to a rural community and that community going entirely unserved by a doctor.

Efforts to run gigabit connections and dedicated fiber to rural health care facilities may be a great long-term vision — but the on-the-ground need could be served by a reliable 4G wireless connection or DSL line. (Again, to their credit, this is a point that Chairman Pai and Commissioner Carr have been highlighting in their recent travels through rural parts of the country.)

Of course, rural America faces many of the same digital divides faced elsewhere. Even in the wealthiest cities in Nebraska, for instance, significant numbers of students are eligible for free or reduced-price school lunches — a metric that correlates with income — and rely on anchor institutions for Internet access. The problem is worse in much of rural Nebraska, where there may simply be no Internet access at all.

Addressing rural broadband needs

Two things in particular have struck me as I have spoken to rural stakeholders about the digital divide. The first is that this is an “all hands on deck” problem. Everyone I speak to understands the importance of the issue. Everyone is willing to work with and learn from others. Everyone is willing to commit resources and capital to improve upon the status quo, including by undertaking experiments and incurring risks.

The discussions I have in DC, however, including with and among key participants in the DC policy firmament, are fundamentally different. These discussions focus on tweaking contribution factors and cost models to protect or secure revenues; they are, in short, missing the forest for the trees. Meanwhile, the discussion on the ground focuses on how to actually deploy service and overcome obstacles. No amount of cost-model tweaking will do much at all to accomplish either of these.

The second striking, and rather counterintuitive, thing that I have often heard is that closing the rural digital divide isn’t (just) about money. I’ve heard several times the lament that we need to stop throwing more money at the problem and start thinking about where the money we already have needs to go. Another version of this is that it isn’t about the money, it’s about the business case. Money can influence the decision whether to execute a project for which there is a business case — but it rarely creates a business case where there isn’t one. And where it has created a business case, that case was often for building out relatively unimportant networks while increasing the opportunity costs of building out more important networks. The networks we need to build are different from those envisioned by the 1996 Telecom Act or FCC efforts to contort that Act to fund Internet build-out.

Rural Broadband Investment

There is, in fact, a third particularly striking thing I have gleaned from speaking with rural stakeholders, and rural providers in particular: They don’t really care about net neutrality, and don’t see it as helpful to closing the digital divide.  

Rural providers, it must be noted, are generally “pro net neutrality,” in the sense that they don’t think that ISPs should interfere with traffic going over their networks; in the sense that they don’t have any plans themselves to engage in “non-neutral” conduct; and also in the sense that they don’t see a business case for such conduct.

But they are also wary of Title II regulation, or of other rules that are potentially burdensome or that introduce uncertainty into their business. They are particularly concerned that Title II regulation opens the door to — and thus creates significant uncertainty about the possibility of — other forms of significant federal regulation of their businesses.

More than anything else, they want to stop thinking, talking, and worrying about net neutrality regulations. Ultimately, the past decade of fights about net neutrality has meant little other than regulatory cost and uncertainty for them, which makes planning and investment difficult — hardly a boon to closing the digital divide.

The basic theory of the Wheeler-era FCC’s net neutrality regulations was the virtuous cycle — that net neutrality rules gave edge providers the certainty they needed in order to invest in developing new applications that, in turn, would drive demand for, and thus buildout of, new networks. But carriers need certainty, too, if they are going to invest capital in building these networks. Rural ISPs are looking for the business case to justify new builds. Increasing uncertainty has only negative effects on the business case for closing the rural digital divide.

Most crucially, the logic of the virtuous cycle is virtually irrelevant to driving demand for closing the digital divide. Edge innovation isn’t going to create so much more value that users will suddenly demand that networks be built; rather, the applications justifying this demand already exist, and most have existed for many years. What stands in the way of the build-out required to service under- or un-served rural areas is the business case for building these (expensive) networks. And the uncertainty and cost associated with net neutrality only exacerbate this problem.

Indeed, rural markets are an area where the virtuous cycle very likely turns in the other direction. Rural communities are actually hotbeds of innovation. And they know their needs far better than Silicon Valley edge companies, so they are likely to build apps and services that better cater to the unique needs of rural America. But these apps and services aren’t going to be built unless their developers have access to the broadband connections needed to build and maintain them, and, most important of all, unless users have access to the broadband connections needed to actually make use of them. The upshot is that, in rural markets, connectivity precedes and drives the supply of edge services, not, as the Wheeler-era virtuous cycle would have it, the other way around.

The effect of Washington’s obsession with net neutrality these past many years has been to increase uncertainty and reduce the business case for building new networks. And its detrimental effects continue today with politicized and showboating efforts to invoke the Congressional Review Act in order to make a political display of the 2017 Restoring Internet Freedom Order. Back in the real world, however, none of this helps to provide rural communities with the type of broadband services they actually need, and the effect is only to worsen the rural digital divide, both politically and technologically.

The Road Ahead …?

The story told above is not a happy one. Closing digital divides, and especially closing the rural digital divide, is one of the most important legal, social, and policy challenges this country faces. Yet the discussion about these issues in DC reflects little of the on-the-ground reality. Rather, advocates in DC attack a strawman of the rural digital divide, using it as a foil to protect and advocate for their pet agendas. If anything, the discussion in DC distracts attention and diverts resources from productive ideas.

To end on a more positive note, some are beginning to recognize the importance and direness of the situation. I have noted several times the work of Chairman Pai and Commissioner Carr. Indeed, the first time I met Chairman Pai was when I had the opportunity to accompany him, back when he was Commissioner Pai, on a visit through Diller, Nebraska (pop. 287). More recently, there has been bipartisan recognition of the need for new thinking about the rural digital divide. In February, for instance, a group of Democratic senators asked President Trump to prioritize rural broadband in his infrastructure plans. And the following month Congress enacted, and the President signed, legislation that among other things funded a $600 million pilot program to award grants and loans for rural broadband build-out through the Department of Agriculture’s Rural Utilities Service. But both of these efforts rely too heavily on throwing money at the rural divide (speaking of the recent legislation, the head of one Nebraska-based carrier building out service in rural areas lamented that it’s just another effort to give carriers cheap money, which doesn’t do much to help close the divide!). It is, nonetheless, good to see urgent calls for and an interest in experimenting with new ways to deliver assistance in closing the rural digital divide. We need more of this sort of bipartisan thinking and willingness to experiment with new modes of meeting this challenge — and less advocacy for stale, entrenched viewpoints that have little relevance to the on-the-ground reality of rural America.

Geoffrey A. Manne is the President & Founder at the International Center for Law & Economics. Kristian Stout is the Associate Director of Innovation Policy at the International Center for Law & Economics.

The submissions in this symposium thus far highlight, in different ways, what must be considered the key lesson of the Amazon/Whole Foods merger: It has brought about immense and largely unforeseen (in its particulars, at least) competition — and that competition has been remarkably successful in driving innovations that will likely bring immense benefits to consumers and the economy as a whole.

Both before and after the merger was announced, claims of the coming retail apocalypse — the demise of brick-and-mortar retail at Amazon’s hands — were legion. Grocery stores were just the next notch on Amazon’s belt, and a stepping stone to world domination.

What actually happened in the year following the merger is nearly the opposite: Competition among grocery stores has been more fierce than ever. “Offline” retailers are expanding — and innovating — to meet Amazon’s challenge, and many of them are booming. Disruption is never neat and tidy, but, in addition to saving Whole Foods from potential oblivion, the merger seems to have lit a fire under the rest of the industry.

This result should not be surprising to anyone who understands the nature of the competitive process. But it does highlight an important lesson: competition often comes from unexpected quarters and evolves in unpredictable ways, emerging precisely out of the kinds of adversity opponents of the merger bemoaned. Even when critics were right about some of the potential effects of the merger (lower prices, for example), they were absolutely wrong about the allegedly disastrous consequences they claimed would result.

Of course, one must always be careful drawing lessons from limited data, and a year is not very long in the scheme of things — and certainly not in the grand (and fascinating) history of the grocery store. But the signs thus far are remarkably telling.

Change is the rule in the retail grocery industry (as in every other competitive market)

The ultimate consequences of the Amazon/Whole Foods merger won’t be known for quite some time. Nor will they follow the exact same patterns as previous retail disruptions. Yet there will undoubtedly be some commonality, as there has been in the past. Among other things, the history of the grocery business is intimately tied up with the history of A&P, as Michael Ruhlman recounts in his fascinating book, Grocery, and as Tim Muris and Jon Nuechterlein discuss, focusing on the antitrust angle, in their article, Antitrust in the Internet Era: The Legacy of United States v. A&P. The main takeaway from that saga is, as Muris & Nuechterlein write, that:

Increasingly integrated and efficient retailers — first A&P, then “big box” brick-and-mortar stores, and now online retailers — have challenged traditional retail models by offering consumers lower prices and greater convenience. For decades, critics on the right and left have reacted to such disruption by urging Congress, the courts, and the enforcement agencies to stop these American success stories by revising antitrust doctrine to protect small businesses rather than the interests of consumers. Using antitrust law to punish pro-competitive behavior makes no more sense today than it did when the government attacked A&P for cutting consumers too good a deal on groceries.

Just as Amazon is feared today, and Walmart was reviled in the 90s and 2000s, A&P was loathed in the first half of the twentieth century for its role in decimating small business. A&P grew to the size it did — at one point the largest retailer in the world — by driving down both costs and prices. That is to say, just as Walmart and Amazon do, A&P discovered the waste in the distribution and retailing system and found ways to better deliver goods and services to consumers on their own terms.

For all the hand wringing (and, of course, antitrust action) surrounding grocery stores in the past (including the misguided FTC action challenging the Whole Foods/Wild Oats merger in 2007), history has demonstrated that the grocery industry is constantly evolving toward better methods of distribution that meet customers’ idiosyncratic — and likewise evolving — preferences. Frequently, this has led to well-established methods of retailing being abandoned, as when the model of having separate vendors for meat, baked goods, dry goods, etc., gave way to the first centralized supermarkets.

What we are witnessing now — and what Amazon/Whole Foods is really emblematic of — is yet another growth spurt in the industry, one in which consumer demand for a high degree of convenience (e.g., same-day delivery) is coupled with demand for fresh, unique goods (e.g., organic, locally-sourced, etc.).

But… is this time different — because, you know, Amazon?

Notwithstanding some advocates’ preference for treating digital and analog retailing as distinct “markets,” what’s really happening in the brick-and-mortar world is that retailers understand that, in terms of reaching customers, there is only one “retail” market.

Traditionally offline retailers, like Walmart and Target, as well as supermarkets like Kroger and Giant, were part of the land rush to integrate tech-startup, on-demand technologies following the close of the merger. At the same time, online stalwarts have emerged as surprise players in the once staid grocery market. Google, most important among them, has been establishing partnerships with offline retailers in order to provide the digital interfaces to facilitate the online marketing and on-demand delivery needs of the traditionally offline companies.

All of this activity may have been spurred on by the merger, but it is part and parcel of the age-old competitive process — efforts by industry to try to anticipate how consumers, competitors, and… everyone else will behave going forward, and to capture more of the market when they do.

Moreover, it is exemplary of the grocery industry’s particular evolution, and, although the merger may again have served as a proximate trigger for the flurry of activity, the integration of offline and online retailing was basically an inevitability given the development of commerce and technology over the last two decades.

Whatever the very long-term consequences of the merger (and Steve Horwitz, among others, has suggested one plausible consequence: the “hollowing out” of the traditional supermarket, leaving fresh and prepared foods behind in stores and moving dry goods and housewares online), the short-term consequences seem extremely telling.

In short, the death of brick-and-mortar retail is, as Dirk Auer put it (beating us to the punch), greatly exaggerated.

For all the talk of retail dying, the stores that are actually dying are the ones that fail to cater to their customers, not the ones that happen to be offline. In fact, as one article puts it:

Right now, there are at least a dozen new companies in the midst of opening hundreds of new retail stores. And why are they doing this? Because the stores they currently have are making money hand over fist.

You’ve probably heard some of the names: Allbirds, Casper, Birchbox, Boll & Branch. According to real-estate data company CoStar Group, these online-first stores have increased their retail space tenfold over the last five years. Warby Parker is averaging $3,000 per square foot of retail space, which is almost as good as Tiffany’s (!). (Emphasis added)

For every failing Sears store (the chain closed some 250 Sears and Kmart stores in 2017), there are several other retail outlets opening: Last year some 4,000 more retail stores opened than closed.

The same thing is happening in grocery, as well. It’s not that all brick-and-mortar groceries are shuttering; it’s that the un-dynamic, unsuccessful ones are. That’s not a cause for concern; it’s a cause for celebration. As the author of a February 2018 industry analysis notes:

“Retailers die but retail does not,” Cook said about the ongoing evolution in the grocery space. “There’s just churn as retailers are either disrupted by new business models, or they go out of fashion.”

* * *

The most successful grocers today have well-known private labels, fresh food at affordable prices and digital platforms that allow for shopping online, Cook said.

“We’ve got two types of classes in grocery right now: One is about offering the best goods at the lowest prices — it’s a price play that’s targeted at the families in America shopping on a budget, so someone like Aldi is a part of that,” he said. “Another avenue of success is offering a great shopping experience to shoppers who aren’t as price sensitive — those leaders are Whole Foods and Wegmans.” (Emphasis added)

Competition through innovation — and not just online, and not just by Amazon

To be sure, the rise of e-commerce has put pressure on offline retail’s old business models, and it has required offline retailers to stake out their comparative advantage, offering services and “experiences” that online retailers can’t easily match.

But the fundamental market reality brought on by the Internet, the emergence of e-commerce, and the blossoming of Amazon in particular is expanded competition. Lackluster retail outlets, particularly in small or remote towns — the ones that some neo-Brandeisians want to preserve at all costs — could, at one time, coast on the protection afforded by geographic isolation (a protection that has, of course, long been under assault by Walmart). But e-commerce can reach everywhere a delivery service can reach, which is to say everywhere — and you don’t even have to drive the 20 miles to Walmart.

Not only that, e-commerce promises not just the local food market’s few thousand products, or even Walmart’s hundreds of thousands, but virtually every product sold virtually anywhere in the world. Amazon — which directly sells only about 30% of the products sold through its platform, and, as a platform, accounts for less than half of e-commerce — has about 500 million products listed.

The point is this: Amazon’s biggest effect on retail isn’t that it’s overpowering its closest brick-and-mortar rivals, decimating the last vestiges of competition, and moving all sales online (after all, e-commerce is still only some 10% of retail sales); it’s that the company is bringing competition to places that haven’t seen very much of it, and picking off the weak and complacent competitors — much to everyone’s benefit.
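As a rough sanity check on that point, the post’s own figures can be multiplied together to approximate Amazon’s share of total retail. In the sketch below, the 45% platform share is an assumed placeholder for “less than half” of e-commerce; the other inputs come directly from the figures cited above.

```python
# Back-of-the-envelope arithmetic using the figures cited in the text.
# The 0.45 value is an assumption standing in for "less than half" of
# e-commerce; the other inputs come from the post itself.

ecommerce_share_of_retail = 0.10     # e-commerce is "only some 10%" of retail sales
amazon_share_of_ecommerce = 0.45     # assumed stand-in for "less than half"
first_party_share = 0.30             # Amazon "directly sells only about 30%" of platform sales

platform_share_of_retail = ecommerce_share_of_retail * amazon_share_of_ecommerce
first_party_share_of_retail = platform_share_of_retail * first_party_share

print(f"Amazon platform share of total retail:    {platform_share_of_retail:.1%}")     # ~4.5%
print(f"Amazon first-party share of total retail: {first_party_share_of_retail:.1%}")  # ~1.4%
```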

Those retailers that do survive the alleged “retail apocalypse” will be those that figure out how to offer something better or different than Amazon, with or without Whole Foods:

The truth is that the bigger Amazon gets, the more opportunity it creates for fresh, local alternatives. The more Amazon pushes robot-powered efficiency, the more space there is for warm and individualized service. The more that people interact with Amazon through its AI-based assistant Alexa, the more they will crave the insight and personal connection of fellow humans.

“The idea that everybody needs to be terrified of Amazon is completely wrong,” says Brian Spaly, who co-founded two e-commerce-centric startups, Bonobos (menswear) and Trunk Club (a wardrobe-in-a-box service), which sold to Walmart and Nordstrom, respectively, for nine-figure sums. “Everybody needs to figure out what makes them special and use those weapons to compete.” (Emphasis added)

To put it into an antitrust context, “post-merger product repositioning,” although perpetually (and wrongfully) disregarded by proponents of stronger merger enforcement, is the nature of the beast. Competition need not — indeed, rarely does — replicate the status quo; it evolves beyond it. Amazon combined with Whole Foods isn’t offering exactly what the companies separately offered pre-merger — and their competitors aren’t doing so, either. That’s a good thing, and it creates new opportunities and new mechanisms for fulfilling consumer preferences.

Some of that means further shifting the mix of retail sales that take place in physical stores and online — and even blurring the lines between them, such that online purchases may be picked up at a physical store, or product samples may be browsed, handled, and sized in a retail store and then ordered online and shipped.

But whatever the extent of the slow transition to online services, where grocery retailers think they can still compete offline (which is to say, everywhere), their investments have increased — substantially — since the merger. Take Aldi, for example:

Discounters like Aldi, known for its no-frills stores and highly coveted private label, have put pressure on traditional grocers, which are also trying to prepare for an e-commerce future likely to be remade by Amazon and its acquisition of Whole Foods.

* * *

Aldi, with currently some 1,600 U.S. stores, has said it will invest $3.4 billion in order to up its U.S. store count to 2,500 by 2022. The additional stores would make Aldi the third-biggest seller of food in the U.S. behind Walmart and Kroger. (Emphasis added)

The country’s biggest grocery chains, Walmart and Kroger — each of them substantially larger than even Amazon and Whole Foods combined — are likewise expanding, not contracting, in the face of the new competition the merger has brought. And this is happening not just online, but in physical stores, as well. Take Walmart, for example:

Walmart is making headway in its pitched battle against Amazon, with online sales soaring in the most recent quarter.

Those sales leaped 40 percent in the U.S., a sign that Walmart’s aggressive moves to bolster its e-commerce business by ramping up fashion, adding thousands of new choices, and scooping up other, niche sites, is paying off.

Physical stores held their own as well during the company’s latest quarter, which spanned May through July. Sales at locations open at least a year rose 4.5 percent, the biggest uptick in more than a decade, as shoppers flocked to their local Walmart to pick up groceries, clothing and seasonal items.

Not only did more customers head to their stores, increasing foot traffic 2.2 percent, they spent more money while they were there. (Emphasis added)

These stores aren’t just maintaining the status quo despite the merger; they are also improving their services to reflect the actual and expected increase in competition.

And this positive effect is reflected in the retail labor landscape as well. Despite the bold, Chicken-Little assertions by some critics, Amazon’s effect on retail labor — and the effect of the Whole Foods merger on grocery store labor in particular — hasn’t been to decimate the market. Instead, employment has expanded significantly in 2018, “largely due to a resurgence in two categories that had been contracting, retail and manufacturing. [In fact,] retailers added an average of 12,000 [workers] each month this year.”

But really. Are we just missing an unstoppable monopoly in its incipiency?

All of the foregoing good news notwithstanding, it could be the case that we are complacently snoozing while an unstoppable future monopoly is in its incipiency. Perhaps Amazon is using its already incredible online power to build a path to success that none will be able to rival. Lina Khan, for one, would have you believe that.

But the evidence we have does not suggest that this is at all a realistic concern.

In the first place, in the one area where you could conceivably cite to Amazon’s supremacy — online retail — it doesn’t even remotely behave like a monopolist. Aside from the obvious fact that it has consistently worked to deliver more output and lower prices (which the Fed Chairman has speculated may be contributing to low inflation), Amazon has (to outward appearances, at the very least) worked very hard to deliver a superior customer experience. It vets and monitors merchants in order to prevent fraud, and when fraud does occur, Amazon eats the cost of the related losses. And Amazon is well known for its generous return policy. These are not the practices of a complacent monopolist selling to customers with no, or even few, other purchasing alternatives.

But more importantly, there is nothing that Amazon can do that competitors cannot also do, despite the bare assertions of critics.

Last year, for example, Marshall Steinbaum asserted that, with respect to the Whole Foods merger, and the potential for Instacart and Wegman’s to compete with Amazon for grocery delivery,  

Instacart has nowhere near the existing infrastructure or access to capital to make that viable… There’s increasingly no plausible way around Amazon. Wegmans is not going to front an all-out assault on Amazon in e-commerce. Walmart is only now doing it, and only just. Amazon is already dominant and already anticompetitive. There’s also, dare I say it, the threat of antitrust… I think it’s fair to say the agencies have been favorable to Amazon in the past and would-be competitors might assume they will be going forward.

This account is hard to take seriously, particularly since it’s predicated on the idea that there are monopoly profits to be earned in the grocery delivery space. Why exactly wouldn’t Wegman’s (or someone else) partner with Instacart in order to realize some portion of those profits — to innovate to offer enhanced services (over and above its already uniquely pleasant grocery-shopping experience)?

And just one year later, it appears that Amazon’s competitors do indeed intend an “all-out assault on Amazon” in precisely this fashion.

German discount giant Aldi has stepped up to bolster Instacart’s deal with Wegman’s and is using the company’s services to help it succeed in its massive push into the US market.

And meanwhile, on the direct investment front, Instacart has managed to raise $200M in new funding to help it expand its operations, boosting its valuation to a whopping $4.2 billion. This is a “vote of confidence from the venture capital community” and “a far cry from the uncertainty swirling around the grocery startup after the Amazon-Whole Foods deal last year.”

Thus just a single year’s worth of investment and expanded activity — especially coming as it has in the immediate aftermath of the Amazon/Whole Foods merger — fully rebuts Steinbaum’s absurd claim (echoed by others) that “[t]here’s increasingly no plausible way around Amazon.”

And the reason for Instacart’s success is (or should be, to anyone paying attention) entirely predictable, and tied to the reason that the Amazon/Whole Foods merger should continue to be welcomed rather than reviled: competition. Instacart’s success is tied to that of Kroger and Aldi and every other grocer and retailer threatened by competition from Amazon. For now, at least, these stores see same-day delivery of fresh produce and other perishables as key to their ability to take on and even best the combined Amazon/Whole Foods. And it’s been working:

The first and most obvious impact was the pressure that supermarkets like Kroger felt when Amazon began lowering prices at Whole Foods. However, after having its shares rattled by Amazon, Kroger was able to regain investor confidence by partnering with Instacart and several other grocery delivery services, allowing it to outpace Amazon over the past three months. (Emphasis added)

Where exactly the next challenge from Amazon will arise, and from whence exactly the competitive response will emerge, is uncertain. But what we’ve seen thus far should reassure us that both the challenge and the response will happen. Before we pronounce the death of retail, the end of living wages, and the destruction of democracy at Amazon’s hands — and seek out the antitrust laws to thwart every social ill critics can conjure — we should review the tape every so often. And, so far, it seems to suggest that the alarms are dramatically premature.

On February 18, the Federal Communications Commission (FCC) voted three-to-two in favor of a notice of proposed rulemaking (NPRM) fancifully entitled “Expanding Consumers’ Video Navigation Choices, MB Docket No. 16-42; Commercial Availability of Navigation Devices, CS Docket No. 97-80”.  The NPRM, in the words of the FCC’s press release, will “create a framework for providing innovators, device manufacturers, and app developers the information they need to develop new technologies, reflecting the many ways consumers access their subscription video programming today.”  In reality, far from “creating a framework” to spawn new technologies, this NPRM promotes uncalled-for regulatory intrusion into an increasingly dynamic and efficient market, and (as Free State Foundation President Randy May aptly put it) represents FCC “regulatory policy run amok.”  (A November 2015 article by Heritage Foundation scholar James Gattuso provides a broader and more detailed critical assessment of FCC regulation of the video programming market.)

In particular, as one legal analyst summarizes, the NPRM would require “multichannel video programming distributors (MVPDs) providers to make ‘three core information streams’ available to companies developing devices or applications that might compete with MVPDs’ set-top boxes.”  (MVPDs are providers of pay television services, such as cable television, direct broadcast satellite, fiberoptic cable (e.g., Verizon FIOS), and competitive local exchange companies.)  Specifically:

The NPRM would require the MVPDs to supply the following to potential competitors: 1) information about what programming is available to the consumer, such as the channel listing and video-on-demand lineup; 2) information about what a device is allowed to do with content, such as recording; and 3) the content itself.

This new regulatory requirement makes no sense and threatens to undermine competition and innovation.  As telecommunications scholar Randy May recently emphasized, the FCC itself adopted a rule in June 2015 establishing a presumption that local video markets, on a nationwide basis, are subject to “effective competition.”  Indeed, May noted that the FCC conceded in a February 2 appellate brief that there has been a “transformation” of the multichannel video marketplace, that “consumers have alternatives to cable,” and that “cable’s market share has sharply declined.”  (May embellished the point in stating that today “consumers  may choose among a multitude of services and devices, such as Netflix, Hulu, Amazon Fire TV, Google Chromecast, Apple TV, and Roku.  And they are doing so in exponentially increasing numbers.”)  This means that regulation in this area is inappropriate, since it is well accepted that economic regulation of competitive markets is unjustifiable.  (Indeed, even imperfectly competitive markets and monopolies should not be regulated unless regulation is likely to yield welfare outcomes that are superior to reliance on market forces, a test that is not easily met – a point made by such renowned economists as James Buchanan and A.E. Kahn.)  What’s more, FCC regulation, which tends to slow innovation, is likely to prove especially harmful in the MVPD sector, given that market’s record of rapid and continuous welfare-enhancing introduction of new and improved services.

The February 18 dissenting statements by Commissioner Ajit Pai and Commissioner Michael O’Rielly further explored the NPRM’s serious deficiencies.

After summarizing the harm caused by prior FCC regulation of video set-top boxes, Commissioner Pai stated:

[W]e are en route to eliminating the need for a set-top box. An app can turn your iPad or Android phone into a navigation device. MVPDs have deployed these apps and are in the process of developing more advanced ones.  The Commission should be encouraging these efforts.  

But this proposal would do precisely the opposite.  It would divert the industry’s energies away from app development and toward the long-term slog of complying with the Commission’s new regulatory scheme for unwanted hardware.  And the Notice goes further; it actually proposes imposing a number of regulations that would discourage the development and deployment of MVPD apps.  That’s not what the American people want. I’m confident that most consumers would rather eliminate the set-top box altogether than embrace a complex regulatory scheme that will require them to have another box in their home and won’t take effect for at least three years.

Commissioner O’Rielly focused on the economic and technical harm imposed by the NPRM, as well as its legal deficiencies:

[The proposal] could open multichannel video programming distributor (“MVPD”) networks to serious security vulnerabilities, exposing them to potential network damage and content theft.  It could strip content producers of their rights to control the distribution and presentation of their content.  It could ultimately subject over-the-top (“OTT”) providers to the same regime. . . .  Worst of all, it would certainly devalue the content produced by programmers large and small, by enabling anyone capable of writing a compliant app to turn on a free stream of video content painstakingly cobbled together by an MVPD at great expense – the ultimate free-rider problem.  MVPDs, broadcasters, and independent programmers alike would all lose some incentives to keep doing what they do, and some would opt for the sidelines, leaving consumers with fewer video options. . . .

The statutory authority on which this fantasy rests is equally as far-fetched. The section that discusses authority will long live as a testament to the level of absurdity that can be achieved in four short paragraphs when two defenseless statutes fall down a rabbit hole into a land where words have no meaning.  While billed as an attempt to enhance competition in the set top box market, the item shoots miles beyond that narrow frame on the very first page, redefining statutory terms plainly referring to hardware, such as “navigation device,” “interactive communications equipment,” and “other equipment” to mean either hardware or software (including apps). I don’t know how much clearer the terms “device” or “equipment” could be in their intent to reference tangible, physical hardware. If those words don’t work to restrict the Commission, are there any that ever could? And I don’t think that anyone here believes for a second that [the statutory language discussed] could ever have made it out of a single Congressional committee in 2014 if the members had known it would be interpreted to allow the FCC to force MVPDs to stream all of their content for free to any app developer willing to jump through a few hoops.

In sum, the FCC’s February 18 NPRM flies in the face of sound economics, video programming distribution history, and reasonable statutory construction.  If finalized, it will further reduce innovation, economic efficiency, and consumer welfare.  The three commissioners who voted for this misbegotten proposal should rethink their position and act to end this unwarranted proposed regulatory intrusion into a competitive and innovative marketplace.

Remember when net neutrality wasn’t going to involve rate regulation and it was crazy to say that it would? Or that it wouldn’t lead to regulation of edge providers? Or that it was only about the last mile and not interconnection? Well, if the early petitions and complaints are a preview of more to come, the Open Internet Order may end up having the FCC regulating rates for interconnection and extending the reach of its privacy rules to edge providers.

On Monday, Consumer Watchdog petitioned the FCC to not only apply Customer Proprietary Network Information (CPNI) rules originally meant for telephone companies to ISPs, but to also start a rulemaking to require edge providers to honor Do Not Track requests in order to “promote broadband deployment” under Section 706. Of course, we warned of this possibility in our joint ICLE-TechFreedom legal comments:

For instance, it is not clear why the FCC could not, through Section 706, mandate “network level” copyright enforcement schemes or the DNS blocking that was at the heart of the Stop Online Piracy Act (SOPA). . . Thus, it would appear that Section 706, as re-interpreted by the FCC, would, under the D.C. Circuit’s Verizon decision, allow the FCC sweeping power to regulate the Internet up to and including (but not beyond) the process of “communications” on end-user devices. This could include not only copyright regulation but everything from cybersecurity to privacy to technical standards. (emphasis added).

While the merits of Do Not Track are debatable, it is worth noting that privacy regulation can go too far and drastically change the Internet ecosystem. In fact, it is a plausible scenario that overregulating data collection online could lead to the greater use of paywalls to access content. This may actually be a greater threat to Internet Openness than anything ISPs have done.

And then yesterday, the first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road.

This was another predictable development that many critics of the Open Internet Order warned about: there was no way to really avoid rate regulation once the FCC reclassified ISPs. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication. Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

In sum, while neither of these actions was contemplated by the FCC (they claim), such abstract rules are going to lead to random complaints like these, and companies are going to have to use the “ask FCC permission” process to try to figure out beforehand whether they should be investing or whether they’re going to be slammed. As Geoff Manne said in Wired:

That’s right—this new regime, which credits itself with preserving “permissionless innovation,” just put a bullet in its head. It puts innovators on notice, and ensures that the FCC has the authority (if it holds up in court) to enforce its vague rule against whatever it finds objectionable.

I mean, I don’t wanna brag or nothin, but it seems to me that we critics have been right so far. The reclassification of broadband Internet service as Title II has had the (supposedly) unintended consequence of sweeping in far more (both in scope of application and rules) than was supposedly bargained for. Hopefully the FCC rejects the petition and the complaint and reverses this course before it breaks the Internet.

There is a consensus in America that we need to control health care costs and improve the delivery of health care. After a long debate on health care reform and careful scrutiny of health care markets, there seems to be agreement that the unintegrated, “siloed approach” to health care is inefficient, costly, and contrary to the goal of improving care. But some antitrust enforcers — most notably the FTC — are standing in the way.

Enlightened health care providers are responding to this consensus by entering into transactions that will lead to greater clinical and financial integration, facilitating a movement from volume-based to value-based delivery of care. And many aspects of the Affordable Care Act encourage this path to integration. Yet when the market seeks to address these critical concerns about our health care system, the FTC and some state Attorneys General take positions diametrically opposed to sound national health care policy as adopted by Congress and implemented by the Department of Health and Human Services.

To be sure, not all state antitrust enforcers stand in the way of health care reform. For example, many states, including New York, Pennsylvania, and Massachusetts, seem willing to permit hospital mergers even in concentrated markets with an agreement for continued regulation. At the same time, however, the FTC has been aggressively challenging integration, taking the stance that hospital mergers will raise prices by giving those hospitals greater leverage in negotiations.

The distance between HHS and the FTC in DC is about 6 blocks, but in health care policy they seem to be miles apart.

The FTC’s skepticism about integration is an old story. As I have discussed previously, during the last decade the agency challenged more than 30 physician collaborations even though those cases lacked any evidence that the collaborations led to higher prices. And, when physicians asked for advice on collaborations, it took the Commission on average more than 436 days to respond to those requests (about as long as it took Congress to debate and enact the Affordable Care Act).

The FTC is on a recent winning streak in challenging hospital mergers. But those were primarily simple cases with direct competition between hospitals in the same market with very high levels of concentration. The courts did not struggle long in these cases, because the competitive harm appeared straightforward.

Far more controversial is when a hospital acquires a physician practice. This type of vertical integration seems precisely what the advocates for health care reform are crying out for. The lack of integration between physicians and hospitals is at the core of the problems in health care delivery. But the antitrust law is entirely solicitous of these types of vertical mergers. There has not been a vertical merger successfully challenged in the courts since 1980 – the days of reruns of the TV show Dr. Kildare. And even the supposedly pro-enforcement Obama Administration has not gone to court to challenge a vertical merger, and the Obama FTC has not even secured a merger consent under a vertical theory.

The case in which the FTC has decided to “bet the house” is its challenge to St. Luke’s Health System’s acquisition of Saltzer Medical Group in Nampa, Idaho.

St. Luke’s operates the largest hospital in Boise, and Saltzer is the largest physician practice in Nampa, roughly 20 miles away. But rather than recognizing that this was a vertical affiliation designed to integrate care and to promote a transition to a system in which the provider takes the risk of overutilization, the FTC characterized the transaction as purely horizontal – no different from the merger of two hospitals. In that manner, the FTC sought to paint a picture of market concentration designed to assure victory.

But back to the reasons why integration is essential. It is undisputed that provider integration is the key to improving American health care. Americans pay substantially more than any other industrialized nation for health care services, 17.2 percent of gross domestic product. Furthermore, these higher costs are not associated with better overall care or greater access for patients. As noted during the debate on the Affordable Care Act, the American health care system’s higher costs and lower quality and access are mostly associated with the usage of a fee-for-service system that pays for each individual medical service, and the “siloed approach” to medicine in which providers work autonomously and do not coordinate to improve patient outcomes.

In order to lower health care costs and improve care, many providers have sought to transform health care into a value-based, patient-centered approach. To institute such a health care initiative, medical staff, physicians, and hospitals must clinically integrate and align their financial incentives. Integrated providers share financial risk, exchange electronic records and data, and implement quality measures in order to provide the best patient care.

The most effective means of ensuring full-scale integration is through a tight affiliation, most often achieved through a merger. Unlike contractual arrangements that are costly, time-sensitive, and complicated by an outdated health care regulatory structure, integrated affiliations ensure that entities can effectively combine and promote structural change throughout the newly formed organization.

For nearly five weeks of trial in Boise, St. Luke’s and the FTC fought over these conflicting visions of integration and health care policy. Ultimately, the court decided the supposed Nampa primary care physician market posited by the FTC would become far more concentrated, and the merger would substantially lessen competition for “Adult Primary Care Services” by raising prices in Nampa. As such, the district court ordered an immediate divestiture.

Rarely, however, has an antitrust court expressed such anguish at its decision. The district court readily “applauded [St. Luke’s] for its efforts to improve the delivery of healthcare.” It acknowledged the positive impact the merger would have on health care within the region. The court further noted that Saltzer had attempted to coordinate with other providers via loose affiliations but had failed to reap any benefits. Due to Saltzer’s lack of integration, Saltzer physicians had limited “the number of Medicaid or uninsured patients they could accept.”

According to the district court, the combination of St. Luke’s and Saltzer would “improve the quality of medical care.” Along with utilizing the same electronic medical records system and giving the Saltzer physicians access to sophisticated quality metrics designed to improve their practices, the parties would improve care by abandoning fee-for-service payment for all employed physicians and instituting population health management that reimburses physicians via risk-based payment initiatives.

As noted by the district court, these stated efficiencies would improve patient outcomes “if left intact.” Along with improving coordination and quality of care, the merger, as noted by an amicus brief submitted by the International Center for Law & Economics and the Medicaid Defense Fund to the Ninth Circuit, has also already expanded access to Medicaid and uninsured patients by ensuring previously constrained Saltzer physicians can offer services to the most needy.

The court ultimately was not persuaded by the demonstrated procompetitive benefits. Instead, the district court relied on the FTC’s misguided arguments and determined that the stated efficiencies were not “merger-specific,” because such efficiencies could potentially be achieved via other organizational structures. The district court did not analyze the potential success of substitute structures in achieving the stated efficiencies; instead, it relied on the mere existence of alternative provider structures. As a result, as ICLE and the Medicaid Defense Fund point out:

By placing the ultimate burden of proving efficiencies on the Appellants and applying a narrow, impractical view of merger specificity, the court has wrongfully denied application of known procompetitive efficiencies. In fact, under the court’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to oppose untested, theoretical less restrictive structural alternatives.

Notably, the district court’s divestiture order has been stayed by the Ninth Circuit. The appeal on the merits is expected to be heard some time this autumn. Along with reviewing the relevant geographic market and usage of divestiture as a remedy, the Ninth Circuit will also analyze the lower court’s analysis of the merger’s procompetitive efficiencies. For now, the stay order is a limited victory for underserved patients and the merging defendants. While such a ruling is not determinative of the Ninth Circuit’s decision on the merits, it does demonstrate that the merging parties have at least a reasonable possibility of success.

As one might imagine, the Ninth Circuit decision is of great importance to the antitrust and health care reform community. If the district court’s ruling is upheld, it could deter health care providers from further integrating via mergers, a precedent antithetical to the very goals of health care reform. However, if the Ninth Circuit finds the merger does not substantially lessen competition, then procompetitive vertical integration is less likely to be derailed by misapplication of the antitrust laws. The importance and impact of such a decision on American patients cannot be overstated.

A number of blockbuster mergers have received (often negative) attention from media and competition authorities in recent months. From the recently challenged Staples-Office Depot merger to the abandoned Comcast-Time Warner merger to the heavily scrutinized Aetna-Humana merger (among many others), there has been a wave of potential mega-mergers throughout the economy—many of them met with regulatory resistance. We’ve discussed several of these mergers at TOTM (see, e.g., here, here, here and here).

Many reporters, analysts, and even competition authorities have adopted various degrees of the usual stance that big is bad, and bigger is even badder. But worse yet, once this presumption applies, agencies have been skeptical of claimed efficiencies, placing a heightened burden on the merging parties to prove them and often ignoring them altogether. And, of course (and perhaps even worse still), there is the perennial problem of (often questionable) market definition — which tanked the Sysco/US Foods merger and which undergirds the FTC’s challenge of the Staples/Office Depot merger.

All of these issues are at play in the proposed acquisition of British aluminum can manufacturer Rexam PLC by American can manufacturer Ball Corp., which has likewise drawn the attention of competition authorities around the world — including those in Brazil, the European Union, and the United States.

But the Ball/Rexam merger has met with some important regulatory successes. Just recently the members of CADE, Brazil’s competition authority, unanimously approved the merger with limited divestitures. The most recent reports also indicate that the EU will likely approve it, as well. It’s now largely down to the FTC, which should approve the merger and not kill it or over-burden it with required divestitures on the basis of questionable antitrust economics.

The proposed merger raises a number of interesting issues in the surprisingly complex beverage container market. But this merger merits regulatory approval.

The International Center for Law & Economics recently released a research paper entitled, The Ball-Rexam Merger: The Case for a Competitive Can Market. The white paper offers an in-depth assessment of the economics of the beverage packaging industry; the place of the Ball-Rexam merger within this remarkably complex, global market; and the likely competitive effects of the deal.

The upshot is that the proposed merger is unlikely to have anticompetitive effects, and any competitive concerns that do arise can be readily addressed by a few targeted divestitures.

The bottom line

The production and distribution of aluminum cans is a surprisingly dynamic industry, characterized by evolving technology, shifting demand, complex bargaining dynamics, and significant changes in the costs of production and distribution. Despite the superficial appearance that the proposed merger will increase concentration in aluminum can manufacturing, we conclude that a proper understanding of the marketplace dynamics suggests that the merger is unlikely to have actual anticompetitive effects.

All told, and as we summarize in our Executive Summary, we found at least seven specific reasons for this conclusion:

  1. Because the appropriately defined product market includes not only stand-alone can manufacturers, but also vertically integrated beverage companies, as well as plastic and glass packaging manufacturers, the actual increase in concentration from the merger will be substantially less than suggested by the change in the number of nationwide aluminum can manufacturers.
  2. Moreover, in nearly all of the relevant geographic markets (which are much smaller than the typically nationwide markets from which concentration numbers are derived), the merger will not affect market concentration at all.
  3. While beverage packaging isn’t a typical, rapidly evolving, high-technology market, technological change is occurring. Coupled with shifting consumer demand (often driven by powerful beverage company marketing efforts), and considerable (and increasing) buyer power, historical beverage packaging market shares may have little predictive value going forward.
  4. The key importance of transportation costs and the effects of current input prices suggest that expanding demand can be effectively met only by expanding the geographic scope of production and by economizing on aluminum supply costs. These, in turn, suggest that increasing overall market concentration is consistent with increased, rather than decreased, competitiveness.
  5. The markets in which Ball and Rexam operate are dominated by a few large customers, who are themselves direct competitors in the upstream marketplace. These companies have shown a remarkable willingness and ability to invest in competing packaging supply capacity and to exert their substantial buyer power to discipline prices.
  6. For this same reason, complaints leveled against the proposed merger by these beverage giants — which are as much competitors as they are customers of the merging companies — should be viewed with skepticism.
  7. Finally, the merger should generate significant managerial and overhead efficiencies, and the merged firm’s expanded geographic footprint should allow it to service larger geographic areas for its multinational customers, thus lowering transaction costs and increasing its value to these customers.
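
To make the first two points concrete, here is a minimal sketch of the standard HHI screen, using purely hypothetical market shares rather than figures from our paper. Because the index squares each firm’s share, folding vertically integrated beverage companies and glass and plastic manufacturers into the relevant market shrinks the merging parties’ shares and, with them, the measured change in concentration.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares)

# Narrow candidate market: stand-alone aluminum can makers only.
narrow_pre = [40, 30, 20, 10]      # hypothetical shares for Ball, Rexam, and two rivals
narrow_post = [40 + 30, 20, 10]    # Ball and Rexam combined

# Broader candidate market: add vertically integrated beverage companies
# and glass/plastic packaging makers (again, hypothetical shares).
broad_pre = [25, 18, 12, 10, 15, 12, 8]
broad_post = [25 + 18, 12, 10, 15, 12, 8]

print("Narrow-market HHI change:", hhi(narrow_post) - hhi(narrow_pre))  # 2400
print("Broad-market HHI change: ", hhi(broad_post) - hhi(broad_pre))    # 900
```
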

Distinguishing Ardagh: The interchangeability of aluminum and glass

An important potential sticking point for the FTC’s review of the merger is its recent decision to challenge the Ardagh-Saint Gobain merger. The cases are superficially similar, in that they both involve beverage packaging. But Ardagh should not stand as a model for the Commission’s treatment of Ball/Rexam. The FTC made a number of mistakes in Ardagh (including market definition and the treatment of efficiencies — the latter of which brought out a strenuous dissent from Commissioner Wright). But even on its own (questionable) terms, Ardagh shouldn’t mean trouble for Ball/Rexam.

As we noted in our December 1st letter to the FTC on the Ball/Rexam merger, and as we discuss in detail in the paper, the situation in the aluminum can market is quite different than the (alleged) market for “(1) the manufacture and sale of glass containers to Brewers; and (2) the manufacture and sale of glass containers to Distillers” at issue in Ardagh.

Importantly, the FTC found (almost certainly incorrectly, at least for the brewers) that other container types (e.g., plastic bottles and aluminum cans) were not part of the relevant product market in Ardagh. But in the markets in which aluminum cans are a primary form of packaging (most notably, soda and beer), our research indicates that glass, plastic, and aluminum are most definitely substitutes.

The Big Four beverage companies (Coca-Cola, PepsiCo, Anheuser-Busch InBev, and MillerCoors), which collectively make up 80% of the U.S. market for Ball and Rexam, are all vertically integrated to some degree, and provide much of their own supply of containers (a situation significantly different than the distillers in Ardagh). These companies exert powerful price discipline on the aluminum packaging market by, among other things, increasing (or threatening to increase) their own container manufacturing capacity, sponsoring new entry, and shifting production (and, via marketing, consumer demand) to competing packaging types.

For soda, Ardagh is obviously inapposite, as soda packaging wasn’t at issue there. But the FTC’s conclusion in Ardagh that aluminum cans (which in fact make up 56% of the beer packaging market) don’t compete with glass bottles for beer packaging is also suspect.

For aluminum can manufacturers Ball and Rexam, aluminum obviously can’t be excluded from the market, and much of the beer sold in the U.S. in aluminum cans is also sold in glass bottles. The FTC claimed in Ardagh that glass and aluminum are consumed in distinct situations, so they don’t exert price pressure on each other. But that ignores the considerable ability of beer manufacturers to influence consumption choices, as well as the reality that consumer preferences for the two types of container (whether driven by beer company marketing efforts or not) are converging, with cost considerations dominating other factors.

In fact, consumers consume beer in both packaging types largely interchangeably (with a few limited exceptions — e.g., poolside drinking demands aluminum or plastic), and beer manufacturers readily switch between the two types of packaging as the relative production costs shift.

Craft brewers, to take one important example, are rapidly switching from glass to aluminum, despite a supposed stigma surrounding canned beers. Some craft brewers (particularly the larger ones) package at least some of their beers in both types of containers, or sell some beers in glass and others in cans, while for many craft brewers it’s one or the other. Yet there’s no indication that craft beer consumption has fallen off because consumers won’t drink beer from cans in some situations — and obviously the prospect of such losses hasn’t stopped craft brewers from abandoning bottles entirely in favor of more economical cans, nor has it induced them, as a general rule, to offer both types of packaging.

A very short time ago it might have seemed that aluminum wasn’t in the same market as glass for craft beer packaging. But, as recent trends have borne out, that differentiation wasn’t primarily a function of consumer preference (either at the brewer or end-consumer level). Rather, it was a function of bottling/canning costs (until recently the machinery required for canning was prohibitively expensive), materials costs (at various times glass has been cheaper than aluminum, depending on volume), and transportation costs (which generally cut against glass, and which vary enough over time to shift the relative attractiveness of the different packaging materials). To be sure, consumer preference isn’t irrelevant, but the ease with which brewers have shifted consumer preferences suggests that it isn’t a strong constraint.

Transportation costs are key

Transportation costs, in fact, are a key part of the story — and of the conclusion that the Ball/Rexam merger is unlikely to have anticompetitive effects. First of all, transporting empty cans (or bottles, for that matter) is tremendously inefficient — which means that the relevant geographic markets for assessing the competitive effects of the Ball/Rexam merger are essentially the largely non-overlapping 200-mile circles around the companies’ manufacturing facilities. Because there are very few markets in which the two companies both have plants, the merger doesn’t change the extent of competition in the vast majority of relevant geographic markets.
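
A rough way to see what “largely non-overlapping” means in practice: if empty cans can be shipped economically only about 200 miles, two plants compete for the same customers only when their service circles intersect, which requires that the plants sit less than roughly 400 miles apart. The sketch below uses hypothetical coordinates (not actual Ball or Rexam plant locations) purely to illustrate that overlap test.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8
SERVICE_RADIUS_MILES = 200  # assumed economical shipping radius for empty cans

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def service_areas_overlap(plant_a, plant_b):
    """Two plants serve overlapping geographic markets only if their service
    circles intersect, i.e. the plants are less than twice the service radius apart."""
    return miles_between(*plant_a, *plant_b) < 2 * SERVICE_RADIUS_MILES

# Hypothetical plant coordinates (latitude, longitude), not actual Ball or Rexam sites.
plant_a = (39.76, -86.16)  # roughly Indianapolis, IN
plant_b = (35.23, -80.84)  # roughly Charlotte, NC

# These two hypothetical plants are about 430 miles apart, so their
# 200-mile service circles do not overlap.
print(service_areas_overlap(plant_a, plant_b))  # False
```
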

But transportation costs are also relevant to the interchangeability of packaging materials. Glass is more expensive to transport than aluminum, and this is true not just for empty bottles, but for full ones, of course. So, among other things, by switching to cans (even if it entails up-front cost), smaller breweries can expand their geographic reach, potentially expanding sales enough to more than cover switching costs. The merger would further lower the costs of cans (and thus of geographic expansion) by enabling beverage companies to transact with a single company across a wider geographic range.

The reality is that the most important factor in packaging choice is cost, and that the packaging alternatives are functionally interchangeable. As a result, and given that the direct consumers of beverage packaging are beverage companies rather than end-consumers, relatively small cost changes readily spur changes in packaging choices. While there are some switching costs that might impede these shifts, they are readily overcome. For large beverage companies that already use multiple types and sizes of packaging for the same product, the costs are trivial: They already have packaging designs, marketing materials, distribution facilities and the like in place. For smaller companies, a shift can be more difficult, but innovations in labeling, mobile canning/bottling facilities, outsourced distribution and the like significantly reduce these costs.  

“There’s a great future in plastics”

All of this is even more true for plastic — even in the beer market. In fact, in 2010, 10% of the beer consumed in Europe was sold in plastic bottles, as was 15% of all beer consumed in South Korea. We weren’t able to find reliable numbers for the U.S., but particularly for cheaper beers, U.S. brewers are increasingly moving to plastic. And plastic bottles are the norm at stadiums and arenas. Whatever the exact numbers, clearly plastic holds a small fraction of the beer container market compared to glass and aluminum. But that number is just as clearly growing, and as cost considerations impel them (and technology enables them), giant, powerful brewers like AB InBev and MillerCoors are certainly willing and able to push consumers toward plastic.

Meanwhile, soda companies like Coca-Cola and Pepsi have successfully moved their markets so that today a majority of packaged soda is sold in plastic containers. There’s no evidence that this shift came about as a result of end-consumer demand, nor that the shift to plastic was delayed by a lack of demand elasticity; rather, it was primarily a function of these companies’ ability to realize bigger profits on sales in plastic containers (not least because they own their own plastic packaging production facilities).

And while it’s not at issue in Ball/Rexam because spirits are rarely sold in aluminum packaging, the FTC’s conclusion in Ardagh that

[n]on-glass packaging materials, such as plastic containers, are not in this relevant product market because not enough spirits customers would switch to non-glass packaging materials to make a SSNIP in glass containers to spirits customers unprofitable for a hypothetical monopolist

is highly suspect — which suggests the Commission may have gotten it wrong in other ways, too. For example, as one report notes:

But the most noteworthy inroads against glass have been made in distilled liquor. In terms of total units, plastic containers, almost all of them polyethylene terephthalate (PET), have surpassed glass and now hold a 56% share, which is projected to rise to 69% by 2017.

True, most of this must be tiny-volume airplane bottles, but by no means all of it is, and it’s clear that the cost advantages of plastic are driving a shift in distilled liquor packaging, as well. Some high-end brands are even moving to plastic. Whatever resistance may have existed in the past because of glass’s “image” (and this is true for beer, too) is breaking down: Don’t forget that even high-quality wines are now often sold with screw-tops or even in boxes — something that was once thought impossible.
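
For readers who want the hypothetical-monopolist language quoted above unpacked, the standard critical-loss arithmetic makes the test concrete. The margin below is a purely hypothetical assumption, not a figure from the Ardagh record; the point is only that a 5% SSNIP on glass is unprofitable whenever more than the critical fraction of sales would shift to cans or plastic, which is why the substitution evidence discussed above matters.

```python
def critical_loss(ssnip, margin):
    """Break-even fraction of unit sales a hypothetical monopolist can lose
    before a small but significant non-transitory increase in price (SSNIP)
    becomes unprofitable. Both arguments are fractions of the current price."""
    return ssnip / (ssnip + margin)

ssnip = 0.05   # the conventional 5% price increase
margin = 0.30  # assumed price-cost margin for glass containers (hypothetical)

# If more than ~14% of glass sales would shift to cans or plastic in response to
# a 5% increase, the increase is unprofitable and a glass-only market is too narrow.
print(f"Critical loss: {critical_loss(ssnip, margin):.1%}")  # 14.3%
```
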

The overall point is that the beverage packaging market faced by can makers like Ball and Rexam is remarkably complex, and, crucially, the presence of powerful, vertically integrated customers means that past or current demand by end-users is a poor indicator of what the market will look like in the future as input costs and other considerations faced by these companies shift. Right now, for example, over 50% of the world’s soda is packaged in plastic bottles, and this share is set to increase: The global plastic packaging market (not limited to just beverages) is expected to grow at a CAGR of 5.2% between 2014 and 2020, while aluminum packaging is expected to grow at just 2.9%.
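
To put those growth rates on a common footing, compounding each projected CAGR over the six years from 2014 to 2020 implies total growth of roughly 36% for plastic packaging versus roughly 19% for aluminum; the arithmetic is simple.

```python
def cumulative_growth(cagr, years):
    """Total growth implied by compounding an annual growth rate over a period."""
    return (1 + cagr) ** years - 1

# Projected CAGRs cited above, compounded over 2014-2020 (six years).
print(f"Plastic packaging:  {cumulative_growth(0.052, 6):.0%}")  # ~36%
print(f"Aluminum packaging: {cumulative_growth(0.029, 6):.0%}")  # ~19%
```
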

A note on efficiencies

As noted above, the proposed Ball/Rexam merger also holds out the promise of substantial efficiencies (estimated at $300 million by the merging parties, due mainly to decreased transportation costs). There is a risk, however, that the FTC may effectively disregard those efficiencies, as it did in Ardagh (and in St. Luke’s before it), by saddling them with a higher burden of proof than it requires of its own prima facie claims. If the goal of antitrust law is to promote consumer welfare, competition authorities can’t ignore efficiencies in merger analysis.

In his Ardagh dissent, Commissioner Wright noted that:

Even when the same burden of proof is applied to anticompetitive effects and efficiencies, of course, reasonable minds can and often do differ when identifying and quantifying cognizable efficiencies as appears to have occurred in this case. My own analysis of cognizable efficiencies in this matter indicates they are significant. In my view, a critical issue highlighted by this case is whether, when, and to what extent the Commission will credit efficiencies generally, as well as whether the burden faced by the parties in establishing that proffered efficiencies are cognizable under the Merger Guidelines is higher than the burden of proof facing the agencies in establishing anticompetitive effects. After reviewing the record evidence on both anticompetitive effects and efficiencies in this case, my own view is that it would be impossible to come to the conclusions about each set forth in the Complaint and by the Commission — and particularly the conclusion that cognizable efficiencies are nearly zero — without applying asymmetric burdens.

The Commission shouldn’t make the same mistake here. In fact, here, where can manufacturers are squeezed between powerful companies both upstream (e.g., Alcoa) and downstream (e.g., AB InBev), and where transportation costs limit the opportunities for expanding the customer base of any particular plant, the ability to capitalize on economies of scale and geographic scope is essential to independent manufacturers’ abilities to efficiently meet rising demand.

Read our complete assessment of the merger’s effect here.