
At this point, only the most masochistic and cynical among DC's policy elite actually want the net neutrality conflict to continue. And yet, despite claims that net neutrality principles are critical to protecting consumers, passage of the current Congressional Review Act (“CRA”) disapproval resolution in Congress would undermine consumer protection and promise only to drag out the fight even longer.

The CRA resolution is primarily intended to roll back the FCC’s re-re-classification of broadband as a Title I service under the Communications Act in the Restoring Internet Freedom Order (“RIFO”). The CRA allows Congress to vote to repeal rules recently adopted by federal agencies; upon a successful CRA vote, the rules are rescinded and the agency is prohibited from adopting substantially similar rules in the future.

But, as TechFreedom has noted, it's not completely clear that a CRA resolution aimed at a regulatory classification decision will work quite the way Congress intends, and it could simply trigger more litigation cycles, largely because it is unclear what parts of the RIFO are actually “rules” subject to the CRA. Harold Feld has written a critique of TechFreedom's position, arguing, in effect, that of course the RIFO is a rule; TechFreedom responded with a pretty devastating rejoinder.

But this exchange really demonstrates TechFreedom's central argument: It is sufficiently unclear how or whether the CRA will apply to the various provisions of the RIFO that the only things the CRA is guaranteed to do are 1) strip consumers of certain important protections — it would take away the FCC's transparency requirements for ISPs and imperil privacy protections currently ensured by the FTC — and 2) prolong the already interminable litigation and political back-and-forth over net neutrality.

The CRA is political theater

The CRA resolution effort is not about good Internet regulatory policy; rather, it’s pure political opportunism ahead of the midterms. Democrats have recognized net neutrality as a good wedge issue because of its low political opportunity cost. The highest-impact costs of over-regulating broadband through classification decisions are hard to see: Rather than bad things happening, the costs arrive in the form of good things not happening. Eventually those costs work their way to customers through higher access prices or less service — especially in rural areas most in need of it — but even these effects take time to show up and, when they do, are difficult to pin on any particular net neutrality decision, including the CRA resolution. Thus, measured in electoral time scales, prolonging net neutrality as a painful political issue — even though actual resolution of the process by legislation would be the sensible course — offers tremendous upside for political challengers and little cost.  

The truth is, there is widespread agreement that net neutrality issues need to be addressed by Congress: A constant back and forth between the FCC (and across its own administrations) and the courts runs counter to the interests of consumers, broadband companies, and edge providers alike. Whatever that legislative solution ends up looking like, it would almost certainly be an improvement over the unstable status quo.

There have been various proposals from Republicans and Democrats — many of which contain provisions that are likely bad ideas — but in the end, a bill passed with bipartisan input should have the virtue of capturing an open public debate on the issue. Legislation won’t be perfect, but it will be tremendously better than the advocacy playground that net neutrality has become.

What would the CRA accomplish?

Regardless of what one thinks of the substantive merits of TechFreedom's arguments on the CRA and the arcana of legislative language distinguishing between agency “rules” and “orders,” if the CRA resolution is successful (a prospect that is a bit more likely following the Senate vote to pass it), what follows is pretty clear.

The only certain result of the CRA resolution becoming law would be to void the transparency provisions that the FCC introduced in the RIFO — the one part of the Order that is pretty clearly a “rule” subject to CRA review — and it would bar the FCC from adopting another transparency rule in its place. Everything else is going to end up — surprise! — before the courts, which would serve only to keep the issues surrounding net neutrality unsettled for another several years. (A cynic might suggest that this is, in fact, the goal of net neutrality proponents, for whom net neutrality has been and continues to have important political valence.)

And if the CRA resolution withstands the inevitable legal challenge to its rescission of the rest of the RIFO, it would also (once again) remove broadband privacy from the FTC's purview, placing it back in the lap of the FCC — which is already prohibited from adopting privacy rules following last year's successful CRA resolution undoing the Wheeler FCC's broadband privacy regulations. The result is that we could be left without any broadband privacy regulator at all — presumably not the outcome strong net neutrality proponents want — but they persevere nonetheless.

Moreover, TechFreedom’s argument that the CRA may not apply to all parts of the RIFO could have a major effect on whether or not Congress is even accomplishing anything at all (other than scoring political points) with this vote. It could be the case that the CRA applies only to “rules” and not “orders,” or it could be the case that even if the CRA does apply to the RIFO, its passage would not force the FCC to revive the abrogated 2015 Open Internet Order, as proponents of the CRA vote hope.

Whatever one thinks of these arguments, however, they are based on a sound reading of the law and present substantial enough questions to sustain lengthy court challenges. Thus, far from a CRA vote actually putting to rest the net neutrality issue, it is likely to spawn litigation that will drag out the classification uncertainty question for at least another year (and probably more, with appeals).

Stop playing net neutrality games — they aren’t fun

Congress needs to stop trying to score easy political points on this issue while avoiding the hard and divisive work of reaching a compromise on actual net neutrality legislation. Despite how the CRA is presented in the popular media, a CRA vote is the furthest thing from a simple vote for net neutrality: It’s a political calculation to avoid accountability.

I had the pleasure last month of hosting the first of a new annual roundtable discussion series on closing the rural digital divide through the University of Nebraska’s Space, Cyber, and Telecom Law Program. The purpose of the roundtable was to convene a diverse group of stakeholders — from farmers to federal regulators; from small municipal ISPs to billion dollar app developers — for a discussion of the on-the-ground reality of closing the rural digital divide.

The impetus behind the roundtable was, quite simply, that in my five years living in Nebraska I have consistently found that the discussions that we have here about the digital divide in rural America are wholly unlike those that the federally-focused policy crowd has back in DC. Every conversation I have with rural stakeholders further reinforces my belief that those of us who approach the rural digital divide from the “DC perspective” fail to appreciate the challenges that rural America faces or the drive, innovation, and resourcefulness that rural stakeholders bring to the issue when DC isn’t looking. So I wanted to bring these disparate groups together to see what was driving this disconnect, and what to do about it.

The unfortunate reality of the rural digital divide is that it is an existential concern for much of America. At the same time, the positive news is that closing this divide has become an all-hands-on-deck effort for stakeholders in rural America, one that defies caricatured political, technological, and industry divides. I have never seen as much agreement and goodwill among stakeholders in any telecom community as when I speak to rural stakeholders about digital divides. I am far from an expert in rural broadband issues — and I don’t mean to hold myself out as one — but as I have engaged with those who are, I am increasingly convinced that there are far more and far better ideas about closing the rural digital divide to be found outside the beltway than within.

The practical reality is that most policy discussions about the rural digital divide over the past decade have been largely irrelevant to the realities on the ground: The legal and policy frameworks focus on the wrong things, and participants in these discussions at the federal level rarely understand the challenges that define the rural divide. As a result, stakeholders almost always fall back on advocating stale, entrenched viewpoints that have little relevance to on-the-ground needs. (To their credit, both Chairman Pai and Commissioner Carr have demonstrated a longstanding interest in understanding the rural digital divide — an interest that is recognized and appreciated by almost every rural stakeholder I speak to.)

Framing Things Wrong

It is important to begin by recognizing that contemporary discussion about the digital divide is framed in terms of, and addressed alongside, longstanding federal Universal Service policy. This policy, which has its roots in the 20th century project of ensuring that all Americans had access to basic telephone service, is enshrined in the first words of the Communications Act of 1934. It has not significantly evolved from its origins in the analog telephone system — and that’s a problem.

A brief history of Universal Service

The Communications Act established the FCC

for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States … a rapid, efficient, Nation-wide, and world-wide wire and radio communication service ….

The historic goal of “universal service” has been to ensure that anyone in the country is able to connect to the public switched telephone network. In the telephone age, that network provided only one primary last-mile service: transmitting basic voice communications from the customer's telephone to the carrier's switch. Once at the switch, various other services could be offered — but providing them didn't require more than a basic analog voice circuit to the customer's home.

For most of the 20th century, this form of universal service was ensured by fiat and cost recovery. Regulated telephone carriers (that is, primarily, the Bell operating companies under the umbrella of AT&T) were required by the FCC to provide service to all comers, at published rates, no matter the cost of providing that service. In exchange, the carriers were allowed to recover the cost of providing service to high-cost areas through the regulated rates charged to all customers. That is, the cost of ensuring universal service was spread across and subsidized by the entire rate base.

This system fell apart following the break-up of AT&T in the 1980s. The separation of long distance from local exchange service meant that the main form of cross subsidy — from long distance to local callers — could no longer be handled implicitly. Moreover, as competitive exchange services began entering the market, they tended to compete first, and most, over the high-revenue customers who had supported the rate base. To accommodate these changes, the FCC transitioned from a model of implicit cross-subsidies to one of explicit cross-subsidies, introducing long distance access charges and termination fees that were regulated to ensure money continued to flow to support local exchange carriers’ costs of providing services to high-cost users.

The 1996 Telecom Act forced even more dramatic change. The goal of the 1996 Telecom Act was to introduce competition throughout the telecom ecosystem — but the traditional cross-subsidy model doesn’t work in a competitive market. So the 1996 Telecom Act further evolved the FCC’s universal service mechanism, establishing the Universal Service Fund (USF), funded by fees charged to all telecommunications carriers, which would be apportioned to cover the costs incurred by eligible telecommunications carriers in providing high-cost (and other “universal”) services.

The problematic framing of Universal Service

For present purposes, we need not delve into these mechanisms. Rather, the very point of this post is that the interminable debates about these mechanisms — who pays into the USF and how much; who gets paid out of the fund and how much; and what services and technologies the fund covers — simply don’t match the policy challenges of closing the digital divide.

What the 1996 Telecom Act does offer is a statement of the purposes of Universal Service. In 47 USC 254(b)(3), the Act states the purpose of ensuring “Access in rural and high cost areas”:

Consumers in all regions of the Nation, including low-income consumers and those in rural, insular, and high cost areas, should have access to telecommunications and information services … that are reasonably comparable to those services provided in urban areas ….

This is a problematic framing. (I would actually call it patently offensive…). It is a framing that made sense in the telephone era, when ensuring last-mile service meant providing only basic voice telephone service. In that era, having any service meant having all service, and the primary obstacles to overcome were the high cost of service to remote areas and the lower revenues expected from lower-income areas. But its implicit suggestion is that the goal of federal policy should be to make rural America look like urban America.

Today, however, universal service — at least from the perspective of closing the digital divide — means something different. The technological needs of rural America are different from those of urban America; the technological needs of poor and lower-income America are different from those of rich America. Framing the goal in terms of making sure rural and lower-income America have access to the same services as urban and wealthy America is, by definition, not responsive to (or respectful of) the needs of those who are on the wrong side of one of this country's many digital divides. Indeed, that goal almost certainly distracts from and misallocates resources that could be better leveraged towards closing these divides.

The Demands of Rural Broadband

Rural broadband needs are simultaneously both more and less demanding than the services we typically focus on when discussing universal service. The services that we fund, and the way that we approach closing digital divides, need to be based in the first instance on the actual needs of the community that connectivity is meant to serve. Take just two of the prototypical examples: precision and automated farming, and telemedicine.

Assessing rural broadband needs

Precision agriculture requires different networks than does watching Netflix, web surfing, or playing video games. Farms with hundreds or thousands of sensors and other devices per acre can put significant load on networks — but not in terms of bandwidth. The load is instead measured in terms of packets and connections per second. Provisioning networks to handle lots of small packets is very different from provisioning them to handle other, more-typical (to the DC crowd), use cases.

On the other end of the agricultural spectrum, many farms don't own their own combines. Combines cost upwards of a million dollars. One modern combine is sufficient to tend to several hundred acres in a given farming season. It is common for many farmers to hire someone who owns a combine to service their fields. During harvest season, for instance, one combine service may operate on a dozen farms. Prior to operation, modern precision systems need to download a great deal of GIS, mapping, weather, crop, and other data. High-speed Internet can literally mean the difference between letting a combine sit idle for many days of a harvest season while it downloads data and servicing enough fields to cover the debt payments on a million-dollar piece of equipment.

Going to the other extreme, rural health care relies upon Internet connectivity — but not in the ways it is usually discussed. The stories one hears on the ground aren't about the need for particularly high-speed connections or specialized low-latency connections to allow remote doctors to control surgical robots. While tele-surgery and access to highly specialized doctors are important applications of telemedicine, the urgent needs today are far more modest: simple video consultations with primary care physicians for routine care, requiring only a moderate-speed Internet connection capable of basic video conferencing. In reality, literally a few megabits per second (not even 10 Mbps) can mean the difference between a remote primary care physician being able to provide basic health services to a rural community and that community going entirely unserved by a doctor.

Efforts to run gigabit connections and dedicated fiber to rural health care facilities may be a great long-term vision — but the on-the-ground need could be served by a reliable 4G wireless connection or DSL line. (Again, to their credit, this is a point that Chairman Pai and Commissioner Carr have been highlighting in their recent travels through rural parts of the country.)

Of course, rural America faces many of the same digital divides found elsewhere. Even in the wealthiest cities in Nebraska, for instance, significant numbers of students are eligible for free or reduced-price school lunches — a metric that corresponds with income — and rely on anchor institutions for Internet access. The problem is worse in much of rural Nebraska, where there may simply be no Internet access at all.

Addressing rural broadband needs

Two things in particular have struck me as I have spoken to rural stakeholders about the digital divide. The first is that this is an “all hands on deck” problem. Everyone I speak to understands the importance of the issue. Everyone is willing to work with and learn from others. Everyone is willing to commit resources and capital to improve upon the status quo, including by undertaking experiments and incurring risks.

The discussions I have in DC, however, including with and among key participants in the DC policy firmament, are fundamentally different. These discussions focus on tweaking contribution factors and cost models to protect or secure revenues; they are, in short, missing the forest for the trees. Meanwhile, the discussion on the ground focuses on how to actually deploy service and overcome obstacles. No amount of cost-model tweaking will do much at all to accomplish either of these.

The second striking, and rather counterintuitive, thing that I have often heard is that closing the rural digital divide isn’t (just) about money. I’ve heard several times the lament that we need to stop throwing more money at the problem and start thinking about where the money we already have needs to go. Another version of this is that it isn’t about the money, it’s about the business case. Money can influence a decision whether to execute upon a project for which there is a business case — but it rarely creates a business case where there isn’t one. And where it has created a business case, that case was often for building out relatively unimportant networks while increasing the opportunity costs of building out more important networks. The networks we need to build are different from those envisioned by the 1996 Telecom Act or FCC efforts to contort that Act to fund Internet build-out.

Rural Broadband Investment

There is, in fact, a third particularly striking thing I have gleaned from speaking with rural stakeholders, and rural providers in particular: They don’t really care about net neutrality, and don’t see it as helpful to closing the digital divide.  

Rural providers, it must be noted, are generally “pro net neutrality,” in the sense that they don’t think that ISPs should interfere with traffic going over their networks; in the sense that they don’t have any plans themselves to engage in “non-neutral” conduct; and also in the sense that they don’t see a business case for such conduct.

But they are also wary of Title II regulation, or of other rules that are potentially burdensome or that introduce uncertainty into their business. They are particularly concerned that Title II regulation opens the door to — and thus creates significant uncertainty about the possibility of — other forms of significant federal regulation of their businesses.

More than anything else, they want to stop thinking, talking, and worrying about net neutrality regulations. Ultimately, the past decade of fights about net neutrality has meant little other than regulatory cost and uncertainty for them, which makes planning and investment difficult — hardly a boon to closing the digital divide.

The basic theory of the Wheeler-era FCC’s net neutrality regulations was the virtuous cycle — that net neutrality rules gave edge providers the certainty they needed in order to invest in developing new applications that, in turn, would drive demand for, and thus buildout of, new networks. But carriers need certainty, too, if they are going to invest capital in building these networks. Rural ISPs are looking for the business case to justify new builds. Increasing uncertainty has only negative effects on the business case for closing the rural digital divide.

Most crucially, the logic of the virtuous cycle is virtually irrelevant to driving demand for closing the digital divide. Edge innovation isn’t going to create so much more value that users will suddenly demand that networks be built; rather, the applications justifying this demand already exist, and most have existed for many years. What stands in the way of the build-out required to service under- or un-served rural areas is the business case for building these (expensive) networks. And the uncertainty and cost associated with net neutrality only exacerbate this problem.

Indeed, rural markets are an area where the virtuous cycle very likely turns in the other direction. Rural communities are actually hotbeds of innovation. And they know their needs far better than Silicon Valley edge companies, so they are likely to build apps and services that better cater to the unique needs of rural America. But these apps and services aren't going to be built unless their developers have access to the broadband connections needed to build and maintain them, and, most important of all, unless users have access to the broadband connections needed to actually make use of them. The upshot is that, in rural markets, connectivity precedes and drives the supply of edge services — not, as the Wheeler-era virtuous cycle would have it, the other way around.

The effect of Washington's obsession with net neutrality these past many years has been to increase uncertainty and reduce the business case for building new networks. And its detrimental effects continue today with politicized and showboating efforts to invoke the Congressional Review Act in order to make a political display of the 2017 Restoring Internet Freedom Order. Back in the real world, however, none of this helps to provide rural communities with the type of broadband services they actually need, and the effect is only to worsen the rural digital divide, both politically and technologically.

The Road Ahead …?

The story told above is not a happy one. Closing digital divides, and especially closing the rural digital divide, is one of the most important legal, social, and policy challenges this country faces. Yet the discussion about these issues in DC reflects little of the on-the-ground reality. Rather, advocates in DC attack a straw man of the rural digital divide, using it as a foil to protect and advocate for their pet agendas. If anything, the discussion in DC distracts attention and diverts resources from productive ideas.

To end on a more positive note, some are beginning to recognize the importance and direness of the situation. I have noted several times the work of Chairman Pai and Commissioner Carr. Indeed, the first time I met Chairman Pai was when I had the opportunity to accompany him, back when he was Commissioner Pai, on a visit through Diller, Nebraska (pop. 287). More recently, there has been bipartisan recognition of the need for new thinking about the rural digital divide. In February, for instance, a group of Democratic senators asked President Trump to prioritize rural broadband in his infrastructure plans. And the following month Congress enacted, and the President signed, legislation that, among other things, funded a $600 million pilot program to award grants and loans for rural broadband buildout through the Department of Agriculture's Rural Utilities Service. But both of these efforts rely too heavily on throwing money at the rural divide (speaking of the recent legislation, the head of one Nebraska-based carrier building out service in rural areas lamented that it's just another effort to give carriers cheap money, which doesn't do much to help close the divide!). It is, nonetheless, good to see urgent calls for and an interest in experimenting with new ways to deliver assistance in closing the rural digital divide. We need more of this sort of bipartisan thinking and willingness to experiment with new modes of meeting this challenge — and less advocacy for stale, entrenched viewpoints that have little relevance to the on-the-ground reality of rural America.

Some view a host of social ills allegedly related to the large size of firms like Amazon as an occasion to call for the company's breakup. And, unfortunately, these critics find an unlikely ally in President Trump, whose tweet storms claim that tech platforms are too big and extract unfair rents at the expense of small businesses. But these critics are wrong: Amazon is not a dangerous monopoly, and it certainly should not be broken up.

Of course, no one really spells out what it means for these companies to be “too big.” Even Barry Lynn, a champion of the neo-Brandeisian antitrust movement, has shied away from specifics. The best that emerges when probing his writings is that he favors something like a return to Joe Bain’s “Structure-Conduct-Performance” paradigm (but even here, the details are fuzzy).

The reality of Amazon's impact on the market is quite different from that asserted by its critics. Amazon has had decades in which it could have executed a nefarious scheme to raise prices and reap the benefits of anticompetitive behavior. Yet it keeps putting downward pressure on prices in a way that seems to be commoditizing goods instead of building anticompetitive moats.

Amazon Does Not Anticompetitively Exercise Market Power

Twitter rants aside, more serious attempts to attack Amazon on antitrust grounds argue that it is engaging in pricing that is “predatory.” But “predatory pricing” requires a specific demonstration of factors — which, to date, have not been demonstrated — in order to justify legal action. Absent a showing of these factors, it has long been understood that seemingly “predatory” conduct is unlikely to harm consumers and often actually benefits consumers.

One important requirement that has gone unsatisfied is that a firm engaging in predatory pricing must have market power. Contrary to common characterizations of Amazon as a retail monopolist, its market power is less than it seems. By no means does it control retail in general. Rather, less than half of all online commerce (44%) takes place on its platform (and that number represents only 4% of total US retail commerce). Of that 44 percent, a significant portion is attributable to the merchants who use Amazon as a platform for their own online retail sales. Rather than abusing a monopoly market position to predatorily harm its retail competitors, at worst Amazon has created a retail business model that puts pressure on other firms to offer more convenience and lower prices to their customers. This is what we want and expect of competitive markets.

The claims leveled at Amazon are the intellectual kin of the ones made against Walmart during its ascendancy that it was destroying main street throughout the nation. In 1993, it was feared that Walmart’s quest to vertically integrate its offerings through Sam’s Club warehouse operations meant that “[r]etailers could simply bypass their distributors in favor of Sam’s — and Sam’s could take revenues from local merchants on two levels: as a supplier at the wholesale level, and as a competitor at retail.” This is a strikingly similar accusation to those leveled against Amazon’s use of its Seller Marketplace to aggregate smaller retailers on its platform.

But, just as in 1993 with Walmart, and now with Amazon, the basic fact remains that consumer preferences shift. Firms need to alter their behavior to satisfy their customers, not pretend they can change consumer preferences to suit their own needs. Preferring small, local retailers to Amazon or Walmart is a decision for individual consumers interacting in their communities, not for federal officials figuring out how best to pattern the economy.

All of this is not to say that Amazon is not large, or important, or that, as a consequence of its success, it does not exert influence over the markets it operates in. But having influence through success is not the same as anticompetitively asserting market power.

Other criticisms of Amazon focus on its conduct in specific vertical markets in which it does have more significant market share. For instance, a UK Liberal Democrat leader recently claimed that “[j]ust as Standard Oil once cornered 85% of the refined oil market, today… Amazon accounts for 75% of ebook sales … .”

The problem with this concern is that Amazon's conduct in the ebook market has had, on net, pro-competitive, not anti-competitive, effects. Amazon's behavior in the ebook market has actually increased demand for books overall (and expanded output), increased the amount that consumers read, and decreased the price of these books. Amazon is now even opening physical bookstores. In her widely cited article last year, Lina Khan made much hay of the claim that this is all part of a grand strategy to predatorily push competitors out of the market:

The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws.

But it's hard to allege predation in a market when over the past twenty years Amazon has consistently expanded output and lowered overall prices in the book market. Courts and lawmakers have sought to craft laws that encourage firms to provide consumers with more choices at lower prices — a feat that Amazon repeatedly accomplishes. To describe this conduct as anticompetitive is to demand a legal standard that is at odds with the goal of benefiting consumers. It is to claim that Amazon has a contradictory duty to benefit both consumers and its shareholders, while also making sure that all of its less successful competitors stay in business.

But far from creating a monopoly, the empirical reality appears to be that Amazon is driving categories of goods, like books, closer to the textbook model of commodities in a perfectly competitive market. Hardly an antitrust violation.

Amazon Should Not Be Broken Up

“Big is bad” may roll off the tongue, but, as a guiding ethic, it makes for terrible public policy. Amazon’s size and success are a direct result of its ability to enter relevant markets and to innovate. To break up Amazon, or any other large firm, is to punish it for serving the needs of its consumers.

None of this is to say that large firms are incapable of causing harm or acting anticompetitively. But we should expect calls for dramatic regulatory intervention — especially from those in a position to influence regulatory or market reactions to such calls — to be supported by substantial factual evidence and sound legal and economic theory.

This tendency to go after large players is nothing new. As noted above, Walmart triggered many similar concerns a quarter century ago. At the time, pundits feared that competing directly with Walmart was fruitless:

In the spring of 1992 Ken Stone came to Maine to address merchant groups from towns in the path of the Wal-Mart advance. His advice was simple and direct: don’t compete directly with Wal-Mart; specialize and carry harder-to-get and better-quality products; emphasize customer service; extend your hours; advertise more — not just your products but your business — and perhaps most pertinent of all to this group of Yankee individualists, work together.

And today, some think it would be similarly pointless to compete with Amazon:

Concentration means it is much harder for someone to start a new business that might, for example, try to take advantage of the cheap housing in Minneapolis. Why bother when you know that if you challenge Amazon, they will simply dump your product below cost and drive you out of business?

The interesting thing to note, of course, is that Walmart is now desperately trying to compete with Amazon. But despite being very successful in its own right, and having strong revenues, Walmart doesn’t seem able to keep up.

Some small businesses will close as new business models emerge and consumer preferences shift. This is to be expected in a market driven by creative destruction. Once upon a time Walmart changed retail and improved the lives of many Americans. If our lawmakers can resist the urge to intervene without real evidence of harm, Amazon just might do the same.

The paranoid style is endemic across the political spectrum, for sure, but lately, in the policy realm haunted by the shambling zombie known as “net neutrality,” the pro-Title II set is taking the rhetoric up a notch. This time the problem is, apparently, that the FCC is not repealing Title II classification fast enough, which surely must mean … nefarious things? Actually, the truth is probably much simpler: the Commission has many priorities and is just trying to move its docket items along by the numbers in order to avoid the relentless criticism that it's just trying to favor ISPs.

Motherboard, picking up on a post by Harold Feld, has opined that the FCC has not yet published its repeal date for the OIO rules in the Federal Register because

the FCC wanted more time to garner support for their effort to pass a bogus net neutrality law. A law they promise will “solve” the net neutrality feud once and for all, but whose real intention is to pre-empt tougher state laws, and block the FCC’s 2015 rules from being restored in the wake of a possible court loss…As such, it’s believed that the FCC intentionally dragged out the official repeal to give ISPs time to drum up support for their trojan horse.

To his credit, Feld admits that this theory is mere “guesses and rank speculation” — but it’s nonetheless disappointing that Motherboard picked this speculation up, described it as coming from “one of the foremost authorities on FCC and telecom policy,” and then pushed the narrative as though it were based on solid evidence.

Consider the FCC’s initial publication in the Federal Register on this topic:

Effective date: April 23, 2018, except for amendatory instructions 2, 3, 5, 6, and 8, which are delayed as follows. The FCC will publish a document in the Federal Register announcing the effective date(s) of the delayed amendatory instructions, which are contingent on OMB approval of the modified information collection requirements in 47 CFR 8.1 (amendatory instruction 5). The Declaratory Ruling, Report and Order, and Order will also be effective upon the date announced in that same document.

To translate this into plain English, the FCC is waiting until OMB signs off on its replacement transparency rules before it repeals the existing rules. Feld is skeptical of this approach, calling it “highly unusual” and claiming that “[t]here is absolutely no reason for FCC Chairman Ajit Pai to have stretched out this process so ridiculously long.” That may be one, arguably valid interpretation, but it’s hardly required by the available evidence.

The 2015 Open Internet Order (“2015 OIO”) had a very long lead time for its implementation. The Restoring Internet Freedom Order (“RIF Order”) was (to put it mildly) created during a highly contentious process. There are very good reasons for the Commission to take its time and make sure it dots its i’s and crosses its t’s. To do otherwise would undoubtedly invite nonstop caterwauling from Title II advocates who felt the FCC was trying to rush through the process. Case in point: as he criticizes the Commission for taking too long to publish the repeal date, Feld simultaneously criticizes the Commission for rushing through the RIF Order.

The Great State Law Preemption Conspiracy

Trying to string together some sort of logical or legal justification for this conspiracy theory, the Motherboard article repeatedly adverts to the ongoing (and probably fruitless) efforts of states to replicate the 2015 OIO in their legislatures:

In addition to their looming legal challenge, ISPs are worried that more than half the states in the country are now pursuing their own net neutrality rules. And while ISPs successfully lobbied the FCC to include language in their repeal trying to ban states from protecting consumers, their legal authority on that front is dubious as well.

It would be a nice story, if it were at all plausible. But, while it’s not a lock that the FCC’s preemption of state-level net neutrality bills will succeed on all fronts, it’s a surer bet that, on the whole, states are preempted from their activities to regulate ISPs as common carriers. The executive action in my own home state of New Jersey is illustrative of this point.

The governor signed an executive order in February that attempts to end-run the FCC's rules by exercising New Jersey's power as a purchaser of broadband services. In essence, the executive order requires that any subsidiary of the state government that purchases broadband connectivity do so only from “ISPs that adhere to ‘net neutrality’ principles.” It's probably fine for New Jersey, in its own contracts, to require certain terms from ISPs that affect state agencies of New Jersey directly. But it's probably impermissible for those contractual requirements to be used as a lever to force ISPs to apply net neutrality principles to third parties (i.e., New Jersey's citizens).

Paragraphs 190-200 of the RIF Order are pretty clear on this:

We conclude that regulation of broadband Internet access service should be governed principally by a uniform set of federal regulations, rather than by a patchwork of separate state and local requirements…Allowing state and local governments to adopt their own separate requirements, which could impose far greater burdens than the federal regulatory regime, could significantly disrupt the balance we strike here… We therefore preempt any state or local measures that would effectively impose rules or requirements that we have repealed or decided to refrain from imposing in this order or that would impose more stringent requirements for any aspect of broadband service that we address in this order.

The U.S. Constitution is likewise clear on the issue of federal preemption, as a general matter: “laws of the United States… [are] the supreme law of the land.” And well over a decade ago, the Supreme Court held that the FCC was entitled to determine the broadband classification for ISPs (in that case, upholding the FCC’s decision to regulate ISPs under Title I, just as the RIF Order does). Further, the Court has also held that “the statutorily authorized regulations of an agency will pre-empt any state or local law that conflicts with such regulations or frustrates the purposes thereof.”

The FCC chose to re(re)classify broadband as a Title I service. Arguably, this could be framed as deregulatory, even though broadband is still regulated, just more lightly. But even if it were a full, explicit deregulation, that would not provide a hook for states to step in, because the decision to deregulate an industry has “as much pre-emptive force as a decision to regulate.”

Actions like those of the New Jersey governor have a bit more wiggle room in the legal interpretation because the state is acting as a “market participant.” So long as New Jersey's actions are confined solely to its own subsidiaries, as a purchaser of broadband service it can put restrictions or requirements on how that service is provisioned. But as soon as a state tries to use its position as a market participant to create a de facto regulatory effect where it was not permitted to legislate explicitly, it runs afoul of federal preemption law.

Thus, it’s most likely the case that states seeking to impose “measures that would effectively impose rules or requirements” are preempted, and any such requirements are therefore invalid.

Jumping at Shadows

So why are the states bothering to push for their own version of net neutrality? The New Jersey order points to one highly likely answer:

the Trump administration’s Federal Communications Commission… recently illustrated that a free and open Internet is not guaranteed by eliminating net neutrality principles in a way that favors corporate interests over the interests of New Jerseyans and our fellow Americans[.]

Basically, it’s all about politics and signaling to a base that thinks that net neutrality somehow should be a question of political orientation instead of network management and deployment.

Midterms are coming up and some politicians think that net neutrality will make for an easy political position. After all, net neutrality is a relatively low-cost political position to stake out because, for the most part, the downsides of getting it wrong are just higher broadband costs and slower rollout. And given that the unseen costs of bad regulation are rarely recognized by voters, even getting it wrong is unlikely to come back to haunt an elected official (assuming the Internet doesn’t actually end).

There is no great conspiracy afoot. Everyone thinks that we need federal legislation to finally put the endless net neutrality debates to rest. If the FCC takes an extra month to make sure it's not leaving gaps in regulation, that does not mean the FCC is buying time for ISPs. In the end, simple politics explains state actions, and the normal (if often unsatisfying) back and forth of the administrative state explains the FCC's decisions.

Farewell

Alden Abbott —  29 March 2018

On Monday, April 2, I will leave the Heritage Foundation to enter federal government service.  Accordingly, today I am signing off as a regular contributor to Truth on the Market.  First and foremost, I owe a great debt of gratitude to Geoff Manne, who was kind enough to afford me access to TOTM.  Geoff’s outstanding leadership has made TOTM the leading blog site bringing to bear sound law and economics insights on antitrust and related regulatory topics.  I was also privileged to have the opportunity to work on an article with TOTM stalwart Thom Lambert, whose concise book How To Regulate is by far the best general resource on sound regulatory principles (it should sit on the desk of the head of every regulatory agency).  I have also greatly benefited from the always insightful analyses of fellow TOTM bloggers Allen Gibby, Eric Fruits, Joanna Shepherd, Kristian Stout, Mike Sykuta, and Neil Turkewitz.  Thanks to all!  I look forward to continuing to seek enlightenment at truthonthemarket.com.

If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.

Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.

In 2017, he published “The Power of Bias in Economics Research.” He recently talked to Russ Roberts on the EconTalk podcast about his research and what it means for economics.

He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post hopes to provide a simplified explanation of these issues and why bias and power matter.

What is bias?

We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.

In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.

Publication bias

Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.

Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.

The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).

“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.

On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger's 1994 study finding that a minimum wage hike was associated with an increase in employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it's much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.

Low power

A study with low statistical power has a reduced chance of detecting a true effect.

Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.

[Figure: Type I vs. Type II errors]

An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) are designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.

Failing to convict a guilty person is also a serious error, but it is generally agreed to be less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative,” and the probability of making a Type II error is beta.

By now, it should be clear there's a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be shown mathematically (though not here) that, all else equal, a reduction in the probability of a Type I error is associated with an increase in the probability of a Type II error.
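To make that trade-off concrete, here is a minimal numerical sketch of my own (not from the post), using a one-sided z-test with a hypothetical effect size and sample size: as alpha is tightened, the critical value rises, beta rises, and power falls.

```python
# Hypothetical one-sided z-test: assumed true effect of 0.5 standard deviations, n = 25.
# Tightening alpha raises the bar for rejecting the null, which raises beta (Type II error).
from scipy.stats import norm

effect, sigma, n = 0.5, 1.0, 25      # assumed true effect, noise, and sample size
shift = effect * n ** 0.5 / sigma    # how far the truth sits from the null, in standard errors

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)            # stricter alpha -> higher critical value
    beta = norm.cdf(z_crit - shift)         # P(fail to reject | the effect is real)
    print(f"alpha = {alpha:.2f}  beta = {beta:.3f}  power = {1 - beta:.3f}")
```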

Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.

In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated elasticity of employment with respect to the minimum wage is zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
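As a sketch of that testing procedure, here is a small simulation of my own (made-up data and an arbitrary log-log specification, not the studies discussed below): regress log employment on the log minimum wage and reject the null of “no impact” only if the coefficient's p-value falls below alpha = 0.05.

```python
# Simulated data only: the "true" elasticity is set to -0.3 by construction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
log_min_wage = rng.normal(2.0, 0.2, size=200)                         # hypothetical regressor
log_employment = 5.0 - 0.3 * log_min_wage + rng.normal(0, 0.3, 200)   # built-in elasticity of -0.3

X = sm.add_constant(log_min_wage)
fit = sm.OLS(log_employment, X).fit()

elasticity, p_value = fit.params[1], fit.pvalues[1]
alpha = 0.05
print(f"estimated elasticity = {elasticity:.3f}, p-value = {p_value:.4f}")
print("reject H0: no impact" if p_value < alpha else "fail to reject H0: no impact")
```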

Now, we’re getting to power.

Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.

By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent and there are 100 true effects to be found, those studies will detect only about 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only about 20 of them guilty.

Suppose we expect that 25 percent of those accused of a crime are truly guilty. Thus the odds of guilt are R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05, and conclude the accused is guilty if our test statistic yields p < 0.05. Using Ioannidis’ formula for positive predictive value, we find:

  • If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
  • If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.

In other words, a guilty verdict produced by a low power test is more likely to be a conviction of an innocent person than a guilty verdict produced by a high power test.
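A minimal sketch of that calculation, assuming the no-bias version of Ioannidis' positive predictive value formula, PPV = (power × R) / (power × R + alpha), reproduces the two numbers above:

```python
# PPV: the probability that a "positive" (a guilty verdict, or a statistically
# significant finding) reflects a true effect, ignoring any bias term.
def ppv(power: float, R: float, alpha: float) -> float:
    return power * R / (power * R + alpha)

R = 0.25 / 0.75   # 25% of the accused are truly guilty -> pre-test odds of 1 to 3
alpha = 0.05

print(f"power = 0.20 -> PPV = {ppv(0.20, R, alpha):.0%}")  # roughly 57%
print(f"power = 0.80 -> PPV = {ppv(0.80, R, alpha):.0%}")  # roughly 84%
```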

In our minimum wage example, a statistically significant relationship between a change in the minimum wage and employment is more likely to be spurious when it comes from a low power study than from a high power study. By extension, even if a relationship truly exists, a low power study is more likely to overstate its size than a high power study is. The figure below demonstrates this phenomenon.

[Figure: funnel graph of minimum wage elasticity estimates plotted against study precision]

Across the 1,424 studies surveyed, the average elasticity of employment with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate that, among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.

(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)
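For concreteness, here is a minimal sketch of the gap between a simple and a precision-weighted average, using made-up numbers (not the actual 1,424-study dataset) and treating each study's precision as the inverse of its standard error; imprecise studies reporting large effects drag the simple average down, while the weighted average is dominated by the precise studies.

```python
# Hypothetical (elasticity, standard error) pairs: the noisy studies report big effects.
import numpy as np

studies = np.array([(-0.45, 0.30), (-0.30, 0.20), (-0.05, 0.02), (-0.02, 0.01)])
elasticities, std_errs = studies[:, 0], studies[:, 1]

simple_avg = elasticities.mean()
precision = 1.0 / std_errs                                   # assumed definition of "precision"
weighted_avg = np.average(elasticities, weights=precision)

print(f"simple average:   {simple_avg:.3f}")    # pulled toward the imprecise, dramatic estimates
print(f"weighted average: {weighted_avg:.3f}")  # dominated by the precise estimates
```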

Is economics bogus?

It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:

Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.

For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:

[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.

Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields—social psychology is one of the worst. However, these are longer-term solutions.

In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”

 

Introduction

Last week I attended the 17th Annual Conference of the International Competition Network (ICN) held in New Delhi, India from March 21-23.  The Delhi Conference highlighted the key role of the ICN in promoting global convergence toward “best practices” in substantive and procedural antitrust analysis by national antitrust (“competition”) agencies.  The ICN operates as a virtual network of competition agencies and expert “non-governmental advisers” (NGAs), not governments.  As such, the ICN promulgates “recommended practices,” provides online training and other assistance to competition agencies, and serves as a forum for the building of relationships among competition officials (an activity which facilitates cooperation on particular matters and the exchange of advice on questions of antitrust policy and administration).  There is a general consensus among competition agencies and NGAs (I am one) that the ICN has accomplished a great deal since its launch in 2001 – indeed, it has far surpassed expectations.  Although (not surprisingly) inter-jurisdictional differences in perspective on particular competition issues remain, the ICN has done an excellent job in helping ensure that national competition agencies understand each other as they carry out their responsibilities.  By “speaking a common antitrust language,” informed by economic reasoning, agencies are better able to cooperate on individual matters and evaluate the merits of potential changes in law and procedure.

Pre-ICN Program Hosted by Competition Policy International (CPI)

Special one-day programs immediately preceding the ICN have proliferated in recent years.  On March 20, I participated in the small group one-day program hosted by Competition Policy International (CPI), attended by senior competition agency officials, private practitioners, and scholars.  This program featured a morning roundtable covering problems of extraterritoriality and an afternoon roundtable focused on competition law challenges in the digital economy.

The extraterritoriality session reflected the growing number of competition law matters (particularly cartels and mergers) that have effects in multiple jurisdictions.  There appeared to be general support for the proposition that a competition authority should impose remedies that have extraterritorial application only to the extent necessary to remedy harm to competition within the enforcing jurisdiction.  There also was a general consensus that it is very difficult for a competition authority to cede enforcement jurisdiction to a foreign authority, when the first authority finds domestic harm attributable to extraterritorial conduct and has the ability to assert jurisdiction.  Thus, although efforts to promote comity in antitrust enforcement are worthwhile, it must be recognized that there are practical limitations to such initiatives.  As such, a focus on enhancing coordination and cooperation among multiple agencies investigating the same conduct will be of paramount importance.

The digital economy roundtable directed particular attention to enforcement challenges raised by Internet “digital platforms” (e.g., Google, Facebook, Amazon).  In particular, with respect to digital platforms, roundtable participants discussed whether new business models and disruptive innovations create challenges to existing competition law and practices; what recent technology changes portend for market definition, assessment of market power, and other antitrust enforcement concepts; whether new analytic tools are required; and what are good mechanisms to harmonize regulation and competition enforcement.  Although there was no overall consensus on these topics, there was robust discussion of multi-sided market analysis and differences in approach to digital platform oversight.

An ICN Conference Overview

As in recent years, the ICN Conference itself featured set-piece (no Q&A) plenary sessions involving colloquies among top agency officials regarding cartels, unilateral conduct, mergers, advocacy, and agency effectiveness – the areas covered during the year by the ICN’s specialized working groups.  Numerous break-out sessions allowed ICN delegates to discuss in detail particular developments in these areas, and to evaluate and hash out the relative merits of competing approaches to problems.  At least seven generalizations can be drawn from the Delhi Conference’s deliberations.

First, other international organizations that initially had kept their distance from the ICN, specifically the OECD, the World Bank, and UNCTAD, now engage actively with the ICN.  This is a very positive development indeed.  Research carried out by the OECD on competition policy – for example, on the economic evaluation of regulatory approaches (important for competition advocacy), digital platforms, and public tenders – has been injected as “policy inputs” to discrete ICN initiatives.  Annual Competition advocacy contests cosponsored by the ICN and the World Bank have enabled a large number of agencies (particularly in developing countries) to showcase their successes in helping improve the competitive climate within their jurisdictions.  UNCTAD initiatives on competition and economic development can be effectively presented to new competition agencies through ICN involvement.

Second, competition authorities are focusing more intensively on “vertical mergers” involving firms at different levels of the distribution chain.  The ICN can help agencies be attentive to the need to weigh procompetitive efficiencies as well as new theories of anticompetitive harm in investigating these mergers.

Third, the transformation of economies worldwide through the Internet and the “digital revolution” is posing new challenges – and opportunities – for enforcers.  Policy analysis, informed by economics, is evolving in this area.

Fourth, investigations of cartels and bid rigging (collusion in public tenders was the showcase “special project” at the Delhi Conference) remain as significant as ever.  Thinking on the administration of government leniency programs and “ex officio” investigations aimed at ferreting out cartels continues to be refined.

Fifth, the continuing growth in the number and scope of competition laws and the application of those laws to international commerce places a premium on enhanced coordination among competition agencies.  The ICN’s role in facilitating such cooperation thus assumes increased importance.

Sixth, issues of due process, or procedural fairness, commendably are generally recognized as important elements of effective agency administration.  Nevertheless, the precise contours of due process, and its specific application, are not uniform across agencies, and merit continued exploration by the ICN.

Seventh, the question of whether considerations beyond purely economic ones (such as fairness, corporate size, and the rights of workers) should be incorporated into competition analysis is gaining increased traction in a number of jurisdictions, and undoubtedly will be a subject of considerable debate in the years to come.

Conclusion

The ICN is by now a mature organization.  As a virtual network that relies on the power to persuade, not to dictate, it is dynamic, not static.  The ICN continues to respond flexibly to the changing needs of its many members and to global economic developments, within the context of the focused work carried out by its various substantive and process-related working groups.  The Delhi Conference provided a welcome opportunity for a timely review of its accomplishments and an assessment of its future direction.  In short, the ICN remains a highly useful vehicle for welfare-enhancing “soft convergence” among competition law regimes.

 

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect, without authorization, information on 50 million Facebook users. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.
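A rough back-of-the-envelope check, using the figures reported above, shows why this mechanism scales so easily; the snippet below is nothing more than illustrative arithmetic.

    # How many friends per quiz-taker would it take for 270,000 users to expose
    # 50 million profiles? (Both figures come from the reporting above.)
    exposed_profiles = 50_000_000
    quiz_takers = 270_000
    print(exposed_profiles / quiz_takers)  # ~185, ignoring overlap between friend lists

That works out to about 185 profiles per quiz-taker, a figure comparable to an ordinary Facebook friend list, so no exotic technique was needed to reach 50 million people.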

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about these terms of service and of these data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica, and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections or that when Cambridge Analytica started working for the Trump campaign it was tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchant, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms may delineate and allow users to allocate their privacy interests is, on the one hand, that competition among platforms would promote desirable outcomes and that, on the other hand, the relatively limited and monopolistic competition that we see among firms like Facebook is one of the reasons that consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook to either make users’ data exportable in a standardized form so that users could easily migrate it to other platforms or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.

If there is one lesson from the past decade more trenchant than that delineating privacy rights is difficult, it is that data security is even harder.

These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only works to allow trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.
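To see why portability and privacy pull against each other here, consider a purely hypothetical sketch of what a “portable” social-graph record might look like; every field name below is an illustrative assumption, not any real Facebook API.

    # Hypothetical export of a single consenting user's social-graph data.
    # The structure and field names are assumptions for illustration only.
    alice_export = {
        "user": "alice",
        "likes": ["hiking", "a political rally"],
        "friends": [
            {"name": "bob", "likes": ["concerts"]},
            {"name": "carol", "likes": ["certain activities"]},
        ],
    }

    # The export of one consenting user necessarily carries data about
    # people who never agreed to any export at all.
    print([friend["name"] for friend in alice_export["friends"]])  # ['bob', 'carol']

Any standardized export format or interoperability API that includes the friends portion of the graph reproduces, by design and at scale, the same third-party disclosure problem at the heart of the Cambridge Analytica story.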

Excess is unflattering, no less when claiming that every evolution in legal doctrine is a slippery slope leading to damnation. In Friday’s New York Times, Lina Khan trots down this alarmist path while considering the implications for the pending Supreme Court case of Ohio v. American Express. One of the core issues in the case is the proper mode of antitrust analysis for credit card networks as two-sided markets. The Second Circuit Court of Appeals agreed with arguments, such as those that we have made, that it is important to consider the costs and benefits to both sides of a two-sided market when conducting an antitrust analysis. The Second Circuit’s opinion is under review in the American Express case.

Khan regards the Second Circuit approach of conducting a complete analysis of these markets as a mistake.

On her reading, the idea that an antitrust analysis of credit card networks should reflect their two-sided-ness would create “de facto antitrust immunity” for all platforms:

If affirmed, the Second Circuit decision would create de facto antitrust immunity for the most powerful companies in the economy. Since internet technologies have enabled the growth of platform companies that serve multiple groups of users, firms like Alphabet, Amazon, Apple, Facebook, and Uber are set to be prime beneficiaries of the Second Circuit’s warped analysis. Amazon, for example, could claim status as a two-sided platform because it connects buyers and sellers of goods; Google because it facilitates a market between advertisers and search users… Indeed, the reason that the tech giants are lining up behind the Second Circuit’s approach is that — if ratified — it would make it vastly more difficult to use antitrust laws against them.

This paragraph is breathtaking. First, its basic premise is wrong. Requiring a complete analysis of the complicated economic effects of conduct undertaken in two-sided markets before imposing antitrust liability would not create “de facto antitrust immunity.” It would require that litigants present, and courts evaluate, credible evidence sufficient to establish a claim upon which an enforcement action can be taken — just like in any other judicial proceeding in any area of law. Novel market structures may require novel analytical models and novel evidence, but that is no different with two-sided markets than with any other complicated issue before a court.

Second, the paragraph’s prescribed response would be, in fact, de facto antitrust liability for any firm competing in a two-sided market — that is, as Khan notes, almost every major tech firm.

A two-sided platform competes with other platforms by facilitating interactions between the two sides of the market. This often requires a careful balancing of the market: in most of these markets, too many or too few participants on one side of the market reduce participation on the other side. So these platforms play the role of matchmaker, charging one side of the market a premium in order to cross-subsidize a desirable level of participation on the other. This will be discussed more below, but the takeaway for now is that most of these platforms operate by charging one side of the market (or some participants on one side of the market) an above-cost price in order to charge the other side of the market a below-cost price. A platform’s strategy on either side of the market makes no sense without the other, and it does not adopt practices on one side without carefully calibrating them with the other. If one does not consider both sides of these markets, therefore, the simplistic approach that Khan demands will systematically fail to capture both the intent and the effect of business practices in these markets. More importantly, such an approach could be used to find antitrust violations throughout these industries — no matter the state of competition, market share, or actual consumer effects.

What are two-sided markets?

Khan notes that there is some element of two-sidedness in many (if not most) markets:

Indeed, almost all markets can be understood as having two sides. Firms ranging from airlines to meatpackers could reasonably argue that they meet the definition of “two-sided,” thereby securing less stringent review.

This is true, as far as it goes, as any sale of goods likely involves the selling party acting as some form of intermediary between chains of production and consumption. But such a definition is unworkably broad from the point of view of economic or antitrust analysis. If two-sided markets exist as distinct from traditional markets there must be salient features that define those specialized markets.

Economists have been intensively studying two-sided markets (see, e.g., here, here, and here) for the past two decades (and had recognized many of their basic characteristics even before then). As Khan notes, multi-sided platforms have indeed existed for a long time in the economy. Newspapers, for example, provide a targeted outlet for advertisers and incentives for subscribers to view advertisements; shopping malls aggregate retailers in one physical location to lower search costs for customers, while also increasing the retailers’ sales volume. Relevant here, credit card networks are two-sided platforms, facilitating credit-based transactions between merchants and consumers.

One critical feature of multi-sided platforms is the interdependent demand of platform participants. Thus, these markets require a simultaneous critical mass of users on each side in order to ensure the viability of the platform. For instance, a credit card is unlikely to be attractive to consumers if few merchants accept it; and few merchants will accept a credit card that isn’t used by a sufficiently large group of consumers. To achieve critical mass, a multi-sided platform uses both pricing and design choices, and, without critical mass on all sides, the positive feedback effects that enable the platform’s unique matching abilities might not be achieved.

This highlights the key distinction between traditional markets and multi-sided markets. Most markets have two sides (e.g., buyers and sellers), but that alone doesn’t make them meaningfully multi-sided. In a multi-sided market a key function of the platform is to facilitate the relationship between the sides of the market in order to create and maintain an efficient relationship between them. The platform isn’t merely a reseller of a manufacturer’s goods, for instance, but is actively encouraging or discouraging participation by users on both sides of the platform in order to maximize the value of the platform itself — not the underlying transaction — for those users. Consumers, for instance, don’t really care how many pairs of jeans a clothier stocks; but a merchant does care how many cardholders an issuer has on its network. This is most often accomplished by using prices charged to each side (in the case of credit cards, so-called interchange fees) to keep each side an appropriate size.

Moreover, the pricing that occurs on a two-sided platform is secondary, to a varying extent, to the pricing of the subject of the transaction. In a two-sided market, the prices charged to either side of the market express the platform’s ability to control the terms on which the different sides meet to transact; they are relatively indifferent to the thing about which the parties are transacting.

The nature of two-sided markets highlights the role of these markets as more like facilitators of transactions and less like traditional retailers of goods (though this distinction is a matter of degree, and different two-sided markets can be more-or-less two-sided). Because the platform uses prices charged to each side of the market in order to optimize overall use of the platform (that is, output or volume of transactions), pricing in these markets operates differently than pricing in traditional markets. In short, the pricing on one side of the platform is often used to subsidize participation on the other side of the market, because the overall value to both sides is increased as a result. Or, conversely, pricing to one side of the market may appear to be higher than the equilibrium level when viewed for that side alone, because this funds a subsidy to increase participation on another side of the market that, in turn, creates valuable network effects for the side of the market facing the higher fees.
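A stylized numerical sketch, with purely hypothetical fee levels, shows why looking at one side of such a platform in isolation can mislead.

    # Stylized, hypothetical per-transaction pricing for a card platform.
    merchant_fee = 0.030       # 3.0% of each transaction charged to merchants
    cardholder_reward = 0.015  # 1.5% rebated to cardholders as rewards
    platform_cost = 0.010      # 1.0% cost of operating the network

    # Viewed from the merchant side alone, the fee looks far above cost...
    merchant_side_margin = merchant_fee - platform_cost
    # ...but most of that margin funds the cardholder subsidy that generates
    # the transaction volume in the first place.
    net_platform_margin = merchant_fee - cardholder_reward - platform_cost

    print(round(merchant_side_margin, 3), round(net_platform_margin, 3))  # 0.02 0.005

On these assumed numbers, an analysis confined to the merchant side would see a two-percentage-point markup and miss that three-quarters of it is passed through as a subsidy to the other side of the platform.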

The result of this dynamic is that it is more difficult to assess the price and output effects in multi-sided markets than in traditional markets. One cannot look at just one side of the platform — at the level of output and price charged to consumers of the underlying product, say — but must look at the combined pricing and output of both the underlying transaction as well as the platform’s service itself, across all sides of the platform.

Thus, as David Evans and Richard Schmalensee have observed, traditional antitrust reasoning is made more complicated in the presence of a multi-sided market:

[I]t is not possible to know whether standard economic models, often relied on for antitrust analysis, apply to multi-sided platforms without explicitly considering the existence of multiple customer groups with interdependent demand…. [A] number of results for single-sided firms, which are the focus of much of the applied antitrust economics literature, do not apply directly to multi-sided platforms.

The good news is that antitrust economists have been focusing significant attention on two- and multi-sided markets for a long while. Their work has included attention to modelling the dynamics and effects of competition in these markets, including how to think about traditional antitrust concepts such as market definition, market power and welfare analysis. What has been lacking, however, has been substantial incorporation of this analysis into judicial decisions. Indeed, this is one of the reasons the Second Circuit’s opinion in this case was, and the Supreme Court’s opinion will be, so important: this work has reached the point that courts are recognizing that these markets can and should be analyzed differently than traditional markets.

Getting the two-sided analysis wrong in American Express would harm consumers

Khan describes credit card networks as a “classic case of oligopoly,” and opines that American Express’s contractual anti-steering provision is, “[a]s one might expect, the credit card companies us[ing] their power to block competition.” The initial, inherent tension in this statement should be obvious: the assertion is simultaneously that this is a non-competitive, oligopolistic market and that American Express is using the anti-steering provision to harm its competitors. Indeed, rather than demonstrating a classic case of oligopoly, this demonstrates the competitive purpose that the anti-steering provision serves: facilitating competition between American Express and other card issuers.

The reality of American Express’s anti-steering provision, which prohibits merchants who choose to accept AmEx cards from “steering” their customers to pay for purchases with other cards, is that it is necessary in order for American Express to compete with other card issuers. Just like analysis of multi-sided markets needs to consider all sides of the market, platforms competing in these markets need to compete on all sides of the market.

But the use of complex pricing schemes to determine prices on each side of the market to maintain an appropriate volume of transactions in the overall market creates a unique opportunity for competitors to behave opportunistically. For instance, if one platform charges a high fee to one side of the market in order to subsidize another side of the market (say, by offering generous rewards), this creates an opportunity for a savvy competitor to undermine that balancing by charging the first side of the market a lower fee, thus attracting consumers from its competitor and, perhaps, making its pricing strategy unprofitable. This may appear to be mere price competition. But the effects of price competition on one side of a multi-sided market are more complicated to evaluate than those of traditional price competition.

Generally, price competition has the effect of lowering prices for goods, increasing output, decreasing deadweight losses, and benefiting consumers. But in a multi-sided market, the high prices charged to one side of the market can be used to benefit consumers on the other side of the market; and that consumer benefit can increase output on that side of the market in ways that create benefits for the first side of the market. When a competitor poaches a platform’s business on a single side of a multi-sided market, the effects can be negative for users on every side of that platform’s market.

This is most often seen in cases, like with credit cards, where platforms offer differentiated products. American Express credit cards are qualitatively different than Visa and Mastercard credit cards; they charge more (to both sides of the market) but offer consumers a more expansive rewards program (funded by the higher transaction fees charged to merchants) and offer merchants access to what are often higher-value customers (ensured by the higher fees charged to card holders).

If American Express did not require merchants to abide by its anti-steering rule, it wouldn’t be able to offer this form of differentiated product; it would instead be required to compete solely on price. There are cardholders who prefer higher-status cards with a higher tier of benefits, and there are merchants that prefer to attract a higher-value pool of customers.

But without the anti-steering provisions, the only competition available is on the price to merchants. The anti-steering rule is needed in order to prevent merchants from free-riding on American Express’s investment in attracting a unique group of card holders to its platform. American Express maintains that differentiation from other cards by providing its card holders with unique and valuable benefits — benefits that are subsidized in part by the fees charged to merchants. But merchants that attract customers by advertising that they accept American Express cards but who then steer those customers to other cards erode the basis of American Express’s product differentiation. Because of the interdependence of both sides of the platform, this ends up undermining the value that consumers receive from the platform as American Express ultimately withdraws consumer-side benefits. In the end, the merchants who valued American Express in the first place are made worse off by virtue of being permitted to selectively free-ride on American Express’s network investment.

At this point it is important to note that many merchants continue to accept American Express cards notwithstanding both the cards’ higher merchant fees and these anti-steering provisions. Meanwhile, Visa and Mastercard have much larger market shares, and many merchants do not accept Amex. The fact that merchants who may be irritated by the anti-steering provision continue to accept Amex despite it being more costly, and the fact that they could readily drop Amex and rely on other, larger, and cheaper networks, suggests that American Express creates real value for these merchants. In other words, American Express, in fact, must offer merchants access to a group of consumers who are qualitatively different from those who use Visa or Mastercard cards — and access to this group of consumers must be valuable to those merchants.

An important irony in this case is that those who criticize American Express’s practices, who are arguing these practices limit price competition and that merchants should be able to steer customers to lower-fee cards, generally also argue that modern antitrust law focuses too myopically on prices and fails to account for competition over product quality. But that is precisely what American Express is trying to do: in exchange for a higher price it offers a higher quality card for consumers, and access to a different quality of consumers for merchants.

Anticompetitive conduct here, there, everywhere! Or nowhere.

The good news is that many on the court — and, for that matter, even Ohio’s own attorney — recognize that the effects of the anti-steering rule on the cardholder side of the market need to be considered alongside their effects on merchants:

JUSTICE KENNEDY: Does output include premiums or rewards to customers?
MR. MURPHY: Yeah. Output would include quality considerations as well.

The bad news is that several justices don’t seem to get it. Justice Kagan, for instance, suggested that “the effect of these anti-steering provisions means a market where we will only have high-cost/high-service products.” Justice Kagan’s assertion reveals the hubris of the would-be regulator, bringing to her evaluation of the market a preconception of what that market is supposed to look like. To wit: following her logic, one can say just as much that without the anti-steering provisions we would have a market with only low-cost/low-service products. Without an evaluation of the relative effects — which is more complicated than simple intuition suggests, especially since one can always pay cash — there is no reason to say that either of these would be a better outcome.

The reality, however, is that it is possible for the market to support both high- and low-cost, and high- and low-service products. In fact, this is the market in which we live today. As Justice Gorsuch said, “American Express’s agreements don’t affect MasterCard or Visa’s opportunity to cut their fees … or to advertise that American Express’s are higher. There is room for all kinds of competition here.” Indeed, one doesn’t need to be particularly creative to come up with competitive strategies that other card issuers could adopt, from those that Justice Gorsuch suggests, to strategies where card issuers are, in fact, “forced” to accept higher fees, which they in turn use to attract more card holders to their networks, such as through sign-up bonuses or awards for American Express customers who use non-American Express cards at merchants who accept them.

A standard response to such proposals is “if that idea is so good, why isn’t the market already doing it?” An important part of the answer in this case is that MasterCard and Visa know that American Express relies on the anti-steering provision in order to maintain its product differentiation.

Visa and Mastercard were initially defendants in this case as well, as they used similar rules to differentiate some of their products. It’s telling that these larger market participants settled because, to some extent, harming American Express is worth more to them than their own product differentiation. After all, consumers steered away from American Express will generally use Visa or Mastercard (and their own high-priced cards may be cannibalizing their own low-priced cards anyway, so reducing their value may not hurt so much). It is therefore a better strategy for them to try to use the courts to undermine that provision so that they don’t actually need to compete with American Express.

Without the anti-steering provision, American Express loses its competitive advantage compared to MasterCard and Visa and would be forced to compete against those much larger platforms on their preferred terms. What’s more, this would give those platforms access to American Express’s vaunted high-value card holders without the need to invest resources in competing for them. In other words, outlawing anti-steering provisions could in fact have both anti-competitive intent and effect.

Of course, card networks aren’t necessarily innocent of anticompetitive conduct, one way or the other. Showing whether they are requires a sufficiently comprehensive analysis of the industry and its participants’ behavior on either side of the anti-steering rule. But liability cannot be simply determined based on behavior on one side of a two-sided market. These companies can certainly commit anticompetitive mischief, and they need to be held accountable when that happens. But this case is not about letting American Express or tech companies off the hook for committing anticompetitive conduct. This case is about how we evaluate such allegations, weigh them against possible beneficial effects, and put in place the proper thorough analysis for this particular form of business.

Over the last two decades, scholars have studied the nature of multi-sided platforms, and have made a good deal of progress. We should rely on this learning, and make sure that antitrust analysis is sound, not expedient.

The U.S. Federal Trade Commission’s (FTC) well-recognized expertise in assessing unfair or deceptive acts or practices can play a vital role in policing abusive broadband practices.  Unfortunately, however, because Section 5(a)(2) of the FTC Act exempts common carriers from the FTC’s jurisdiction, serious questions have been raised about the FTC’s authority to deal with unfair or deceptive practices in cyberspace that are carried out by common carriers, but involve non-common-carrier activity (in contrast, common carrier services have highly regulated terms and must be made available to all potential customers).

Commendably, the Ninth Circuit held on February 26, in FTC v. AT&T Mobility, that harmful broadband data throttling practices by a common carrier were subject to the FTC’s unfair acts or practices jurisdiction, because the common carrier exception is “activity-based,” and the practices in question did not involve common carrier services.  Key excerpts from the summary of the Ninth Circuit’s opinion follow:

The en banc court affirmed the district court’s denial of AT&T Mobility’s motion to dismiss an action brought by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act, alleging that AT&T’s data-throttling plan was unfair and deceptive. AT&T Mobility’s data-throttling is a practice by which the company reduced customers’ broadband data speed without regard to actual network congestion. Section 5 of the FTC Act gives the agency enforcement authority over “unfair or deceptive acts or practices,” but exempts “common carriers subject to the Acts to regulate commerce.” 15 U.S.C. § 45(a)(1), (2). AT&T moved to dismiss the action, arguing that it was exempt from FTC regulation under Section 5. . . .

The en banc court held that the FTC Act’s common carrier exemption was activity-based, and therefore the phrase “common carriers subject to the Acts to regulate commerce” provided immunity from FTC regulation only to the extent that a common carrier was engaging in common carrier services. In reaching this conclusion, the en banc court looked to the FTC Act’s text, the meaning of “common carrier” according to the courts around the time the statute was passed in 1914, decades of judicial interpretation, the expertise of the FTC and Federal Communications Commission (“FCC”), and legislative history.

Addressing the FCC’s order, issued on March 12, 2015, reclassifying mobile data service from a non-common carriage service to a common carriage service, the en banc court held that the prospective reclassification order did not rob the FTC of its jurisdiction or authority over conduct occurring before the order. Accordingly, the en banc court affirmed the district court’s denial of AT&T’s motion to dismiss.

A key introductory paragraph in the Ninth Circuit’s opinion underscores the importance of the court’s holding for sound regulatory policy:

This statutory interpretation [that the common carrier exception is activity-based] also accords with common sense. The FTC is the leading federal consumer protection agency and, for many decades, has been the chief federal agency on privacy policy and enforcement. Permitting the FTC to oversee unfair and deceptive non-common-carriage practices of telecommunications companies has practical ramifications. New technologies have spawned new regulatory challenges. A phone company is no longer just a phone company. The transformation of information services and the ubiquity of digital technology mean that telecommunications operators have expanded into website operation, video distribution, news and entertainment production, interactive entertainment services and devices, home security and more. Reaffirming FTC jurisdiction over activities that fall outside of common-carrier services avoids regulatory gaps and provides consistency and predictability in regulatory enforcement.

But what can the FTC do about unfair or deceptive practices affecting broadband services, offered by common carriers, subsequent to the FCC’s 2015 reclassification of mobile data service as a common carriage service?  The FTC will be able to act, assuming that the Federal Communications Commission’s December 2017 rulemaking, reclassifying mobile broadband Internet access service as not involving a common carrier service, passes legal muster (as it should).  In order to avoid any legal uncertainty, however, Congress could take the simple step of eliminating the FTC Act’s common carrier exception – an outdated relic that threatens to generate disparate enforcement outcomes toward the same abusive broadband practice, based merely upon whether the parent company is deemed a “common carrier.”